Biological to Artificial and Back: How a Core AI Algorithm May Work in the Brain

Blame is the main game when it comes to learning.

I know that sounds bizarre, but hear me out. Neural circuits of thousands of neurons, if not more, control every single one of your thoughts, reasoning, and behaviors. Take sewing a face mask as an example: somehow, a set of neurons has to link up in specific ways to make sure you don’t poke your finger with a sharp needle. You’ll fail at the beginning before gradually getting better at protecting your hand while sewing uniform stitches efficiently.

So the question is, of the neurons that eventually allow you to sew with ease, which ones, or which connections between them, were to blame initially for your injuries? And are those the same ones responsible for your eventual proficiency? How exactly does the brain learn from its mistakes?

In a new paper, some of the brightest minds in AI—including Dr. Geoffrey Hinton, the godfather of deep learning, and folks at DeepMind, the poster child of neuro-AI crossovers—argue that the ideas behind a core algorithm that drives deep learning also operate within the brain. The algorithm, called backpropagation, was the spark that ignited the current deep learning revolution and turned it into the de facto powerhouse of machine learning. At its core, “backprop” is an extremely effective way to assign blame to connections in artificial neural networks and drive better learning outcomes. While there’s no solid proof yet that the algorithm also operates in the brain, the authors lay out several ideas that neuroscientists could potentially test in living brain tissue.

It’s a highly controversial idea, partly because it was floated years ago by AI researchers and dismissed by neuroscientists as “biologically impossible.” Yet recently, deep learning techniques and neuroscience principles have become increasingly intertwined in a constructive feedback loop of ideas. As the authors argue, now may be a good time to revisit the possibility that backpropagation—the heart of deep learning—also exists, in some form, in biological brains.

“We think that backprop offers a conceptual framework for understanding how the cortex learns, but many mysteries remain with regard to how the brain could approximate it,” the authors conclude. If true, it would mean that the principles our biological brains came up with for designing artificial ones loosely mirror, incredibly, the ones evolution slowly sculpted into our own brains through genes. AI, a product of our brains, would then become a way to understand a core mystery of how we learn.

Let’s Talk Blame

The neuroscience dogma of learning in the brain is the idea of “fire together, wire together.” In essence, during learning, neurons will connect to each other through synapses into a network, which slowly refines itself and allows us to learn a task—like sewing a mask.

But how exactly does that work? A neural network is kind of like a democracy of individuals who are only in contact with their neighbors. Any single neuron only receives input from its upstream partners, and passes information along to its downstream ones. In neuroscience parlance, how strong these connections are depends on “synaptic weights”—think of each weight as a firmer or looser handshake, or transfer of information. A stronger synaptic weight isn’t always better. The main point of learning is to somehow “tune” the weights of the entire population so that the outcome is the one we want—that is, stitching cloth rather than pricking your finger.
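
To make “synaptic weights” a bit more concrete, here’s a toy sketch in Python. It’s purely illustrative (the neuron, the weights, and the numbers are all made up), but it shows the basic idea: the same inputs produce very different responses depending on how the weights are set, and learning amounts to finding the right set of weights.

```python
import numpy as np

# A toy "neuron": its response is a weighted sum of its inputs, squashed
# through a nonlinearity. The weights stand in for synaptic strengths.
def neuron_response(inputs, weights):
    return np.tanh(np.dot(inputs, weights))

inputs = np.array([0.8, 0.2, 0.5])             # signals from three upstream neurons

loose_handshakes = np.array([0.1, 0.1, 0.1])   # weak connections
firm_handshakes = np.array([0.9, -0.4, 0.6])   # a mix of strong and inhibitory connections

print(neuron_response(inputs, loose_handshakes))  # small response
print(neuron_response(inputs, firm_handshakes))   # very different response to the same inputs
```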

Think of it as a voting scenario in which neurons are individual voters who are socially isolated and only in contact with their immediate neighbors. The community, as a whole, knows who they want to vote for. But then an opponent gets elected—so the question is, where did things go awry, and how can the network as a whole fix it?

It’s obviously not a perfect analogy, but it does illustrate the problem of assigning blame. Neuroscientists can generally agree that neural networks adjust synaptic weights of their neuron members to “push” the outcome towards something better—a process we call “learning.” But in order to adjust weights, first the network has to know which connections to adjust.

Enter backpropagation. In deep learning, where multiple layers of artificial neurons are connected to each other, the same blame problem exists. Back in 1986, Hinton and his colleagues David Rumelhart and Ronald Williams found that by observing how far a network’s output misses its desired mark, it’s possible to mathematically compute an error signal. This signal can then be passed back through the network’s layers, with each layer receiving its own error signal derived from the layer above it. Hence the name “backpropagation.”

It’s kind of like five people passing a basketball down a line, and the last throw misses. The coach—in this case, backpropagation—will start from the final player, judge how much of the miss was that player’s fault, and move back down the line to figure out who needs adjustment. In an artificial neural network, “adjustment” means changing the synaptic weight.

The next step is for the network to compute the same problem again. This time around, the ball goes in. That means whatever adjustments the backprop coach made worked. The network will adopt the new synaptic weights, and the learning cycle continues.
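
Here’s a bare-bones sketch of that cycle in Python with NumPy. Consider it a hedged illustration rather than the paper’s method: the tiny two-layer network, the single training example, and the numbers are invented, but the mechanics are standard backprop. The output’s miss becomes an error signal, the error is handed backward layer by layer, and each layer’s weights get nudged to shrink it.

```python
import numpy as np

rng = np.random.default_rng(0)

# A tiny two-layer network: 3 inputs -> 4 hidden units -> 1 output.
W1 = rng.normal(scale=0.5, size=(3, 4))
W2 = rng.normal(scale=0.5, size=(4, 1))

x = np.array([[0.8, 0.2, 0.5]])   # one input example
y = np.array([[1.0]])             # the outcome we want
lr = 0.1                          # how big each adjustment is

for step in range(100):
    # Forward pass: information flows layer by layer to produce an output.
    h = np.tanh(x @ W1)
    out = h @ W2

    # How far did the output miss its mark?
    error = out - y

    # Backward pass: hand the error back layer by layer, so each set of
    # weights learns how much "blame" it carries for the miss.
    grad_W2 = h.T @ error
    error_at_hidden = (error @ W2.T) * (1 - h**2)
    grad_W1 = x.T @ error_at_hidden

    # Nudge the weights in the direction that shrinks the error.
    W2 -= lr * grad_W2
    W1 -= lr * grad_W1

print(out[0, 0])  # after the loop, the output should land close to the target of 1.0
```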

Backprop in the Brain?

Sound like a logical way of learning? Totally! Backprop, in combination with other algorithms, has made deep learning the dominant technique behind facial recognition, language translation, and AI’s wins against humans in Go and poker.

“The reality is that in deep neural networks, learning by following the gradient of a performance measure works really well,” the authors said. Our only other measure of efficient learning is our own brain—so is there any chance that the ideas behind backprop also exist in the brain?

Thirty years ago, the answer was a firm “hell no.” There are many reasons, but a main one is that artificial neural networks aren’t set up the way biological ones are, and the way backprop works mathematically just can’t be translated literally to what we know about our own brains. For example, backprop requires an error signal to travel along the same paths as the initial “feed-forward” computation—that is, the information pathway that initially generated the result—but our brains aren’t wired that way.

The algorithm also changes synaptic weights through a direct feedback signal. Biological neurons, in general, don’t. They can change their connections through more input, or other types of regulation—hormones, chemical transmitters, and whatnot—but using the same physical branches and synapses for both forward and feedback signals, while not getting them mixed up, was considered impossible. Add to that the fact that synapses are literally where our brains store data, and the problem becomes even more complicated.

The authors of the new paper have a rather elegant solution. The key is not to take backprop literally, but to adopt its main principles. Here are two examples.

One: even if the brain can’t physically use backprop-style feedback signals to change its synaptic weights, we do know that it uses other mechanisms to change its connections. Rather than an entire biological network using the final outcome to adjust synaptic weights at every level, the authors argue, the brain could instead alter the ability of neurons to fire—and in turn, locally change synaptic weights so that the next time around, you don’t prick your finger. It may sound like nit-picking, but the shift turns something essentially impossible in the brain into an idea that could work, based on what we know about brain computation.
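
To give a flavor of what such a local mechanism might look like, here’s a hedged toy in Python. It isn’t the specific scheme proposed in the paper; it just shows a synaptic update that uses only locally available quantities, with the difference between a neuron’s ordinary firing and a feedback-“nudged” firing level standing in for an explicit error signal.

```python
import numpy as np

def local_update(pre_activity, firing, nudged_firing, lr=0.1):
    # The change at each synapse uses only local information: the upstream
    # activity and how much feedback nudged this neuron's firing.
    return lr * np.outer(pre_activity, nudged_firing - firing)

pre = np.array([0.8, 0.2, 0.5])   # activity of three upstream neurons
firing = np.array([0.3])          # the neuron's ordinary response
nudged = np.array([0.7])          # what feedback nudges the response toward

print(local_update(pre, firing, nudged))  # one weight change per synapse, no global error signal needed
```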

As for the problem of neural branches supporting both feedforward “computing” signals and feedback “adjustment” signals, the authors argue that recent findings in neuroscience show that neurons aren’t a uniform blob when it comes to computation. Rather, they’re divided into compartments, with each compartment receiving different inputs and computing in slightly different ways. This means it’s not crazy to hypothesize that neurons could simultaneously support and integrate multiple types of signals—including error signals—while maintaining their memory and computational prowess.
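
Here’s a cartoon of that compartmentalized-neuron idea in Python. Again, it’s a rough sketch of the general hypothesis, not any model from the paper: one compartment collects feedforward input, another collects feedback, and the two signals coexist without overwriting each other.

```python
import numpy as np

class TwoCompartmentNeuron:
    """A cartoon of a compartmentalized neuron, not a biophysical model."""

    def __init__(self, n_feedforward, n_feedback, seed=1):
        rng = np.random.default_rng(seed)
        self.w_ff = rng.normal(size=n_feedforward)  # synapses on the "basal" compartment
        self.w_fb = rng.normal(size=n_feedback)     # synapses on the "apical" compartment

    def respond(self, ff_input, fb_input):
        basal = np.dot(self.w_ff, ff_input)    # feedforward "computing" signal
        apical = np.dot(self.w_fb, fb_input)   # feedback "adjustment" signal
        # The cell fires based on its feedforward drive but keeps the feedback
        # signal around, where it could guide local synaptic changes.
        return np.tanh(basal), apical

neuron = TwoCompartmentNeuron(n_feedforward=3, n_feedback=2)
output, teaching_signal = neuron.respond(np.array([0.8, 0.2, 0.5]), np.array([0.1, 0.9]))
print(output, teaching_signal)
```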

That’s the simple distillation. Many more details are explained in the paper, which makes a good read. For now, the idea of backprop-like signals in the brain remains a conjecture; neuroscientists will have to carry out wet lab experiments to see if empirical data supports the idea. If the theory actually plays out in the brain, however, it’s another layer—perhaps an extremely fundamental one—that links biological learning with AI. It would be a level of convergence previously unimaginable.

Image Credit: Gerd Altmann from Pixabay

Shelly Fan (https://neurofantastic.com/)
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.