Deep Learning Networks Can’t Generalize—But They’re Learning From the Brain

“Bias” in AI is often treated as a dirty word. But to Dr. Andreas Tolias at the Baylor College of Medicine in Houston, Texas, bias may also be the solution to smarter, more human-like AI.

I’m not talking about societal biases—racial or gender, for example—that are passed on to our machine creations. Rather, it’s a type of “beneficial” bias present in the structure of a neural network and how it learns. Similar to genetic rules that help initialize our brains well before birth, “inductive bias” may help narrow down the infinite ways artificial minds develop; for example, guiding them down a “developmental” path that eventually makes them more flexible.

It’s not an intuitive idea. Unconstrained by evolution, AI has the potential to churn through vast amounts of data to surpass our puny, fatty central processors. Yet even as a single algorithm beats humans at a specific problem—chess, Go, Dota, breast cancer diagnosis—humans kick their butts every time when it comes to a brand new task. Somehow, the innate structure of our brains, when combined with a little worldly experience, lets us easily generalize one solution to the next. State-of-the-art deep learning networks can’t.

In a new paper published in Neuron, Tolias and colleagues in Germany argue that more data or more layers in artificial neural networks isn’t the answer. Rather, the key is to introduce inductive biases—somewhat analogous to an evolutionary drive—that nudge algorithms towards the ability to generalize across drastic changes.

“The next generation of intelligent algorithms will not be achieved by following the current strategy of making networks larger or deeper. Perhaps counter-intuitively, it might be the exact opposite…we need to add more bias to the class of [AI] models,” the authors said.

What better source of inspiration than our own brains?

Moravec’s Paradox

The problem of AI being a one-trick pony dates back decades, long before deep learning took the world by storm. Even as Deep Blue trounced Kasparov in their legendary man versus machine chess match, AI researchers knew they were in trouble. “It’s comparatively easy to make computers exhibit adult-level performance on intelligence tests or playing checkers, and difficult or impossible to give them the skills of a one-year-old when it comes to perception and mobility,” said computer scientist Hans Moravec, who, along with Marvin Minsky and others, famously articulated the idea that now bears his name.

Simply put: AIs can’t easily translate their learning from one situation to another, even when they’ve mastered a single task. The paradox held true for hand-coded algorithms, and it remains an untamed beast even with the meteoric rise of machine learning. Splattering an image with noise, for example, easily trips up a well-trained recognition algorithm. Humans, in contrast, can subconsciously recognize a partially occluded face at a different angle, under different lighting, or through snow and rain.
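If you want to see that brittleness for yourself, here is a rough toy sketch (my own illustration, not an experiment from the paper) that perturbs an image with Gaussian noise and checks whether a stock pretrained classifier changes its mind. It assumes a recent version of PyTorch and torchvision, and the image path is just a placeholder.

```python
# Toy demonstration of noise fragility; not from the paper.
# Assumes torchvision >= 0.13; "cat.jpg" is a placeholder image path.
import torch
from PIL import Image
from torchvision import models, transforms

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

x = preprocess(Image.open("cat.jpg")).unsqueeze(0)  # the clean image
x_noisy = x + 0.3 * torch.randn_like(x)             # same image, heavy noise

with torch.no_grad():
    clean_label = model(x).argmax(dim=1).item()
    noisy_label = model(x_noisy).argmax(dim=1).item()

# A human still sees the same object; the network's label often flips.
print(clean_label, noisy_label)
```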

The core problem, explained the authors, is that whatever features the deep network is extracting don’t capture the whole scene or context. Without a robust “understanding” of what it’s looking at, an AI falters even with the slightest perturbation. These networks just don’t seem able to integrate pixels across long ranges to distill the relationships between them—like piecing together different parts of the same object. Our brains process a human face as a whole face, rather than eyes, nose, and other individual components; AI “sees” a face through statistical correlations between pixels.

That’s not to say algorithms don’t show any transfer in their learned skills. Pre-training deep networks to recognize objects on ImageNet, a giant repository of images of our natural world, can be “surprisingly” beneficial for all kinds of other tasks.
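In code, that transfer recipe is refreshingly short. Here is a hedged sketch, assuming a recent version of torchvision; the number of new classes is a placeholder for whatever the downstream task happens to be.

```python
# Sketch of transfer learning from ImageNet-pretrained features.
# Assumes torchvision >= 0.13; NUM_NEW_CLASSES is a placeholder.
import torch.nn as nn
from torchvision import models

NUM_NEW_CLASSES = 10  # e.g., a small custom dataset

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor...
for param in model.parameters():
    param.requires_grad = False

# ...and swap in a fresh classification head for the new task.
model.fc = nn.Linear(model.fc.in_features, NUM_NEW_CLASSES)

# Only the new head's weights are trainable; train it with any standard loop.
```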

Yet this flexibility alone isn’t a silver bullet, the authors argue. Without a slew of correct assumptions to guide the deep neural nets in their learning, “generalization is impossible.”

Constraint Is Key

The solution isn’t making the artificial networks deeper or feeding them more data. Rather, AI and the human brain seem to use vastly different solution strategies when processing an image, and data alone can’t push an AI towards the robustness and generalization prowess of a biological mind.

The key, said the team, is to add bias to deep learning systems, “pushing” them into a learning style that better mimics biological neural networks. To roughly break it down: deep networks are made up of different layers of “neurons” connected to each other by different strengths (or “weights”). The networks come in all shapes and sizes, appropriately called architectures.

Here’s the crux: for each type of architecture, a deep network can have drastically different connection weights among its neurons. Even though two such networks “look” identical, they process input data differently because of their personalized neural connections. Think of them as identical siblings who don’t always react the same way under the same circumstances.
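Here is a toy illustration of those siblings: two networks with exactly the same architecture but different random weights, answering the same input differently. The architecture itself is arbitrary.

```python
# Two "identical siblings": same architecture, different weights.
import torch
import torch.nn as nn

def make_net():
    return nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 2))

torch.manual_seed(0)
sibling_a = make_net()
torch.manual_seed(1)
sibling_b = make_net()

x = torch.randn(1, 8)   # the same input for both networks
print(sibling_a(x))     # same "anatomy", different answers
print(sibling_b(x))
```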

Inductive bias works its magic by picking the right “sibling,” so that it has a higher chance of being able to generalize by the end of training. Bias isn’t a universal magic dust that works for all problems: each problem requires its own specific mathematical formulation.
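One concrete example of a bias baked into an architecture (my illustration, not one from the paper): a convolutional layer assumes that nearby pixels matter most and that the same feature detector should be reused everywhere in the image. That assumption slashes the number of free parameters compared to a layer that makes no assumptions at all.

```python
# Convolution as an inductive bias: locality plus weight sharing means far
# fewer free parameters than a fully connected layer on the same 28x28 input.
import torch.nn as nn

fully_connected = nn.Linear(28 * 28, 28 * 28)               # no spatial assumptions
convolutional = nn.Conv2d(1, 1, kernel_size=3, padding=1)   # local, shared weights

def count_params(module):
    return sum(p.numel() for p in module.parameters())

print(count_params(fully_connected))  # 615,440 parameters
print(count_params(convolutional))    # 10 parameters
```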

The Biased Brain

Confused? I picture inductive bias as some extra math or training examples that amp up or reorganize algorithms—nothing mysterious here. What’s more interesting is the source of its inspiration.

Neuroscience is the perfect influencer of bias in AI, the authors said. For one, our brains continuously reuse the same neural networks for different experiences, which critically relies on the ability to generalize without forgetting what was previously learned. Here, the perceptual and cognitive abilities of biological brains can inspire AI training regimes—such as how to design multiple tasks for a single algorithm—to help “draw out” models with better generalization abilities.

For example, humans tend to process shapes better than textures in an image, whereas some deep learning networks lean the opposite way. Adding training data that biases an AI towards prioritizing shapes—mimicking human cognition—allowed it to perform better on images filled with noise. This is because shapes hold up to noise better than textures do.
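Approaches like this typically retrain networks on images whose textures have been deliberately scrambled or swapped out, so that global shape becomes the only trustworthy cue. As a crude, hypothetical stand-in for that kind of augmentation, a texture-scrambling transform might look something like this.

```python
# Toy texture-scrambling augmentation: blur away fine texture and add noise,
# keeping coarse shape intact. A hypothetical stand-in, not the published method.
import torch
from torchvision import transforms

class ScrambleTexture:
    def __init__(self, blur_kernel=9, noise_std=0.2):
        self.blur = transforms.GaussianBlur(blur_kernel)
        self.noise_std = noise_std

    def __call__(self, img_tensor):
        blurred = self.blur(img_tensor)
        noisy = blurred + self.noise_std * torch.randn_like(blurred)
        return noisy.clamp(0, 1)

train_transform = transforms.Compose([
    transforms.RandomResizedCrop(224),
    transforms.ToTensor(),
    ScrambleTexture(),  # texture is no longer a reliable signal; shape is
])
```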

For another, electrical recordings from biological neural networks provide a window into the inner workings of our minds. We already carry networks that can generalize inside our own heads; brain recordings can start to parse how they work, layer by layer.

If we can constrain an artificial network to match the neural responses of biological ones, the authors said, we can “bias the network” to extract findings from the data in a way that “facilitates brain-like generalization.” Scientists are still trying to figure out which brain processing properties to transfer over to AI to achieve the best results.
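As a hedged sketch of what such a constraint could look like in practice (the exact formulation varies, and everything here, including the weighting, is hypothetical): add a term to the training loss that penalizes the network when the similarity structure of one of its layers drifts away from the similarity structure measured in brain recordings of the same stimuli.

```python
# Hypothetical "neural regularization" loss: task loss plus a penalty for
# mismatching the similarity structure of recorded brain responses.
import torch.nn.functional as F

def similarity_matrix(responses):
    # Pairwise cosine similarity between responses to a batch of stimuli.
    normed = F.normalize(responses, dim=1)
    return normed @ normed.T

def combined_loss(logits, labels, hidden_acts, neural_acts, weight=1.0):
    task_loss = F.cross_entropy(logits, labels)
    model_sim = similarity_matrix(hidden_acts)   # how the model groups stimuli
    brain_sim = similarity_matrix(neural_acts)   # how the brain groups them
    neural_loss = F.mse_loss(model_sim, brain_sim)
    return task_loss + weight * neural_loss
```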

By training algorithms on multiple tasks and using neural recordings to guide their learning, AI will become more brain-like. This doesn’t mean that a network will perform better on a single task compared to classic deep learning regimes; rather, the algorithm will be able to generalize better beyond the statistical patterns of the training examples, the authors said.
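A minimal sketch of the multi-task half of that recipe (layer sizes and tasks are arbitrary): one shared backbone feeds several task-specific heads, so its features can’t overfit to the quirks of any single task.

```python
# Toy multi-task network: one shared backbone, two task-specific heads.
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, num_classes_a=10, num_classes_b=5):
        super().__init__()
        self.backbone = nn.Sequential(nn.Linear(128, 64), nn.ReLU())
        self.head_a = nn.Linear(64, num_classes_a)  # e.g., object category
        self.head_b = nn.Linear(64, num_classes_b)  # e.g., viewpoint

    def forward(self, x):
        features = self.backbone(x)
        return self.head_a(features), self.head_b(features)

# Training sums one loss per head (loss_a + loss_b) and backpropagates
# through the shared backbone.
```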

A More “Natural” AI

Inductive bias is just the latest example of how neuroscience is inspiring next-generation AI. The authors also touched on potential benefits of copying physical structure from brains to algorithms, but argued that functionality is likely more important.

The brain still has a lot more to offer. For example, even a tiny chunk of a mouse’s brain has hundreds of different types of neurons with different properties, whereas AI networks generally incorporate two or three. The brain’s cellular zoo also includes non-neuronal cells that contribute to neural processing, and its connections are far denser and wired in more sophisticated patterns than those of AI networks. Unlike biological brains, most deep networks don’t have connections that massively link neurons within the same layer, or recurrent connections that allow feedback signals to jump back several layers.

Tolias is hoping to use neuro-inspired components, such as inductive bias, to build AI that can generalize learnings from a single example. As a member of IARPA’s ambitious MICrONS (Machine Intelligence from Cortical Networks) project, he’s helping map all the neural activity inside a one-millimeter cube of a mouse’s cortex, in hopes of extracting more insights to guide the development of intelligent machines.

“Careful analysis from computational neuroscience and machine learning should continually expose the differences between biological and AI through new benchmarks, allowing us to refine the models… [the two fields] together can help build the next generation of AI,” the authors concluded.

Image Credit: Shutterstock.com

Shelly Fan (https://neurofantastic.com/)
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.