Scientists Discovered ‘Mini-Computers’ in Human Neurons—and That’s Great News for AI

With just their input cables, human neurons can perform difficult logic calculations previously only seen in entire neural networks. In other words, human neurons are far more powerful computing devices than originally thought. And if deep learning algorithms—the AI method loosely based on the brain that’s taken the world by storm—take note, they could be too.

Those are unconventional, fighting words.

For 70 years, the neuron was considered the basic computational unit of the brain. Yet according to a new study published this month in Science, the neurons in our cortex, the outermost “crust” of our brain, seem to have uniquely evolved to sustain incredibly complex computations in their input cables. It’s as if someone finally obtained proof that your computer’s electrical wiring is actually made up of mini-processors, each performing calculations before sending results to a CPU.

It’s weird. It’s controversial. But it has also just been seen for the first time in human neurons.

As the authors conclude: we long assumed that a single neuron could only perform simple logical functions such as AND and OR, whereas more complex computations required entire networks. It now appears that activity in a neuron’s input cables can support complex logical operations using completely different rules than the neuron as a whole.

So why should we care? Fundamentally, it has to do with intelligence—why we stand out among the animal kingdom, and how we can potentially replicate that intelligence with AI.

Like the Earth’s crust, the cortex is also made up of multiple layers, with distinctive wiring patterns that link up neurons within layers and among different ones. Neuroscientists have long thought that our enormously intricate cortex contributes to our intellectual capabilities—in fact, deep learning’s multi-layered networks were originally inspired by the layered architecture of the cortex.

But the new results, recorded from surgically removed chunks of brain tissue from patients with brain tumors and epilepsy, suggest that current deep learning methods are only scratching the surface of replicating our brain’s computations. If AI systems can incorporate these newly discovered algorithms, they could potentially become far more powerful.

Meet the All-or-None Neuron

A textbook neuron looks like a leafless tree: massive roots, called dendrites, lead to a sturdy, bulbous base—the body. Like water and nutrients, incoming electrical signals shoot up the dendritic roots into the body, where a hump-like structure integrates all the information. If the stimulation is sufficiently strong, the signal gets passed down a single tree trunk—the output cable called an axon—and is then transmitted to another neuron via bubbles filled with chemical messengers, or through direct electrical contacts. If the input signals are too weak, the neuron kills the data. It’s why neuroscientists often call single neurons “binary” or “digital”: they either fire or they don’t.
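This all-or-none behavior boils down to a threshold test, and it’s easy to sketch in code. Here’s a minimal, hypothetical model in Python (the function name and numbers are illustrative, not from the study):

```python
def neuron_fires(inputs, weights, threshold=1.0):
    """Textbook 'binary' neuron: sum the weighted dendritic inputs at
    the cell body; fire (1) if the total clears the threshold,
    stay silent (0) otherwise."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# One weak input isn't enough; two together push the neuron past threshold.
print(neuron_fires([1, 0], [0.6, 0.6]))  # 0 -> stays silent
print(neuron_fires([1, 1], [0.6, 0.6]))  # 1 -> fires
```

Notice that the output only ever grows, or stays flat, as the input gets stronger; that monotonic, all-or-none picture is exactly what the new findings complicate.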

Simple, no?

Well…not quite. For decades, a question nagged at the back of neuroscientists’ minds: why are dendritic trees, compared to a single lonely axon, so much more intricate?

By recording from single neurons in rodent brains, scientists recently began figuring out that dendritic trees aren’t just simple passive cables. Rather, they’re extremely active components underlying a hidden layer of neural computation. Some dendritic trees, for example, can generate electrical spikes five times larger and more frequent than classic neuronal firing. In rats alone, the discovery of active dendrites means the brain could have 100 times more processing capacity than previously thought.

The new study asks: does the same hold true for humans?

Human Dendrites Are Special

Compared to rodent brains, the multi-layered human cortex is much thicker and denser. Layers 2 and 3 (L2/3) especially stand out for their elaborate and densely packed dendritic forests. Compared to other species—or even the rest of the human brain—these layers contain a disproportionate amount of neuronal matter. The root of this strange thickening lies in our genes, which encode a developmental program that guides the trait. Some even believe that it’s fundamental to what makes us human.

If dendrite “inputs” help shape our neurons’ computation—and our intelligence—then L2/3 is where we should be able to observe them, the authors reasoned.

Measuring electrical activity from dendrites, each 100 times smaller than the diameter of a human hair, is much easier said than done. It’s partly why these enormously powerful calculations have been hard to capture using electrodes even in animals—the process is similar to gently sucking on an ant’s back with a Roman column-sized straw without hurting the ant.

Rather than recording from a living, intact human brain, the team opted to look at fresh slices of the cortex removed due to epilepsy or tumors. It’s a smart strategy: slices are much easier to examine using traditional neuroscience methods—for example, something called a “patch clamp” that records directly from neuronal components. Slices can also be examined under the microscope using fluorescent dyes that glow during activity. Using brain tissue from two different types of patients can then help weed out signals unique to each brain disease to get to the root of human dendritic computations.

A bizarre signal immediately emerged. Human dendrites sparked with activity, but the electrical spikes quickly dissipated as they traveled towards the cell body. In contrast, a standard neural signal doesn’t taper off as it gallops along the output cable towards its next destination. Even weirder, the dendritic signals relied strictly on calcium ions to generate their electricity, a massive departure from classic neural signaling, which runs mainly on sodium ions.

It’s like suddenly discovering a new species that consumes carbon dioxide, rather than oxygen, to sustain its activity—except that species is part of you. These signals, dubbed dendritic calcium action potentials, or “dCaAPs,” had never before been observed in the cortical cells of any mammal, the authors said.

“There was a ‘eureka’ moment when we saw the dendritic action potentials for the first time,” said study co-author Dr. Matthew Larkum at Humboldt University of Berlin. “The experiments were very challenging, so to push the questions past just repeating what has been done in rodents already was very satisfying.”

But it gets weirder. Unlike a neuron’s all-or-none firing, human dendrites seem to go analog. That is, their response is “graded,” but in an unintuitive way: the stronger the stimulus, the lower their response. This is in stark contrast to other neuronal computations, where stronger input, even from multiple sources, usually leads to stronger output. And while these dendritic spikes aren’t loners per se—a few dCaAPs helped change the firing of their host neuron—much of the dendrites’ electrical activity seemed to do its own thing.
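One way to picture this upside-down, graded response is as an activation function that stays silent below threshold, peaks right at threshold, and then decays as the input keeps growing. A minimal sketch, assuming a simple exponential decay (the real curve measured in human dendrites is more complex):

```python
import math

def dcaap_like(stimulus, threshold=1.0, width=0.5):
    """Illustrative dCaAP-style response: nothing below threshold,
    maximal right at threshold, then fading as the stimulus grows.
    The shape is a stand-in, not the measured curve."""
    if stimulus < threshold:
        return 0.0
    return math.exp(-(stimulus - threshold) / width)

for s in [0.5, 1.0, 1.5, 2.0, 3.0]:
    print(f"stimulus {s:.1f} -> response {dcaap_like(s):.2f}")
# The response peaks at threshold (1.0) and shrinks as input grows stronger.
```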

Forest in the Trees

Cataloging the secret lives of human dendrites is already interesting, but the authors went a step further to ask what it all means.

Using computational modeling, they recreated dCaAPs’ unique firing pattern and challenged it to solve a logic function called XOR (exclusive OR). XOR compares two inputs: if the bits are the same, the result is 0; if they’re different, the result is 1. Unlike the simpler AND and OR functions, XOR normally requires an entire neural network to perform.

However, human dendrites’ strange behavior, where the response is maximal for a single active input and shrinks again as input grows stronger, allowed them to “effectively compute the XOR operation,” the authors said. When stacked together with a neuron’s normal AND and OR functions, it’s then possible to condense entire network functions into that of a single neuron. However, for now the idea remains theoretical—the authors weren’t able to model an entire neuron along with its dendritic computations.
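To see why a peaked response cracks XOR, note that for two binary inputs the sum is 0, 1, or 2; a response that peaks at a sum of 1 and collapses at 2 singles out exactly the “inputs differ” case. Here’s a rough, hypothetical illustration in the same spirit (not the authors’ actual model):

```python
import math

def dcaap_like(total, threshold=1.0, width=0.5):
    """Peaked, non-monotonic response: silent below threshold,
    maximal at threshold, fading for stronger input (illustrative shape)."""
    if total < threshold:
        return 0.0
    return math.exp(-(total - threshold) / width)

def xor_unit(a, b):
    """A single 'dendrite-like' unit: sum two binary inputs, apply the
    peaked response, and threshold it. A sum of 1 hits the peak; a sum
    of 2 overshoots and the response collapses back toward zero."""
    return 1 if dcaap_like(a + b) > 0.5 else 0

for a in (0, 1):
    for b in (0, 1):
        print(f"XOR({a}, {b}) = {xor_unit(a, b)}")
# Prints 0, 1, 1, 0: the XOR truth table from a single unit.
```

A conventional threshold unit can’t pull this off, because its output never decreases as input grows; the non-monotonic dip is what does the work.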

But keep your eye out for updates. The results, if validated in intact human brains, hold enormous possibilities for improving deep learning algorithms. For now, deep learning uses individual artificial “neurons” that link into multi-layered networks—similar to our previous understanding of the human brain. Adding dendritic computations could, in theory, massively expand deep learning’s capabilities. In a way, AI is now neuroscience’s theoretical playground, a match made in heaven.

Regardless, the results peel back another layer of the onion in the quest to understand and replicate our intelligence. “Dendrites make up 95 percent of the surface area of pyramidal cells in the cortex, but have remained ‘unexplored territory’ in the human brain,” said Dr. Michael Häusser at University College London, who was not involved in the study. By hunting for similar signals in rodent brains, we may be able to determine whether “the special electrical properties of human dendrites play a key role in making human brains special,” he said.

Image Credit: Gerd Altmann from Pixabay

Shelly Fan
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.