Moore’s Law Is Dying. This Brain-Inspired Analogue Chip Is a Glimpse of What’s Next

“Dark silicon” sounds like a magical artifact out of a fantasy novel. In reality, it’s one branch of a three-headed beast that foretells the end of advances in computation.

OK, that might be too dramatic. But the looming problems with silicon-based computer chips are very real. Although computational power has exploded exponentially in the past five decades, we’ve begun hitting intractable limits to further growth, both physical and economic.

Moore’s Law is dying. And chipmakers around the globe are asking, now what?

One idea is to bet on quantum computers, which tap into the ultra-weird world of quantum mechanics. Rather than flipping between the binary states of 0 and 1, a qubit can represent both states simultaneously, each with its own probability, packing far more information into a single computational unit.

Another idea is to look inside our heads: the quantum realm isn’t the only way to get past binary computation. Our brains also operate on probabilities, making them a tangible source of inspiration to overhaul the entire computational world.

This week, a team from Pennsylvania State University described a device, built from atomically thin 2D materials, that operates like the brain’s synapses. Rather than processing a hard yes or no, the “Gaussian synapse” thrives on probabilities. Like the brain, the analogue chip is far more energy-efficient and produces less heat than current silicon chips, making it a promising candidate for scaled-up systems.

In a proof-of-concept test, the team used a simulated chip to analyze EEG (electroencephalography) signals taken from either wakeful or sleeping people. Without extensive training, the chip was able to determine if the subject was sleeping.

“Combined, these new developments can facilitate exascale computing and ultimately benefit scientific discovery, national security, energy security, economic security, infrastructure development, and advanced healthcare programs,” the team concluded.

The Three-Headed Beast

With new iPhones every year and increasingly sophisticated processors, it certainly doesn’t feel like we’re pushing the limits of silicon-based computing. But according to lead study author Dr. Saptarshi Das, traditional computation is running out of room to scale on three fronts: energy, size, and complexity.

Energy scaling, explained Das, kept a chip’s power budget practically constant even as performance grew. But it came to an end around 2005 because of hard limits in silicon’s thermodynamic properties, something scientists dub the Boltzmann tyranny (gotta love these names!). Size scaling, which packs ever more transistors onto the same chip area, soon followed suit, ending in 2017 because quantum mechanics imposes limits at the material level of traditional chips.

The third, complexity scaling, is still hanging on, but it’s on the decline. Fundamentally, explained the team, this is because of the traditional von Neumann architecture most modern computers use, which relies on digital, binary computation. In addition, current computers store logic and memory in separate units and have to shuttle data between them sequentially, which adds delay and energy consumption. As more transistors are jam-packed onto the same chip and multiple cores are linked into processors, the energy and cooling requirements will eventually hit a wall.

This is the Dark Silicon era. Because chips give off so much heat, a large number of transistors on a single chip can’t be powered up at once without causing thermal damage. This forces a portion of a chip’s computing components to be kept powered off, or “dark,” at any instant, which severely limits computational power. Tinkering with variables such as how transistors are linked up may improve efficiency, but ultimately it’s a band-aid, not a cure.

In contrast, the brain deploys “billions of information processing units, neurons, which are connected via trillions of synapses in order to accomplish massively parallel, synchronous, coherent, and concurrent computation,” the team said. That’s our roadmap ahead.

Saved by the Bell

Although there are plenty of neuromorphic chips—devices that mimic the structure or functionality of neurons and synapses—the team took a slightly different approach. They focused on recreating a type of artificial neural network called a probabilistic neural network (PNN) in hardware form.

PNNs build on statistical methods dating to the 1960s and have long been run as software, often for classification problems. The mathematical heart of a PNN differs from that of most deep learning models used today, but the structure is relatively similar. A PNN generally has four layers, and raw data travels from the first layer to the last. The two middle layers, pattern and summation, process the data in a way that allows the last layer to take a vote, selecting the most probable answer from the candidates.
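
To make that concrete, here’s a minimal sketch of a textbook PNN classifier in Python. The function name, parameters, and toy data are illustrative assumptions, not the paper’s code; only the four-layer structure (input, Gaussian pattern units, per-class summation, winner-take-all output) follows the standard design.

```python
import numpy as np

def pnn_classify(x, train_X, train_y, sigma=0.5):
    """Textbook probabilistic neural network, as a single function.

    Pattern layer: one Gaussian kernel per stored training example.
    Summation layer: average the kernel activations within each class.
    Output layer: vote for the class with the highest average score.
    """
    scores = {}
    for label in np.unique(train_y):
        members = train_X[train_y == label]
        sq_dist = np.sum((members - x) ** 2, axis=1)  # distance to each stored pattern
        scores[label] = np.mean(np.exp(-sq_dist / (2 * sigma ** 2)))
    return max(scores, key=scores.get)

# Toy usage: two clusters of 2D points, then classify a new sample.
train_X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(pnn_classify(np.array([0.2, 0.1]), train_X, train_y))  # -> 0
```

Notice there’s no iterative training loop: “learning” is just storing the examples, which is part of why PNNs need far less training than conventional deep networks.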

To implement PNNs directly in hardware, the team engineered a Gaussian synapse out of two different materials: MoS2 and black phosphorus. Each material forms a transistor, and the two are wired in series to make a single synapse. The way the two transistors “talk” to each other isn’t linear: as the input voltage sweeps, the current rises exponentially, reaches a peak, then drops back down. Plotted out, the connection strength traces a bell-shaped curve, or in mathematical lingo, a Gaussian, the distribution at the heart of probability theory (and the device’s namesake).
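
For a rough intuition, here’s a toy numerical model (the sigmoids and their parameters are stand-ins, not the paper’s measured device physics): one branch turns on as the voltage rises while the other turns off, so the pair conducts strongly only in a middle window, producing the bell shape.

```python
import numpy as np

# Toy model of the two-transistor Gaussian synapse. These curves are
# illustrative, not the real MoS2/black-phosphorus characteristics;
# they just show how a rising branch and a falling branch combine
# into a bell-shaped transfer curve.
v = np.linspace(-4, 4, 81)                 # shared input-voltage sweep
i_rising = 1 / (1 + np.exp(-2 * (v + 1)))  # n-type-like branch turns ON
i_falling = 1 / (1 + np.exp(2 * (v - 1)))  # p-type-like branch turns OFF
i_out = i_rising * i_falling               # conducts only in the middle
print(round(v[np.argmax(i_out)], 2))       # peak sits near v = 0
```

Shifting or reshaping those two branches moves and widens the bell, which is exactly the tunability the next paragraph describes.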

The point at which each component turns on or off can be tweaked, which reshapes the bell curve and controls how the transistors communicate. This lets the hardware directly mimic the inner workings of PNNs, said study author Amritanand Sebastian.

Decoding Biology With Artificial Synapses

As a proof of concept, the team decided to give back to neuroscience. The brain generates electrical waves that can be picked up by electrodes on top of the scalp. Brain waves are terribly complicated data to process, said the team, and artificial neural networks running on traditional computers generally have a hard time sorting through them.

The team fed their Gaussian synapse EEG recordings spanning 10 full nights from 10 subjects, with 32 channels per individual. The PNN rapidly recognized different brainwave components and was especially good at picking out the frequencies commonly seen in sleep.
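
For a sense of what that sorting involves in conventional software, here’s a minimal sketch (assuming a single EEG channel sampled at 256 Hz and SciPy on hand; the sampling rate and function name are illustrative) that measures how much of a signal’s power sits in the delta band, the slow 0.5-4 Hz waves that dominate deep sleep.

```python
import numpy as np
from scipy.signal import welch

def delta_band_fraction(eeg, fs=256.0):
    """Fraction of EEG power in the 0.5-4 Hz delta band,
    the slow waves prominent during deep sleep."""
    freqs, power = welch(eeg, fs=fs, nperseg=int(4 * fs))
    delta = (freqs >= 0.5) & (freqs <= 4.0)
    return power[delta].sum() / power.sum()

# Toy usage: a noisy 2 Hz "sleep-like" oscillation scores high.
t = np.arange(0, 30, 1 / 256.0)
signal = np.sin(2 * np.pi * 2 * t) + 0.1 * np.random.randn(t.size)
print(delta_band_fraction(signal))  # close to 1.0
```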

“We don’t need as extensive a training period or base of information for a probabilistic neural network as we need for an artificial neural network,” said Das.

Thanks to quirks in the transistors’ materials, the chip had some enviable properties. For one, it was exceedingly low-power: analyzing a full 8 hours of EEG data consumed at most 350 microwatts. To put this into perspective, the human brain runs on roughly 20 watts, tens of thousands of times more. This means the Gaussian synapse “facilitates energy scaling,” explained Sebastian.

For another, the materials can be shrunk down without losing their inherent electrical properties, enabling size scaling. Finally, the use of PNNs tackles the complexity scaling problem, because they can carve out non-linear decision boundaries using far fewer components than traditional artificial neural networks.

That doesn’t mean we’ve slain the three-headed beast, at least not yet. But looking ahead, the team believes their results could inspire more ultra-low-power devices to tackle the future of computation.

“Our experimental demonstration of Gaussian synapses uses only two transistors, which significantly improves the area and energy efficiency at the device level and provides cascading benefits at the circuit, architecture, and system levels. This will stimulate the much-needed interest in the hardware implementation of PNNs for a wide range of pattern classification problems,” the authors concluded.

Image Credit: Photo by Umberto on Unsplash

Shelly Fan (https://neurofantastic.com/)
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.