New Brain-Like Chip Uses Light to Go Blazingly Fast

Deep learning is having a serious moment right now in the world of AI.

And for good reason. Loosely based on the brain’s computing architecture, artificial neural networks have vastly outperformed their predecessors in a variety of tasks that had previously stumped our silicon-minded comrades.

But as these algorithms continue to break new ground in machine intelligence, we’re coming to an uncomfortable realization: transistor-based computers have hard limits, and we’re approaching them rapidly.

Now, thanks to a new system developed by Princeton engineers, we may have one way to smash the speed barrier of our current processors: neuromorphic computing running on photons, not electrons, with silicon chips that work at the speed of light.

Published this week on arXiv, the new photonic neural network is so blazingly fast that when pitted against a conventional CPU in solving differential equations, it performed roughly 2,000 times faster.

And that’s just the beginning. According to one estimate, switching to all-light computing could eventually make the process millions of times faster.

“Photonic neural networks leveraging silicon photonic platforms could access new regimes of ultrafast information processing for radio, control, and scientific computing,” wrote the authors in their paper.

There’s a lot here to unpack, so let’s start with how to make computers think like the brain.

Thinking in Silicon

Loosely based on the brain’s computational architecture, neuromorphic, or “brain-like” chips process data using electronic elements that mimic the brain’s neurons and synapses.

In essence, they’re hardware-based artificial neural networks: when exposed to data, these chips learn by adjusting the strength of the connections, or “weights”, between neurons, much like their biological counterparts.

These babies are extremely powerful. Software versions of deep neural nets running on conventional CPUs are already the workhorses driving most AI applications in major tech companies. Automatic facial recognition, language translation, multi-step reasoning and planning — skills once showcased only by humans — have all been made possible by these brain-inspired artificial circuits.

The problem is, conventional computers aren’t suited for the type of low-power, massively parallel computations that the brain excels at. To really get computers that think like the brain, we need to overhaul our current computing architectures.

Here’s where neuromorphic chips come in. In essence, these chips are artificial neural networks in physical form: rather than running AI algorithms in software, they implement the network directly in hardware.

The chips usually pack multiple neuromorphic computing cores. Like biological neurons, each core takes in inputs from multiple sources and integrates the information. When the sum of the inputs reaches a certain threshold, the core “spikes” — that is, it produces an output signal.
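To make the idea concrete, here is a minimal software sketch of that integrate-and-fire behavior. The inputs, weights and threshold are made up for illustration; real neuromorphic chips do this in dedicated circuitry, not Python.

```python
# Minimal integrate-and-fire sketch of one neuromorphic core.
# Inputs, weights, and the threshold are illustrative, not taken from any real chip.

def core_output(inputs, weights, threshold=1.0):
    """Integrate weighted inputs and 'spike' (output 1.0) once past the threshold."""
    total = sum(w * x for w, x in zip(weights, inputs))
    return 1.0 if total >= threshold else 0.0

# The same core, driven by strong vs. weak input from three sources.
print(core_output(inputs=[0.4, 0.9, 0.6], weights=[0.5, 0.8, 0.3]))  # 1.0 -> spikes
print(core_output(inputs=[0.1, 0.2, 0.1], weights=[0.5, 0.8, 0.3]))  # 0.0 -> stays silent
```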

Why is this important? Unlike today’s computers, which have separate memory and processing units, the cores on these brain-like chips tightly integrate the two. This means the chips don’t need to waste energy and bandwidth shuttling data to and from memory, dramatically reducing power consumption.

The cores also form a network and operate in parallel, unlike standard CPUs, which process information sequentially. This “spider web” organization makes the chips a lot faster.

All that said, these chips are still constrained by the very thing that powers them — electricity.

Because current neuromorphic chips run on electrons, they’re bound by electronic clock rates and ohmic loss — the unavoidable loss of power as heat as electrons course through wires.

It’s hard to beat the natural laws set forth by our physical world. So why not embrace them? In their new silicon-based system, the Princeton researchers turned to literally the fastest medium we can use to transmit information — light.

Optical Computing

Using light to transmit information may sound exotic, but we’re actually surrounded by these systems: just think of the optical fibers that carry internet, phone and cable TV signals. Compared to electrons, photons offer significantly more bandwidth, carry data more quickly and are less prone to electromagnetic interference, or noise. Oh, and they also move a lot faster.

Perhaps unsurprisingly, IBM was instrumental in designing the first light-bearing silicon chips. About 10 years ago, the company carved silicon chips with tiny tunnels called waveguides to usher photons along a set path. Each waveguide has a transmission bandwidth of over 1 terahertz — for reference, the coaxial cables used for cable television top out at around 500 megahertz.

The Princeton team started with such a nanophotonic chip, and configured it to resemble an artificial neural network made up of multiple neuron nodes.

The input to each node is a mixture of different wavelengths of light. For each neuron, the combination of wavelengths is slightly different (for example, blue, green and red versus some other mix of colors).

The node detects the total intensity of the light it receives (blue + green + red), and if the power reaches a certain threshold, the node modulates the output intensity of a laser linked to it. Crucially, the laser only emits a single wavelength of light (for example, red).

In this way, each node acts like a neuron: it receives its own selection of input signals, and only fires when the combined input reaches a threshold. The output of each neuron is essentially color-coded by its wavelength (in our example, red) and sent into the network.

These outputs are then combined into a single waveguide through a process called wavelength division multiplexing. This mix of outputs is the final result of that iteration.
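Here is a rough, software-only sketch of that scheme. The wavelength labels, filter weights and thresholds are invented for illustration, and the on-off output is a simplification of what is really an analog modulation of each node’s laser.

```python
# Toy model of one pass through a network of wavelength-coded photonic neurons.
# Wavelength labels, filter weights, and thresholds are invented; the real chip
# uses spectral filters, photodetectors, and lasers rather than Python dicts.

def photonic_node(wdm_signal, filter_weights, own_wavelength, threshold=1.0):
    """Weight each incoming wavelength, sum the detected power, and emit on
    this node's own wavelength if the total crosses the threshold."""
    detected_power = sum(filter_weights.get(wl, 0.0) * power
                         for wl, power in wdm_signal.items())
    output_power = 1.0 if detected_power >= threshold else 0.0
    return own_wavelength, output_power

# A multiplexed input: power carried at each wavelength (arbitrary units).
wdm_input = {"red": 0.8, "green": 0.5, "blue": 0.3}

# Two nodes with different spectral filters, each assigned its own output color.
nodes = [
    ({"red": 0.9, "green": 0.5, "blue": 0.2}, "red"),
    ({"red": 0.1, "green": 0.2, "blue": 0.9}, "blue"),
]

# Wavelength division multiplexing: every node's output lands on its own channel
# in the shared output waveguide.
output_waveguide = dict(photonic_node(wdm_input, w, wl) for w, wl in nodes)
print(output_waveguide)  # {'red': 1.0, 'blue': 0.0}
```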

So how does the network learn?

It has to do with feedback. For every iteration, the output waveguide is routed back to the individual nodes. Each node is armed with a spectral filter that modulates the wavelengths and strength of the light it receives — this is why the input to each node is slightly different. The filtering process “tunes” the neuron towards a more accurate result, much like tuning a guitar. When trained with millions of examples, the optical network inches closer and closer to the correct output.
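As a rough illustration of that tuning loop, the sketch below nudges one node’s spectral-filter weights toward a target output using a simple delta-rule update. The update rule, learning rate and numbers are stand-ins; the paper’s actual training procedure isn’t reproduced here.

```python
# A sketch of the feedback "tuning" idea: nudge a node's spectral-filter weights
# so its detected power moves toward a target. The delta-rule update below is a
# stand-in for whatever training procedure the real system uses.

def tune_filter(weights, wdm_signal, target, learning_rate=0.05):
    """One feedback iteration: compare detected power to the target and adjust
    each wavelength's weight in proportion to its input power."""
    detected = sum(weights[wl] * p for wl, p in wdm_signal.items())
    error = target - detected
    return {wl: weights[wl] + learning_rate * error * p
            for wl, p in wdm_signal.items()}

weights = {"red": 0.2, "green": 0.2, "blue": 0.2}
signal = {"red": 0.8, "green": 0.5, "blue": 0.3}
for _ in range(200):                  # many iterations, like many training examples
    weights = tune_filter(weights, signal, target=1.0)
print(weights)  # the filter has drifted so that detected power sits near 1.0
```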

In other words, the chip learns in the same way that a neuromorphic neural network does. In fact, the authors pointed out that the silicon nanophotonic chip behaves mathematically much like a class of algorithms called continuous-time recurrent neural networks (CTRNNs), which are currently used in evolutionary robotics and vision research.

“This result suggests that programming tools for CTRNNs could be applied to larger silicon photonic neural networks,” the authors wrote.
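For the mathematically curious, a CTRNN is just a set of coupled differential equations: each neuron’s state decays toward a weighted, squashed sum of the others plus an external input. Here is a minimal Euler-integration sketch with made-up weights and inputs, not the network from the paper.

```python
import math

# A minimal continuous-time recurrent neural network (CTRNN), the model class the
# chip is compared to. Weights, time constants, and inputs are arbitrary; the point
# is the dynamics: tau_i * dx_i/dt = -x_i + sum_j w_ij * sigma(x_j) + I_i.

def sigma(x):
    return 1.0 / (1.0 + math.exp(-x))          # logistic activation

def ctrnn_step(state, weights, inputs, tau, dt=0.01):
    """Advance every neuron's state by one Euler step of the CTRNN equation."""
    new_state = []
    for i, x in enumerate(state):
        recurrent = sum(weights[i][j] * sigma(xj) for j, xj in enumerate(state))
        dxdt = (-x + recurrent + inputs[i]) / tau[i]
        new_state.append(x + dt * dxdt)
    return new_state

# Two mutually coupled neurons, with a constant drive on the first one.
state, tau = [0.0, 0.0], [1.0, 0.5]
weights = [[0.0, 1.5], [-1.5, 0.0]]
for _ in range(1000):                           # simulate 10 time units
    state = ctrnn_step(state, weights, inputs=[0.5, 0.0], tau=tau)
print(state)
```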

Edging Closer

As a proof of concept to demonstrate the speed of their photonic chip, the authors hooked up 49 optical neurons and pitted them against a conventional computer in solving differential equations.

It’s a toy problem, but it really showcased just how fast the optical chip can run. “The effective hardware acceleration factor of the photonic neural network is therefore estimated to be 1,960 × in this task,” concluded the authors.
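For context, the CPU side of a benchmark like this boils down to ordinary step-by-step numerical integration. The toy solver below, applied to a simple exponential-decay equation chosen purely for illustration, shows the kind of serial workload the photonic network was compared against; it is not the equation or solver used in the paper.

```python
# The CPU side of such a benchmark is ordinary step-by-step numerical integration.
# The equation (simple exponential decay) and the fixed-step Euler solver below are
# chosen purely for illustration, not taken from the paper.

def euler_solve(f, y0, t0, t1, dt=1e-4):
    """Integrate dy/dt = f(t, y) from t0 to t1 with a fixed-step Euler method."""
    t, y = t0, y0
    while t < t1:
        y += dt * f(t, y)
        t += dt
    return y

# dy/dt = -2y with y(0) = 1 has the exact solution y(1) = exp(-2), about 0.135.
print(euler_solve(lambda t, y: -2.0 * y, y0=1.0, t0=0.0, t1=1.0))
```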

These results are no doubt promising. But before optical chips enter the mainstream, they still need to be assessed for scalability, cost and suitability for more general-purpose computing.

Even so, we may be witnessing the dawn of a new generation of computing devices completely unlike any machine that we’ve ever encountered before.

These are just the “first forays into a broader class of silicon photonic systems,” the authors say.


Image Credit: Shutterstock

Shelly Fan
https://neurofantastic.com/
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.