Last week, Elon Musk’s mysterious Neuralink finally revealed its master plan after two years of silence: to build high-bandwidth, immune-resistant, thread-like brain-machine interfaces that can be robotically implanted into the brain.

In theory, the implant could allow computers to replace faulty circuits or augment healthy ones. An even more ambitious future, if technologically possible, is to give the brain direct access to the massive database that is the internet—potentially “downloading” information, or even experiences, into neural circuits with tailored and targeted electrical zaps.

Sound fanciful? Not to burst your bubble, but yes, it is. For now, those ideas venture dangerously into sci-fi territory. Yet DARPA has been experimenting with offloading human memories onto silicon chips and delivering them back into the brain to boost function after injury.

Even wilder, scientists have already written artificial memories, emotions, and sensations into mouse brains—in some cases, without the animal ever actually seeing, smelling, or sensing the things it’s “experiencing.”

To be very clear: the goal of artificial memory research isn’t to tamper with anyone’s mind. Rather, it’s an enormously powerful way to understand the neuronal ensembles and circuits that underlie basic brain computations and behaviors.

Here’s the current inception playbook. One strategy works at the neural circuit level; the other goes directly for the brain’s basic computational unit—individual neurons. Neither is perfect, but they do illustrate what’s already scientifically possible.

Shining Light on Fear and Love

Our loves and hates are fundamental to our personalities. Thanks to decades of research deciphering those neural circuits, it’s now possible to artificially program those drives into the mouse brain.

The first attempt was in 2013, from Nobel Prize winner Dr. Susumu Tonegawa’s lab at MIT. Using optogenetics, a revolutionary technology that lets scientists control genetically engineered neurons in mice with light, the team copy-pasted a fear memory from one situation to another. Here, the mice were briefly shocked with an electrical zap in one room, which allowed the tech to “label” neurons involved in that fear response with a light-sensitive tag. With implanted optical fibers, the team then artificially activated the labeled neurons while the mice were in a comfortable space. This “copied” the sense of fear over to the neutral space, causing them to freeze in terror—even though they had no real-life reason to be afraid.

In April 2019, a team from Toronto took a step further, encoding an emotional response to a neutral smell from scratch. Smell has a relatively simple neural circuit: generally, one type of scented molecule will trigger a dedicated set of receptors in your nose. The receptors translate chemical signals into electrical ones, and send those pulses to another dedicated group of “smell”-processing cells in the brain. Thanks to this one-to-one relationship, the team quickly figured out which cells were responsible for carrying information encoding acetophenone, a perfumery molecule that apparently smells delightful.

Using transgenic mice with light-sensitive neurons, the team was able to artificially activate acetophenone-coding neurons. (Did the mice “smell” anything? We don’t know.) Here’s the crazy part: at the same time, they also used light to activate one of two motivational pathways. One set, the “feel-good” fibers, carries a sense of pleasure from food, sex, alcohol—anything people find delightful. The other set, the “comedown” fibers, carries feelings of discomfort and aversion.

In this way, the team basically stitched two neural circuits together to program a memory directly into the brain. Pairing acetophenone-smelling circuits with the feel-good ones drove the mice to eagerly pursue that scent. When paired with the comedown circuit, however, they immediately kept their distance. Remember: the mice had never even smelled acetophenone before—their emotional response was artificially written into the brain.

Lighting Up Visual Hallucinations

The Toronto study showed it’s possible to stitch neural circuits together with light to form new memories. Some scientists want even finer-grained control: rather than targeting neural networks, why not go straight for individual neurons?

Dr. Karl Deisseroth at Stanford University, one of the original creators of optogenetics, has been steadily improving a toolbox of methods that control groups of single neurons with light. Earlier this year, his team used that toolbox to tease apart intertwined neurons controlling social interaction or eating. With light pulses that precisely zap groups of individual neurons, the team was able to drive an animal’s preference for either behavior.

It was the first time that scientists could control behavior by targeting a collection of neurons individually, rather than entire circuits. That said, the study didn’t really write memories into cells; it was more like eavesdropping on neural conversations and hijacking them with light.

Last week, Deisseroth went a step further. Using an improved light-sensing protein and a hologram-based technique to target large groups of single cells, the team asked if it was possible to make a mouse hallucinate—getting us a little closer to “writing” a fake experience.

They first showed mice images of either horizontal or vertical bars while monitoring their neural activity, and trained the mice to lick a water spout only when they saw one particular orientation. This allowed the team to figure out what angle a particular neuron likes to respond to—what it’s “tuned” to.

They then tried to recreate those neural responses using optogenetics. They showed mice increasingly faint versions of the images until the animals could no longer distinguish between them. By zapping either set of visual neurons—horizontally- or vertically-tuned—with hologram-guided light pulses, they were able to improve the mice’s perception.

Then, in a fully dark room, with no visual input whatsoever, the team recreated the mice’s perception of either image. Activating just 20 cells was enough to make the mice correctly lick the spout for the image they had been trained on.

“Not only is the animal doing the same thing [under artificial activation], but the brain is, too,” said Deisseroth. “So we know we’re either recreating the natural perception or creating something a whole lot like it.”

The team took advantage of the brain’s penchant for recruiting neurons that function similarly to the stimulated ones, automatically linking them up into neural networks. This means that to program an artificial sensation, scientists don’t have to precisely stimulate thousands of individual neurons—just a handful is enough, and the brain will do the rest.

However, the authors were quick to clarify that they didn’t write a visual perception purely from scratch. The mice needed some training for the brain to functionally link up light-activated neurons with those actually responsible for seeing the images. Creating full-on visual hallucinations, they said, will likely also involve recreating the cascade of neural activity that eventually leads the mice to perceive an image as horizontal or vertical.

Nevertheless, the study is pretty technically amazing. “For the first time, we’ve been able to … control multiple individually specified cells at once, and make an animal perceive something specific that in fact is not really there, and behave accordingly,” said Deisseroth.

Inception in Humans

So far, memory and sensory tinkering has only been done in rodents. Humans obviously have much bigger and more complex brains. But the studies do tell us that the brain doesn’t necessarily need outside stimulation to build its internal representations. As long as physical “experiences” are translated into electrical (or optical) currents, the brain is able to parse them as “real.”

In theory, both “inception” strategies could work in humans. The first, linking up defined neural circuits, is a relatively easier way towards programming artificial inputs than going for individual neurons. It’s the difference between stringing ready-made sentences into meaningful paragraphs versus typing individual letters—the latter option has a much larger risk of generating typos or complete gibberish.

However, with neuroscience rapidly increasing its focus on single-cell analytics, there’s no doubt that we’ll gain tons more insight into how individual cells work collectively. Deisseroth’s study is just a start. Maybe in some far-off future, typing in experiences with brain implants isn’t that crazy of an idea.

Image Credit: Luis Leitl / Shutterstock.com

Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the ...
