A Highway to Smell: How Scientists Used Light to Incept Smell in Mice

I was on a panel a few weeks ago and realized I forgot to turn off the oven. Utterly mortified, I told my Zoom attendees that I had to save my lasagna that was likely burning into a smoky crisp. One chuckled and replied “Well I’m sure it smells great—about time we get tech so we can smell it too, no?”

While we can’t yet send the smell of a fresh-cooked cheesy lasagna as data through the interwebs, a new study in Science with mice suggests that it’s not a totally impossible idea. If we can understand how the brain processes individual scents as electrical information, it might be possible to reverse-engineer the smell of bubbling lasagna and deliver it straight to your brain. Even more mind-blowing, it might be possible to make you smell things that don’t exist in nature and aren’t really there—smell-inception, so to speak.

But just to ground you in reality, for now, it’s only possible for a mouse.

The main stumbling block is that we’re still not quite sure how individual smells activate the entire neural highway from nose to brain. Scientists know that smells are “translated” into electrical activity as we sniff them into our nostrils, and are then processed through a vast number of middlemen called glomeruli, nestled in the olfactory bulb, before they’re passed to the brain’s higher “smelling” regions so we consciously perceive them. But so far, the exact rules behind this process have remained mysterious.

This month, a study led by Dr. Dmitry Rinberg at New York University, in collaboration with a team in Rovereto, Italy, took a stab at cracking the olfactory code. In transgenic mice with light-activated neurons, the team used targeted light beams to activate their glomeruli in space and time, like tapping piano keys to compose a melody.

In this way, they were able to incept a wholly artificial smell into mice. What’s more, manipulating the glomeruli “piano keys”—that is, their precise activity patterns—revealed sequences that are critical for our ability to smell the world, and in turn, drive our behavior.

“Our results identify for the first time a code for how the brain converts sensory information into perception of something, in this case an odor,” said Rinberg. “This puts us closer to answering the longstanding question in our field of how the brain extracts sensory information to evoke behavior.”

The Highways to Smell

Similar to vision, smells tickle our brains based on a code. As you whiff the smell of fresh coffee or delicate wine, individual scent molecules grab onto receptors lining the inside of your nose.

These receptors are picky: they’ll only translate the chemical into an electrical signal if they’re the right fit. In this way, individual smells—no matter how complex within a soup of scents—are parsed into highways to reach glomeruli, or little bulbs of neural processors at the start of the brain’s olfactory regions. Here, the scents are further multiplexed into even more complex scents and sent to higher-level areas, allowing us to distinguish between, say, a lager and a stout.

A key challenge for understanding or replicating smell, the authors write, is figuring out which bits of neural data traveling from nose to brain are essential for inducing a conscious perception. If each smell is a song, distinguished from others by its pattern in space and time, then our glomeruli are the notes that make up every melody.

The problem is that all glomeruli aren’t created equal in the chain of neural data command. Similar to a piano melody, a single wrong note may not ruin the entire song—or in the case of glomeruli, the perception of a smell. Yet parsing which ones are more important in time and space has remained a brain wiring mystery.

Light-Incepted Smell

To untangle the web of smell, the authors took to optogenetics, the technology that allows you to activate neurons with light.

In mice with light-sensitive neurons, the team activated multiple glomeruli with a specific pattern, which generated a hallucinatory sense of a particular scent—even if it wasn’t there and didn’t exist in nature.

Of course, it’s hard to ask a mouse if it actually smelled something, so the team took a roundabout approach. They trained the mice so that the animals only licked a water spout when they “smelled” the artificial odor, generated by a light pattern and “unlikely to correspond to specific known chemical odorants” in nature, the team explained.

“I’ll be honest with you, I have no idea if it stinks [or] is pleasant” for the mouse, Rinberg said.

Yet the light pattern, and subsequent glomeruli activation, were sufficient to incept a distinctive smell into mice, so that they were able to distinguish between the incepted smell and other synthetic odors. When the scientist puppeteers slowly reduced light intensity, the mice also seemed to lose interest in licking the water spout, suggesting they no longer “smelled” the artificial scent.

Next, the team played around with the light probes, so that they hit each glomerulus with a slightly different timing and spatial pattern—think changing light strobes from a disco ball, but more systematic and controlled.

We tried “hundreds of different combinations,” said Rinberg, to tease out what really matters to our brain’s olfactory perception.

To make better sense of the results, the team combined all their data into a mathematical model of odor perception. The model “allows us to template match,” the team explained, meaning that new activity sequences are compared against learned “scent” sequences, or templates.
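To get a feel for what template matching means here, consider this minimal sketch in Python. It is purely illustrative, not the authors’ actual model: each hypothetical “odor template” is a small matrix of glomerular activations (rows for glomeruli, columns for time bins), and a new pattern is assigned to whichever template it correlates with best.

```python
import numpy as np

# Illustrative sketch only: "templates" and the correlation-based matcher
# are assumptions for this example, not the study's actual model.

def template_match(pattern, templates):
    """Return the name of the template best correlated with `pattern`."""
    best_name, best_score = None, -np.inf
    for name, template in templates.items():
        # Pearson correlation between the flattened activity matrices
        score = np.corrcoef(pattern.ravel(), template.ravel())[0, 1]
        if score > best_score:
            best_name, best_score = name, score
    return best_name

# Toy example: two invented "odors" across 4 glomeruli and 3 time bins
templates = {
    "odor_A": np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [0, 0, 0]], float),
    "odor_B": np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0], [0, 0, 0]], float),
}

# A noisy version of odor_A should still match its template
noisy = templates["odor_A"] + 0.1 * np.random.default_rng(0).normal(size=(4, 3))
print(template_match(noisy, templates))  # prints odor_A
```

The point of the toy: even with jittered activity, the learned template acts as a reference melody that the brain—or a model—can recognize.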

The experiments led to two main findings. First, tinkering with the earliest-activated glomeruli disrupted odor perception the most. This, explained Rinberg, mirrors the primacy effect in nature: animals need to size up friend versus foe from the very first instants of sensory input.

Second, based on the computational model, messing with glomeruli activation in both time and space seems to disrupt smell perception linearly. That is, the more you change the level of activation, the more it hinders perception—and what matters most is a glomerulus’s timing relative to its neighbors, rather than relative to the sniff itself.

That’s good news for the smell-o-vision enthusiasts. “That linearity is kind of surprising because, in neuroscience, we’re very used to a lot of nonlinear effects,” said Dr. Justus Verhagen at Yale University, who was not involved in the study. While linearity doesn’t mean the olfactory code is easily solvable, it does provide a framework for neuroscientists to further work on and potentially more easily understand.

Cracking Perception

Overall, the study shows that mammalian brains have a very rigorous way of processing smell. Even if a certain odor triggers a complex pattern of activity from the nose to glomeruli and further up the brain, it seems that the earliest input—that is, the first few activated glomeruli—is most critical for the perception of smell.

For a neural code, that’s probably as simple as it gets.

For now, the study doesn’t touch upon the neural highways inside the brain that lead to conscious perception of smell—an obvious next question. But the results suggest we could play with lower-level “scent” piano keys, which, if tapped in the right order, can generate an artificial smell. For now, it’s just in mice. But as neural implants and non-invasive neurostimulation devices become more prevalent, it’s a step toward one day hacking perception—so we can see, smell, feel, or hear things that aren’t really there, and perhaps rescue these senses from injury or aging.

Image Credit: Nick Fewings on Unsplash

Shelly Fan (https://neurofantastic.com/)
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.