Incepting Sight? This Brain Implant Lets Blind Patients “See” Letters

For most of us, “eyes” are synonymous with “sight”: whatever our eyes capture, we perceive.

Yet under the hood, eyes are only the first step in an informational relay that transmutes photons into understanding. Light-sensitive cells in the eyes capture our world in exquisite detail, converting photon signals into electrical ones. As these electrical pulses travel along the optic nerve into the visual cortex, the signals are transformed into increasingly complex percepts—from “seeing” lines to shapes to parts of an object to a full scene.

In a sense, our eyes are sophisticated cameras; the brain’s visual cortex runs the software that tells us what we’re seeing. Damage the cortex, and a person no longer “sees” the world, even with perfectly functioning eyes.

What about the reverse? If you directly program a scene into the visual cortex by electrically stimulating its neurons, are our biological cameras even necessary?

In a preliminary study presented at the Society for Neuroscience conference earlier this month, a team developed a visual prosthetic that does just that. The team used an array of electrodes implanted in the visual cortex to feed visual information directly into the brain—bypassing eyes damaged by age or disease.

By “drawing” letter-like shapes through sequential electrode activation, the team showed that blind patients could discern simple shapes without any prior training.

“Vision is our most important sense as humans. It takes up a quarter of our brain…and loss of sight is devastating,” said study author Dr. Michael S. Beauchamp at Baylor College of Medicine.

The dream is to build a visual cortical prosthesis to transmit images from a camera directly into the brain, he said.

Phosphene “Pixels”

Vision may seem mysterious, but fundamentally it’s about dedicated information highways and processors—each “pixel” in a scene is transmitted to its own set of downstream neurons in the visual cortex.

In other words, there’s a buddy relationship between each point of your visual field and the brain. The letter z, for example, activates adjacent neurons in the retina, which transmit the signals to similarly neighboring neurons in the cortex. Using a technology called receptive field mapping, neuroscientists can discern which set of cortical neurons corresponds to which particular area in your field of view.
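As a rough illustration (not code from the study), you can think of such a map as a simple lookup table pairing each electrode with the spot in the visual field its stimulation lights up. Everything below, from electrode IDs to coordinates, is invented:

```python
# A rough illustration, not code from the study: a receptive-field map as a
# lookup table pairing each electrode with the visual-field spot where its
# stimulation evokes a flash. All IDs and coordinates here are invented.

# Visual-field coordinates in degrees: (azimuth, elevation) relative to
# fixation; negative azimuth means left of where the patient is looking.
phosphene_map: dict[int, tuple[float, float]] = {
    0: (-4.0, 3.0),   # electrode 0 "draws" up and to the left
    1: (-1.5, 2.5),
    2: (1.0, 0.0),    # near the center of gaze
    3: (3.5, -2.0),   # down and to the right
}

def phosphene_location(electrode_id: int) -> tuple[float, float]:
    """Where in the visual field a given electrode's flash appears."""
    return phosphene_map[electrode_id]
```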

Decades ago, the pioneering neurosurgeon Dr. Wilder Penfield discovered a peculiar quirk of the visual cortex: zapping those neurons causes people to “see” a flash of light, called a phosphene, in a particular part of their field of vision.

Stimulating multiple areas of the visual cortex evokes multiple phosphenes. Because each area of the cortex corresponds to a different area in the visual field, scientists reasoned that simultaneously activating the set of neurons matching a shape could cause a person to “perceive” that same shape.

“The idea is very simple: a patient would have a camera mounted on a pair of eyeglasses, and the signal from the camera could be wirelessly transmitted into the visual cortex via a grid of electrodes,” said Beauchamp.

There’s just one problem: it doesn’t work.

From Static to Dynamic

The issue, explained Beauchamp, is that activating too many neurons at once results in cacophony.

An analogy is tracing letters on your hand. For example, if someone simultaneously touches multiple points on your palm that collectively make up the letter z, it’s close to impossible to decipher the letter based on touch alone.

This is what previous generations of visual cortical prosthetics tried to do, and patients just saw amorphous blobs of light, noted Beauchamp.

In contrast, dynamically drawing the same shape in a trajectory that matches “z” makes it easy to figure out the letter.

“What we’ve done is essentially the same idea…instead of tracing the letter on a patient’s palm, we traced it directly on their brain using electrical currents,” said Beauchamp.
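To make the contrast concrete, here is a minimal sketch, not the study’s actual software, of the two stimulation strategies. The stimulate() function and the electrode IDs are placeholders invented for illustration:

```python
import time

# Minimal sketch (not the study's software) contrasting the old "static"
# approach with the dynamic tracing Beauchamp describes. stimulate() and
# the electrode IDs below are invented placeholders.

def stimulate(electrode_id: int) -> None:
    """Stand-in for whatever hardware call pulses one electrode."""
    print(f"pulse electrode {electrode_id}")

# Electrodes whose phosphenes, taken together, outline a letter "z",
# listed in the order you'd draw the strokes.
Z_ELECTRODES = [0, 1, 2, 4, 6, 7, 8]

def static_flash(electrodes: list[int]) -> None:
    """Old approach: pulse every electrode together, with no delay
    between pulses. Patients perceive an amorphous blob of light."""
    for e in electrodes:
        stimulate(e)

def dynamic_trace(electrodes: list[int], dwell_s: float = 0.05) -> None:
    """New approach: pulse electrodes one after another along the letter's
    stroke order, like tracing the shape on someone's palm."""
    for e in electrodes:
        stimulate(e)
        time.sleep(dwell_s)  # a brief gap so the flashes read as motion

if __name__ == "__main__":
    dynamic_trace(Z_ELECTRODES)
```

The only difference between the two functions is timing, which is why the fix can live entirely in software.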

To verify their method, the team first worked with a sighted patient dubbed YBN who could provide feedback. YBN, who has epilepsy, already had electrodes implanted in the brain so doctors could track down the source of the seizures, and the team inserted a 24-electrode microarray into YBN’s visual cortex at the back of the head.

The team first figured out the buddy relationship between each electrode-implanted cortical region and its corresponding point in the field of view. They then dynamically traced four different letters using the electrodes and asked the patient to trace the pattern of phosphenes on a touchscreen.

“There was a striking correspondence between the predicted and actual letter shape percepts,” the team said.

When they challenged YBN to pick out one letter out of four possibilities based on the perceived phosphene pattern, the patient succeeded in 15 out of 23 trials—without any previous training.

“This is different from other paradigms like this, which require hundreds or thousands of training trials,” said Beauchamp.

It wasn’t just YBN—the team saw similar success rates in two other sighted epilepsy patients.

From Sighted to Blind

Encouraged by their success, the team then validated the technique in a blind patient. As before, the patient had an electrode array implanted in her visual cortex. The signals were transmitted wirelessly through a transmitter mounted on a baseball cap she wore.

They first zapped each electrode individually to figure out where the patient “saw” a corresponding phosphene: for example, upper left, middle of the visual field, or lower right. This created a phosphene map that allowed the team to design seven different stimulation trajectories that produce letter-like shapes. They then activated each electrode along a trajectory in sequence.
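A hedged sketch of that design step, with an invented map: given where each electrode’s phosphene lands, a desired letter stroke (a list of visual-field points) can be turned into a firing order by picking the electrode whose phosphene sits closest to each point.

```python
import math

# Hedged sketch of the trajectory-design step, assuming a phosphene map
# like the one measured above. All IDs and coordinates are invented.
phosphene_map = {
    0: (-4.0, 3.0), 1: (0.0, 3.0), 2: (4.0, 3.0),    # upper row of flashes
    3: (0.0, 0.0),                                    # middle of visual field
    4: (-4.0, -3.0), 5: (0.0, -3.0), 6: (4.0, -3.0),  # lower row
}

def nearest_electrode(point: tuple[float, float]) -> int:
    """Electrode whose phosphene lands closest to the desired point."""
    return min(phosphene_map, key=lambda e: math.dist(phosphene_map[e], point))

# A "z"-like stroke: across the top, diagonally down, across the bottom.
z_stroke = [(-4.0, 3.0), (0.0, 3.0), (4.0, 3.0),
            (0.0, 0.0),
            (-4.0, -3.0), (0.0, -3.0), (4.0, -3.0)]

firing_order = [nearest_electrode(p) for p in z_stroke]
print(firing_order)  # [0, 1, 2, 3, 4, 5, 6] with this invented map
```

Stepping through that firing order one electrode at a time, as in the tracing sketch earlier, would then “draw” the z as a moving point of light.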

“She was able to distinguish all of the different patterns and draw them on the screen,” said Beauchamp. When asked to pick out a shape from a group of five, the patient succeeded a striking 14 out of 15 tries.

Although the team doesn’t yet fully understand why their tracing technique works so well, they have a guess. “[It] taps into cortical mechanisms for processing visual motion,” which means it could leverage the same higher-level cortical areas that normally string stop-motion images into smooth videos, they explained. In this way, dynamic activation results in a more natural kind of cortical response.

“We’re very excited about this technique because we think it has great potential to improve [visual cortical prostheses],” said Beauchamp.

Technologies for writing information into the human cortex are evolving rapidly, from high-density electrode arrays to emerging approaches such as optogenetics and focused ultrasound. Because the tracing method is purely software-based, it could be adapted to whatever hardware is already implanted in the brain to restore visual function to the blind.

“If you look at the stars in the sky, you might [just] see a bunch of stars. But if someone traces out the constellations for you, it can be a lot easier to understand the form. And that’s what we’re trying to do for blind patients,” said Beauchamp.

Image Credit: Adrian Niederhaeuser / Shutterstock.com

Shelly Fan (https://neurofantastic.com/)
Dr. Shelly Xuelai Fan is a neuroscientist-turned-science-writer. She's fascinated with research about the brain, AI, longevity, biotech, and especially their intersection. As a digital nomad, she enjoys exploring new cultures, local foods, and the great outdoors.