Scientists Created a ‘Neural Decoder’ That Translates Brain Activity Into Speech

The idea of a mind-reading machine might freak a lot of people out, but a new device that can transform brain activity into speech could be the first step towards a lifeline for patients who have lost the use of their voice.

Finding ways to translate our thoughts into machine-readable signals is a booming area of research, as our ability to record brain waves steadily improves and machine learning makes the decoding process ever easier.

One of the most compelling use cases is helping those who have lost their voices due to injury or disease speak again. For a long time, the best we’ve been able to do on this front is the kind of device used by renowned physicist Stephen Hawking, in which the user selects letters or words on a screen using whatever muscles they can still control, managing just a few words per minute.

But now scientists at the University of California, San Francisco have demonstrated a way to translate signals recorded from the brain into broadly intelligible sentences.

The researchers, whose work was published in Nature last week, took a novel approach to solving the problem. Rather than trying to directly translate brain signals into audio, they used them as instructions to control movements in a simulated vocal tract before a synthesizer converted those movements into speech.

The study was carried out on epilepsy patients who had had electrodes implanted in their brains to monitor seizures. The researchers asked five volunteers to read several hundred sentences aloud while recording both the audio and the neural activity in brain regions that control the movements involved in speech.

Training the system was a multi-stage process. First, the researchers processed the audio using a previously published model that infers the physical movements of the lips, tongue, and jaw used to produce the sounds.

A first neural network was then trained to map the patients’ neural recordings to the movements inferred by this model, effectively creating a virtual vocal tract for each participant that could translate their brain signals into the movements involved in speech. A second neural network was then trained on those movements and the spoken audio to learn what sounds each set of movements corresponded to.
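
To make the two-stage design concrete, here’s a minimal sketch in PyTorch. The study’s actual models were stacked bidirectional recurrent networks trained on high-density brain-surface recordings; the channel counts, feature dimensions, and layer sizes below are illustrative assumptions rather than the paper’s exact configuration.

```python
import torch
import torch.nn as nn

# Illustrative dimensions (assumptions, not the study's exact values)
N_ELECTRODES = 256   # implanted electrode channels
N_KINEMATIC = 33     # articulatory features: lips, tongue, jaw, etc.
N_ACOUSTIC = 32      # acoustic features fed to a speech synthesizer

class BrainToArticulation(nn.Module):
    """Stage 1: map neural recordings to vocal-tract movements."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_ELECTRODES, 128, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(256, N_KINEMATIC)

    def forward(self, ecog):          # (batch, time, electrodes)
        h, _ = self.rnn(ecog)
        return self.out(h)            # (batch, time, kinematic features)

class ArticulationToSound(nn.Module):
    """Stage 2: map vocal-tract movements to acoustic features."""
    def __init__(self):
        super().__init__()
        self.rnn = nn.LSTM(N_KINEMATIC, 128, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.out = nn.Linear(256, N_ACOUSTIC)

    def forward(self, kinematics):    # (batch, time, kinematic features)
        h, _ = self.rnn(kinematics)
        return self.out(h)            # (batch, time, acoustic features)

# Training targets for stage 1 come from the separately trained model that
# infers articulator movements from the recorded audio; stage 2 targets are
# acoustic features of that same audio.
stage1, stage2 = BrainToArticulation(), ArticulationToSound()
loss_fn = nn.MSELoss()

def decode(ecog):
    """Full pipeline: brain signals -> movements -> acoustic features.
    A vocoder (not shown) would render the acoustic features as audio."""
    return stage2(stage1(ecog))
```

The key design choice this sketch captures is the intermediate articulatory representation: rather than asking one network to jump straight from brain activity to sound, the problem is split into two better-constrained mappings.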

The result is a neural decoder that can take continuous brain signals, translate them into physical movements in a virtual vocal tract, and then decode these movements to create synthesized sentences that broadly match the spoken audio.

The system isn’t perfect. To test its accuracy, the researchers had crowdsourced workers on Amazon Mechanical Turk transcribe the output. Even when given a pool of just 25 words to choose from, the listeners transcribed the full sentence correctly less than half the time. Around 70 percent of words were intelligible, though, which is considerably better than the zero percent that the system’s intended users can currently get across.
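
As a toy illustration of that listening test, the snippet below scores a single transcript against its reference sentence at both the sentence and word level. The example sentences and the position-by-position scoring are simplifications for illustration; the study’s listening protocol and error metrics were more involved.

```python
def word_accuracy(reference: str, transcript: str) -> float:
    """Fraction of reference words the listener reproduced, matched by position."""
    ref, hyp = reference.lower().split(), transcript.lower().split()
    matches = sum(r == h for r, h in zip(ref, hyp))
    return matches / len(ref)

# Hypothetical reference/transcript pair, not taken from the study
reference = "the ship was torn apart on the rocks"
transcript = "the ship was torn apart on the shore"

print(f"exact sentence match: {reference == transcript}")                  # False
print(f"word-level accuracy: {word_accuracy(reference, transcript):.0%}")  # 88%
```

This distinction matters for the results above: a sentence counts as a failure if even one word is wrong, which is why sentence-level accuracy can sit below 50 percent while roughly 70 percent of individual words come through.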

The biggest limitation at the moment is that the tests were done on people who aren’t speech-impaired. The team did have one participant mime sentences by moving their mouth without speaking, and showed the decoder could still synthesize speech, though less accurately.

But it’s not clear whether it would work for those who have lost the ability to move their vocal tract, or never had it in the first place. While this approach has produced the most impressive results to date, other recent approaches that attempt to decode speech directly from the auditory cortex may prove more useful in the long run.

Another stumbling block is that the approach requires invasive surgery to implant electrodes in the brain. Experts agree that external recordings of brain signals from EEG headsets simply can’t capture detailed enough signals for this kind of application.

But there are some well-funded startups, like Kernel and Neuralink, working on a new generation of more seamless and flexible brain-machine interfaces. These are aimed initially at medical applications, but the long-term goal is to turn them into consumer devices, so it may not be long before mind-reading machines are a reality.

Image Credit: adike / Shutterstock.com

Edd Gent
http://www.eddgent.com/
I am a freelance science and technology writer based in Bangalore, India. My main areas of interest are engineering, computing and biology, with a particular focus on the intersections between the three.