Meta Built an AI That Can Guess the Words You’re Hearing by Decoding Your Brainwaves

Being able to decode brainwaves could give patients who have lost the ability to speak a way to communicate again, and could ultimately provide novel ways for humans to interact with computers. Now Meta researchers have shown they can tell what words someone is hearing using recordings from non-invasive brain scans.

Our ability to probe human brain activity has improved significantly in recent decades as scientists have developed a variety of brain-computer interface (BCI) technologies that can provide a window into our thoughts and intentions.

The most impressive results have come from invasive recording devices, whose electrodes are implanted directly into the brain’s gray matter, paired with AI that learns to interpret the resulting signals. In recent years, this has made it possible to decode complete sentences from someone’s neural activity with 97 percent accuracy, and to translate attempted handwriting movements directly into text at speeds comparable to texting.

But having to implant electrodes into someone’s brain has obvious downsides. These risky procedures are only medically justifiable for patients who require brain recording to help resolve other medical issues, such as epilepsy. And neural probes degrade over time, which raises the prospect of having to regularly replace them.

That’s why researchers at Meta’s AI research division decided to investigate whether they could achieve similar goals without requiring dangerous brain surgery. In a paper published on the pre-print server arXiv, the team reported that they’ve developed an AI system that can predict what words someone is listening to based on brain activity recorded using non-invasive brain-computer interfaces.

“It’s obviously extremely invasive to put an electrode inside someone’s brain,” Jean Remi King, a research scientist at the Facebook Artificial Intelligence Research (FAIR) lab, told TIME. “So we wanted to try using noninvasive recordings of brain activity. And the goal was to build an AI system that can decode brain responses to spoken stories.”

The researchers relied on four pre-existing brain activity datasets collected from 169 people as they listened to recordings of people speaking. Each volunteer had been recorded using either magnetoencephalography (MEG) or electroencephalography (EEG), which use different kinds of sensors to pick up traces of the brain’s electrical activity from outside the skull.

Their approach involved splitting the brain and audio data into three-second snippets and feeding them into a neural network that looked for patterns connecting the two. After training the AI on many hours of this data, they tested it on previously unseen data.
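To make the idea concrete, matching two streams of data like this is often done with a contrastive objective: encode each brain snippet and each audio snippet into the same embedding space, then train so that matched pairs score higher than mismatched ones. The sketch below illustrates that general recipe in PyTorch; the encoder architectures, channel counts, and dimensions are illustrative assumptions, not the paper’s actual model.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SnippetEncoder(nn.Module):
    """Maps a 3-second snippet (brain or audio) to a unit-length embedding.
    Architecture is a placeholder, not the one used in the paper."""
    def __init__(self, in_channels, embed_dim=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(in_channels, 128, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),   # pool over the time axis
            nn.Flatten(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x):              # x: (batch, channels, time)
        return F.normalize(self.net(x), dim=-1)

brain_enc = SnippetEncoder(in_channels=273)  # sensor count is an assumption
audio_enc = SnippetEncoder(in_channels=1)    # mono audio waveform

def contrastive_loss(brain_snips, audio_snips, temperature=0.1):
    """CLIP-style objective: each brain snippet should score highest
    against the audio clip it was actually recorded alongside."""
    sims = brain_enc(brain_snips) @ audio_enc(audio_snips).T / temperature
    targets = torch.arange(sims.size(0))   # matched pairs sit on the diagonal
    return F.cross_entropy(sims, targets)
```

The appeal of this setup is that it never needs word labels during training: the temporal alignment between the recording and the audio provides the supervision for free.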

The system performed best on one of the MEG datasets, where it achieved a top-10 accuracy of 72.5 percent. That means that when it ranked the 10 words most likely to be linked to a given brainwave segment, the correct word was among them 72.5 percent of the time.

That might not sound great, but it’s important to remember that it was picking from a potential vocabulary of 793 words. The system scored 67.2 percent on the other MEG dataset, but fared less well on the EEG datasets, achieving top-10 accuracies of only 31.4 and 19.1 percent.
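For context, picking 10 words at random from a 793-word vocabulary would include the correct one only about 10/793, or roughly 1.3 percent, of the time, so even the weakest EEG result is well above chance. A minimal sketch of how such a top-10 score is computed (array names here are illustrative):

```python
import numpy as np

def top_k_accuracy(scores, true_ids, k=10):
    """scores: (n_segments, vocab_size) model score per candidate word.
    true_ids: (n_segments,) index of the word the listener actually heard."""
    top_k = np.argsort(scores, axis=1)[:, -k:]        # k highest-scoring words
    hits = (top_k == true_ids[:, None]).any(axis=1)   # true word among them?
    return hits.mean()

# Chance level with a 793-word vocabulary: 10 / 793 ≈ 1.3 percent.
```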

Clearly this is still a long way from a practical system, but it represents significant progress on a hard problem. Non-invasive BCIs have much worse signal-to-noise ratios, so deciphering neural activity this way is challenging, but success here could result in a far more widely applicable technology.

Not everyone is convinced it’s a solvable problem, though. Thomas Knopfel from Imperial College London told New Scientist that trying to probe thoughts using these non-invasive approaches was like “trying to stream an HD movie over old-fashioned analogue telephone modems,” and questioned whether such approaches will ever reach practical accuracy levels.

Companies like Elon Musk’s Neuralink are also betting that we’ll eventually get over our squeamishness around invasive approaches as the technology improves, opening the door to everyday people getting brain implants.

But the research from the team at Meta is at a very early stage, and there is plenty of scope for improvement. And the commercial opportunities awaiting anyone who can crack non-invasive brain scanning will likely provide plenty of motivation for trying.

Image Credit: Dung Tran from Pixabay

Edd Gent (http://www.eddgent.com/)
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.