This Mind-Reading Cap Can Translate Thoughts to Text Thanks to AI

Wearing an electrode-studded cap bristling with wires, a young man silently reads a sentence in his head. Moments later, a Siri-like voice breaks in, attempting to translate his thoughts into text: “Yes, I’d like a bowl of chicken soup, please.” It’s the latest example of computers translating a person’s thoughts into words and sentences.

Researchers have previously translated brain activity into text using surgically implanted electrodes or bulky, expensive machines. The new approach, presented at this week’s NeurIPS conference by researchers from the University of Technology Sydney, is impressive for its use of a non-invasive EEG cap and its potential to generalize beyond one or two people.

The team built an AI model called DeWave, trained on brain activity and language, and linked it to a large language model, the technology behind ChatGPT, to help convert brain activity into words. In a preprint posted on arXiv, the model beat previous top marks for EEG thought-to-text translation with an accuracy of roughly 40 percent. Chin-Teng Lin, corresponding author on the paper, told MSN they’ve more recently upped the accuracy to 60 percent. The results are still being peer-reviewed.

Though there’s a long way to go on reliability, the work shows progress in non-invasive methods of reading and translating thoughts into language. The team believes it could give voice to people who can no longer communicate due to injury or disease, or let them direct machines, like walking robots or robotic arms, with thoughts alone.

Guess What I’m Thinking

You may remember headlines about “mind-reading” machines translating thoughts to text at high speed. That’s because such efforts are hardly new.

Earlier this year, Stanford researchers described work with a patient, Pat Bennett, who’d lost the ability to speak due to ALS. After the team implanted four sensors into two parts of her brain and extensively trained the system, Bennett could communicate by having her thoughts converted to text at a speed of 62 words per minute, an improvement on the same team’s 2021 record of 18 words per minute.

It’s an amazing result, but brain implants can be risky. Scientists would love to get a similar outcome without surgery.

In another study this year, researchers at the University of Texas at Austin turned to a brain-scanning technology called fMRI. In the study, patients had to lie very still in a machine recording the blood flow in their brains as they listened to stories. After using this data to train an algorithm, based in part on ChatGPT’s ancestor GPT-1, the team used the system to guess what participants were hearing based on their brain activity.

The system’s accuracy wasn’t perfect, it required heavy customization for each participant, and fMRI machines are bulky and expensive. Still, the study served as a proof of concept that thoughts can be decoded non-invasively, and the latest in AI can help make it happen.

The Sorting Hat

In Harry Potter, students are sorted into school houses by a magical hat that reads minds. We muggles resort to funny-looking swim caps studded with wires and electrodes. Known as electroencephalography (EEG) caps, these devices read and record the electrical activity in our brains. Unlike brain implants, they require no surgery, but they’re considerably less accurate. The challenge, then, is to separate signal from noise to get a useful result.

In the new study, the team used two datasets containing eye-tracking and EEG recordings from 12 and 18 people, respectively, as they read text. Eye-tracking data helped the system slice up brain activity by word: when a person’s eyes flit from one word to the next, there should be a corresponding break between the brain activity associated with that word and the activity associated with the next.
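To make the idea concrete, here’s a minimal sketch of gaze-based segmentation, assuming you already have fixation timestamps from an eye tracker aligned to the EEG clock. The function name, channel count, and timing values are illustrative, not taken from the paper.

```python
import numpy as np

def segment_eeg_by_fixations(eeg, sfreq, fixations):
    """Slice a continuous EEG recording into per-word windows.

    eeg       : array of shape (n_channels, n_samples)
    sfreq     : sampling rate in Hz
    fixations : list of (start_sec, end_sec) gaze fixations, one per word
    """
    segments = []
    for start_sec, end_sec in fixations:
        start, end = int(start_sec * sfreq), int(end_sec * sfreq)
        segments.append(eeg[:, start:end])  # brain activity while reading one word
    return segments

# Example: 64-channel EEG at 500 Hz with three fixations from an eye tracker
eeg = np.random.randn(64, 5000)
windows = segment_eeg_by_fixations(eeg, 500, [(0.2, 0.45), (0.5, 0.8), (0.9, 1.3)])
print([w.shape for w in windows])  # one (64, n_samples) window per word
```

Each window can then be featurized and treated as the neural signature of a single word.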

They then trained DeWave on this data, and over time the algorithm learned to associate particular brain wave patterns with words. Finally, with the help of a pre-trained large language model called BART, fine-tuned to interpret DeWave’s unique output, the algorithm’s brain-wave-to-word associations were translated back into sentences.
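The article doesn’t spell out the architecture, but in broad strokes a DeWave-style pipeline encodes EEG windows into feature vectors, quantizes them against a learned codebook of discrete codes, and feeds those codes to BART in place of word embeddings. The sketch below is a toy version under those assumptions; the GRU encoder, codebook size, and random stand-in data are all hypothetical, not the authors’ implementation.

```python
import torch
import torch.nn as nn
from transformers import BartForConditionalGeneration, BartTokenizer

class DiscreteCodex(nn.Module):
    """Vector-quantize continuous EEG features: snap each vector to its nearest
    entry in a learned codebook, turning brain activity into discrete tokens."""
    def __init__(self, num_codes=512, dim=768):
        super().__init__()
        self.codebook = nn.Embedding(num_codes, dim)

    def forward(self, feats):  # feats: (batch, seq_len, dim)
        dists = torch.cdist(feats, self.codebook.weight.unsqueeze(0))
        codes = dists.argmin(dim=-1)  # nearest code index per word window
        quant = self.codebook(codes)
        # Straight-through estimator: forward pass uses the codes,
        # gradients bypass the non-differentiable argmin.
        quant = feats + (quant - feats).detach()
        return quant, codes

# Toy pipeline: per-word EEG windows -> recurrent encoder -> codex -> BART
encoder = nn.GRU(input_size=64, hidden_size=768, batch_first=True)  # 64 EEG channels
codex = DiscreteCodex()
tokenizer = BartTokenizer.from_pretrained("facebook/bart-base")
bart = BartForConditionalGeneration.from_pretrained("facebook/bart-base")

eeg_windows = torch.randn(1, 8, 64)  # 8 word-level feature vectors (stand-in data)
hidden, _ = encoder(eeg_windows)
quantized, _ = codex(hidden)

# Training step: quantized EEG codes stand in for BART's input word embeddings
labels = tokenizer("yes i would like a bowl of chicken soup", return_tensors="pt").input_ids
loss = bart(inputs_embeds=quantized, labels=labels).loss
loss.backward()  # a full VQ setup would also add codebook and commitment losses
```

At inference, the same quantized embeddings would pass through BART’s decoder to generate free text.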

In tests, DeWave outperformed the category’s top algorithms at translating both raw brain waves and brain waves sliced up by word. The word-level results were more accurate but still lagged well behind translation between languages, like English and French, and speech recognition. The team also found the algorithm performed similarly across participants, whereas prior experiments have tended to report results for one person or require extreme customization.
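A note on scoring: EEG-to-text work is typically evaluated with machine-translation metrics such as BLEU and ROUGE, so the accuracy figures above are plausibly BLEU-style overlap scores rather than exact matches; that’s an assumption, since the article doesn’t name the metric. A quick illustration with NLTK:

```python
# BLEU-1 measures word overlap between a decoded sentence and the reference;
# a score near 0.4 would roughly match the "40 percent" figure reported for
# DeWave (assuming BLEU is the metric, which this article does not confirm).
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

reference = "yes i would like a bowl of chicken soup please".split()
decoded = "yes i want a bowl of soup please".split()

smooth = SmoothingFunction().method1
score = sentence_bleu([reference], decoded, weights=(1.0,), smoothing_function=smooth)
print(f"BLEU-1: {score:.2f}")
```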

The team says the research is more proof that large language models can help advance brain-to-text systems. Although they used a relatively dated model in the official study, in supplementary material they included results from larger models, including Meta’s original Llama. Interestingly, the larger models didn’t improve results much.

“This underscores the complexity of the problem and the challenges of bridging brain activities with LLMs,” the authors wrote, calling for more nuanced research in the future. Still, the team hopes they can push their own system further, perhaps up to 90 percent accuracy.

Outside researchers see the work as a sign of progress in the field.

“People have been wanting to turn EEG into text for a long time and the team’s model is showing a remarkable amount of correctness,” the University of Sydney’s Craig Jin told MSN. “Several years ago the conversions from EEG to text were complete and utter nonsense.”

Image Credit: University of Technology Sydney

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.