MIT’s New Voiceless Interface Can Read the Words in Your Head

The way we interact with the technology in our lives is getting progressively more seamless. As if typing search terms or addresses into your phone weren't easy enough, now you can just tell Siri to do the search or pull up the directions for you. Don't feel like getting off the couch to flick a switch, or want your house to be lit up by the time you pull into your driveway? Just tell your Echo home assistant what you want, and presto—lights on.

Engineers have been working on various types of brain-machine interfaces to take this seamlessness one step further, be it by measuring activity in the visual cortex to recreate images, or training an algorithm to “speak” for paralyzed patients based on their brain activation patterns.

Last week at the Association for Computing Machinery's Intelligent User Interfaces (IUI) conference in Tokyo, a team from the MIT Media Lab unveiled AlterEgo, a wearable interface that "reads" the words users are thinking—without the users having to say anything out loud.

If you thought Google Glass was awkward-looking, AlterEgo's not much sleeker; the tech consists of a white plastic strip that hooks over the ear and extends below the jaw, with an additional attachment placed just under the wearer's mouth. The strip contains electrodes that pick up the neuromuscular signals generated when the user thinks of a certain word, silently "saying" it inside his or her head. A machine learning system then interprets the signals and identifies which word the user had in mind—and, amazingly, it does so correctly 92 percent of the time.

Arnav Kapur, a graduate student who led AlterEgo’s development, said, “The motivation for this was to build an IA device—an intelligence-augmentation device. Our idea was: Could we have a computing platform that’s more internal, that melds human and machine in some ways and that feels like an internal extension of our own cognition?”

It’s Not All in Your Head

Who knew your face makes specific, tiny muscle movements when you think? After all, isn't part of the appeal of thinking that no one but you can know what's in your head?

It turns out our bodies prepare for physical speech even when we don't say anything out loud, and that preparation extends all the way to the muscles of the face and jaw, which give off faint myoelectric signals corresponding to the words we're about to say.

To figure out which areas of our faces give off the strongest neuromuscular signals related to speech, the MIT team had test subjects think of and silently say (also called “subvocalize”) a sequence of words four times, with a group of 16 electrodes placed on different parts of subjects’ faces each time.
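The team's analysis code isn't published with the article, but as a hedged sketch of how one might score candidate electrode sites, the snippet below rates each of the 16 channels by how well a simple classifier can predict the subvocalized word from that channel alone (the feature choices and function names are illustrative assumptions, not the researchers' method):

```python
# Hypothetical sketch (not the MIT team's code): score each of the 16
# candidate electrode sites by how well a simple classifier can predict
# the subvocalized word from that channel alone.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def rank_channels(signals, labels, n_channels=16):
    """signals: (n_trials, n_channels, n_samples) raw myoelectric windows.
    labels:  (n_trials,) which word was silently 'said' on each trial."""
    ranking = []
    for ch in range(n_channels):
        x = signals[:, ch, :]
        # Crude per-trial features: mean, spread, and peak amplitude.
        feats = np.stack([x.mean(1), x.std(1), np.abs(x).max(1)], axis=1)
        acc = cross_val_score(LogisticRegression(max_iter=1000),
                              feats, labels, cv=4).mean()
        ranking.append((acc, ch))
    # Channels with the highest cross-validated accuracy come first.
    return sorted(ranking, reverse=True)
```

Ranking single channels this way is a naive stand-in for whatever selection procedure the researchers actually used; it simply makes the idea of "strongest signal locations" concrete.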

Analysis of the resulting data showed that signals from seven specific electrode locations were best at deciphering subvocalized words. The team fed the data to a neural network, which learned to associate particular words with the signal patterns AlterEgo had picked up.
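The article doesn't describe the network itself, so purely as an illustrative sketch (the layer sizes, window length, and 20-word vocabulary cap are all placeholders, not the published AlterEgo model), a minimal PyTorch classifier over seven-channel signal windows might look like this:

```python
# Hypothetical sketch of a small 1D-CNN word classifier over 7-channel
# myoelectric windows; placeholder architecture, not the AlterEgo model.
import torch
import torch.nn as nn

VOCAB_SIZE = 20   # early AlterEgo trials used roughly 20 candidate words

class SubvocalNet(nn.Module):
    def __init__(self, n_channels=7, n_words=VOCAB_SIZE):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # collapse the time axis
        )
        self.classifier = nn.Linear(64, n_words)

    def forward(self, x):
        # x: (batch, 7 channels, n_samples) window of myoelectric signal
        return self.classifier(self.features(x).squeeze(-1))

model = SubvocalNet()
logits = model(torch.randn(8, 7, 256))  # 8 fake windows, 256 samples each
print(logits.shape)                     # torch.Size([8, 20])
```

The adaptive average pool collapses the time axis, which keeps the sketch indifferent to window length; the team's actual preprocessing and architecture may differ substantially.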

More Than Words

Thus far, the system’s abilities are limited to fairly straightforward words; the researchers used simple math problems and chess moves to collect initial data, with the range of users’ vocabularies limited to about 20 possible words. So while its proof of concept is pretty amazing, AlterEgo has a ways to go before it will be able to make out all your thoughts. The tech’s developers are aiming to expand its capabilities, though, and their future work will focus on collecting data for more complex words and conversations.

What’s It For?

While technologies like AlterEgo can bring convenience to our lives, we should stop and ask ourselves how much intrusiveness we’re willing to allow in exchange for just that—convenience, as opposed to need. Do I need to have electrodes read my thoughts while I’m, say, grocery shopping in order to get the best deals, or save the most time? Or can I just read price tags and walk a little faster?

When discussing the usefulness of the technology, Pattie Maes, a professor of media arts and sciences at MIT and Kapur’s thesis advisor, mentioned the inconvenience of having to take out your phone and look something up during a conversation. “My students and I have been experimenting with new form factors and new types of experience that enable people to still benefit from all the wonderful knowledge and services that these devices give us, but do it in a way that lets them remain in the present,” she said.

Thad Starner is a professor at Georgia Tech’s College of Computing. He wasn’t involved in AlterEgo’s creation, but he’s done a lot of work in wearable tech and was closely involved with Google Glass. Starner had some ideas about more utilitarian applications for AlterEgo, pointing out that in high-noise environments, such as on an airport’s tarmac, on the flight deck of an aircraft carrier, or in power plants or printing presses, the system would “be great to communicate with voice in an environment where you normally wouldn’t be able to.”

Starner added, "This is a system that would make sense, especially because oftentimes in these types of situations people are already wearing protective gear. For instance, if you're a fighter pilot, or if you're a firefighter, you're already wearing these masks." He also suggested the tech could be useful for special operations forces and for people with disabilities.

Gearing research on voiceless interfaces like AlterEgo toward these practical purposes would likely build support for the tech while taming fears of Orwellian mind-reading and invasions of mental privacy. It's a conversation that will only get louder, inside engineers' heads and out, as the field advances.

Image Credit: Lorrie Lejeune / MIT

Vanessa Bates Ramirez
Vanessa is senior editor of Singularity Hub. She's interested in biotechnology and genetic engineering, the nitty-gritty of the renewable energy transition, the roles technology and science play in geopolitics and international development, and countless other topics.