Scientists Connect Brain to a Basic Tablet—Paralyzed Patient Googles With Ease


For patient T6, 2014 was a happy year.

That was the year she learned to control a Nexus tablet with her brain waves, taking her quality of life from 1980s-era DOS to a modern Android OS.

A brunette lady in her early 50s, patient T6 suffers from amyotrophic lateral sclerosis (also known as Lou Gehrig’s disease), which causes progressive motor neuron damage. Mostly paralyzed from the neck down, T6 retains her sharp wit, love for red lipstick and miraculous green thumb. What she didn’t have, until recently, was the ability to communicate with the outside world.

Brain-Machine Interfaces

Like T6, millions of people worldwide have severe paralysis from spinal cord injury, stroke or neurodegenerative disease, which prevents them from speaking, writing or otherwise communicating their thoughts and intentions to their loved ones.

The field of brain-machine interfaces blossomed nearly two decades ago in an effort to develop assistive devices to help these “locked-in” people. And the results have been fantastic: eye- and head-tracking devices have allowed eye movements to act as an output signal for controlling a mouse cursor on a computer screen. In some cases, the user can also perform a click by staring intently at a single spot, a technique known in the field as “dwell time.”
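
To make the dwell-time idea concrete, here is a minimal sketch of how a dwell-based click might be detected from a stream of gaze coordinates. It is an illustration only: the DwellClicker class, the radius and the hold duration are assumptions made for the example, not parameters from any real eye-tracker.

    import math
    import time

    # Hypothetical dwell-click detector: fires a "click" when the gaze point
    # stays within a small radius for long enough. Radius and duration are
    # arbitrary example values, not settings from a real product.
    DWELL_RADIUS_PX = 30     # how far the gaze may wander and still count as dwelling
    DWELL_DURATION_S = 1.0   # how long the gaze must hold before a click fires

    class DwellClicker:
        def __init__(self, radius=DWELL_RADIUS_PX, duration=DWELL_DURATION_S):
            self.radius = radius
            self.duration = duration
            self.anchor = None       # (x, y) where the current dwell started
            self.anchor_time = None  # timestamp when the dwell started

        def update(self, x, y, now=None):
            """Feed one gaze sample; return the (x, y) of a click, or None."""
            now = time.monotonic() if now is None else now
            if self.anchor is None or math.dist(self.anchor, (x, y)) > self.radius:
                # Gaze moved away: restart the dwell timer at the new location.
                self.anchor, self.anchor_time = (x, y), now
                return None
            if now - self.anchor_time >= self.duration:
                click_at = self.anchor
                self.anchor = None   # reset so one dwell produces one click
                return click_at
            return None

Feeding update() one gaze sample per tracker frame is enough to reproduce the basic behavior: hold your gaze still and, after the dwell period elapses, a click is emitted at that spot.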

Yet despite a deluge of promising devices, eye-tracking remains imprecise and terribly tiring for users’ eyes. Most systems also require custom hardware, which jacks up the price of admission and limits the technology to a lucky few.

“We really wanted to move these assisted technologies towards clinical feasibility,” said Dr. Paul Nuyujukian, a neuroengineer and physician from Stanford University, in a talk at the 2015 Society for Neuroscience annual conference that took place this week in Chicago.

That’s where the idea of neural prostheses came in, Nuyujukian said.

In contrast to eye-trackers, neural prostheses directly interface the brain with computers, in essence cutting out the middleman — the sensory organs that we normally use to interact with our environment.

Instead, a baby-aspirin-sized microelectrode array is implanted directly into the brain, where neural signals associated with intent can be decoded by sophisticated algorithms in real time and used to control a mouse cursor.

It’s a technology that’s leaps and bounds ahead of eye-trackers, but it is still prohibitively expensive and hard to use.

Nuyujukian’s team, together with patient T6, set out to tackle this problem.

A Nexus to Nexus 9

Two years ago, patient T6 volunteered for the BrainGate clinical trials and had a 100-channel electrode array implanted into the left side of her brain in regions responsible for movement.

At the time, the Stanford arm of the trial was working on a prototype prosthetic device to help paralyzed patients type out words on a custom-designed keyboard simply by thinking about the words they want to spell.

The prototype worked like this: the implanted electrodes recorded her brain activity as she looked at a target letter on the screen and passed it on to the neuroprosthesis, which interpreted the signals and translated them into continuous control of cursor movements and clicks.
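
The article doesn’t spell out the decoding algorithm itself, but systems of this kind typically map binned firing rates from the electrode array to a cursor velocity (plus a click signal) using a decoder fit during calibration sessions. The sketch below is a deliberately simplified linear decoder with random stand-in weights; it illustrates the general idea and is not BrainGate’s actual algorithm.

    import numpy as np

    # Toy linear decoder: maps a vector of per-channel firing rates to a 2-D
    # cursor velocity plus a scalar "click" score. In a real system the weights
    # are fit from calibration data; here they are random placeholders.
    N_CHANNELS = 100                 # matches the 100-channel array mentioned below
    rng = np.random.default_rng(0)
    W_velocity = rng.normal(size=(2, N_CHANNELS)) * 0.01   # rates -> (vx, vy)
    w_click = rng.normal(size=N_CHANNELS) * 0.01           # rates -> click score
    CLICK_THRESHOLD = 0.5                                   # arbitrary example value

    def decode(firing_rates, cursor_xy, dt=0.02):
        """One decode step: update the cursor position and decide whether to click."""
        rates = np.asarray(firing_rates, dtype=float)
        velocity = W_velocity @ rates              # (vx, vy) in pixels per second
        new_xy = cursor_xy + velocity * dt         # integrate velocity over the time step
        clicked = float(w_click @ rates) > CLICK_THRESHOLD
        return new_xy, clicked

    # Example: one 20 ms update with simulated spike counts.
    rates = rng.poisson(lam=10, size=N_CHANNELS)
    position, clicked = decode(rates, np.array([512.0, 384.0]))

Running a step like decode() every few tens of milliseconds is what produces smooth, continuous cursor control; the hard part in practice is fitting and recalibrating the weights, which this sketch glosses over.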

In this way, T6 could type out her thoughts using the interface, in a way similar to an elderly technophobe reluctantly tapping out messages with a single inflexible finger.

The black-and-white setup was state-of-the-art in terms of response and accuracy. But the process was painfully slow, and even with extensive training, T6 often had to move her eyes to the delete button to correct her errors.

What the field needed, according to Nuyujukian, was a flexible, customizable and affordable device that didn’t physically connect to a computer via electrodes. “We also wanted a user interface that didn’t look like it was designed in the ’80s,” he added.

The team’s breakthrough moment came when they realized their point-and-click cursor system was similar to finger taps on a touchscreen, something most of us do every day.

“We were going to design our own touchscreen hardware, but then realized the best ones were already on the market,” laughed Nuyujukian, “so we went on Amazon instead and bought a Nexus 9 tablet.”

The team took their existing setup and reworked it so that patient T6’s brain waves could control where she tapped on the Nexus touchscreen. It was a surprisingly easy modification: the neuroprosthetic communicated with the tablet through existing Bluetooth protocols, and the system was up and running in less than a year.

“Basically the tablet recognized the prosthetic as a wireless Bluetooth mouse,” explained Nuyujukian. “We pointed her to a web browser app and told her to have fun.”
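
Because the tablet simply sees a standard wireless mouse, most of the work on the tablet side is already done; conceptually, the prosthesis only has to package its decoded movements as ordinary Bluetooth HID mouse reports. Here is a rough sketch of what composing a 3-byte boot-protocol mouse report could look like. The send_report callback, and everything about pairing and HID descriptors, are placeholders assumed for the example rather than details of the published system.

    import struct

    def mouse_report(dx, dy, left_button=False):
        """Pack a 3-byte boot-protocol HID mouse report: buttons, dx, dy.

        dx and dy are clamped to the signed 8-bit range the boot protocol allows.
        """
        clamp = lambda v: max(-127, min(127, int(v)))
        buttons = 0x01 if left_button else 0x00
        return struct.pack("<Bbb", buttons, clamp(dx), clamp(dy))

    def forward_to_tablet(velocity_xy, clicked, send_report, dt=0.02):
        """Convert one decoded velocity sample into a mouse report and send it.

        send_report stands in for whatever Bluetooth HID transport the real
        system uses; pairing and the HID descriptor are not shown here.
        """
        dx = velocity_xy[0] * dt
        dy = velocity_xy[1] * dt
        send_report(mouse_report(dx, dy, left_button=clicked))

The appeal of this design choice is that nothing on the tablet has to be modified: any app that works with an ordinary Bluetooth mouse works with the prosthesis.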

In a series of short movie clips, the team demonstrated patient T6 Googling questions about gardening, taking full advantage of the autocompletion feature to speed up her research. T6 had no trouble navigating through tiny links and worked the standard QWERTY keyboard efficiently.

“Think about it,” said Nuyujukian, obviously excited. “It’s not just a prettier user interface; she now has access to the entire Android app store.”

According to previous studies, the device can function for at least two years without hardware or software issues. The team is trying to make the implant even sturdier to extend its lifespan in the brain.

“We set out to utilize what’s already been perfected in terms of the hardware to make the experience more pleasant,” said Nuyujukian. “We’ve now shown that we can expand the scope of our system to a standard tablet.”

But the team isn’t satisfied. They are now working on ways to implement click-and-drag and multi-touch maneuvers. They also want to expand to other operating systems, enable the patients to use the device 24/7 without supervision, and expand their pilot program to more patients in all three of the BrainGate clinical sites.

“Our goal is to unlock the full user interface common to general-purpose computers and mobile devices,” said Nuyujukian. “This is a first step towards developing a fully-capable brain-controlled communication and computer interface for restoring function for people with paralysis.”

Shelly Fan

Shelly Xuelai Fan is a neuroscientist at the University of California, San Francisco, where she studies ways to make old brains young again. In addition to research, she's also an avid science writer with an insatiable obsession with biotech, AI and all things neuro. She spends her spare time kayaking, bike camping and getting lost in the woods.

Discussion — 11 Responses

  • genidma October 25, 2015 on 1:51 pm

    BCI/BMI (brain-computer/machine interfaces) need to go mainstream in the near future (4 to 6 years). The hardware could become transparent, with periodic hardware-based updates from time to time.

    My sense is that for the tech to go mainstream, for now, the ‘modem’ (that mini aspirin) needs to work in a non-invasive fashion.

    1. The evolution of sensors (and in some respects, actuators) translates into the effective emergence of ubiquitous computing.
    2. Ubiquitous computing in the form of a ‘platform’ would open up the possibilities for another diverse set of technologies that we could then build on top of.
    3. We can move towards such a reality today. We can and probably will begin with NLP (natural language processing) and quickly mature it towards computing at the speed of thought (BCI/BMI). Lots of results come up when you Google the term ‘Google ubiquitous computing’. Within the next couple of months/years, we could see the emergence of a device that passively listens for your voice-based commands and goes into an active mode upon the utterance of certain words/phrases.
    4. In a near-future state, sensors will evolve and will then be sensing for a combination of a) thought patterns and b) emotional patterns. But this will also be a reality in which the surroundings are that much more intelligent, and the series of sub-problems will have been thought/worked through (security, enabling security based on proximity, etc.).
    5. The thought-based UI is going to be invoked based on specific thought markers: a) user seeks information, b) user seeks information on this topic, c) user seeks information to be received via this mechanism.

    Whoever solves this problem and the series of sub-problems brings about computing at the speed of thought. (Focus on cursor)
    – So either the cursor follows you (a), or it can be invoked in certain physical locations (b). For (a), it could be a combination of sensors and actuators that interact with the wider environment. This, below (dropbox link), is a dumb example, and it need not be this big and obtrusive. For (b), computing would be available in specific areas, up until we start beaming connectivity through space and then through a network of blimps/aerostats spread throughout a geographical area.
    – I think, though I am not sure about this one, that we could do a lot more if we put in the cycles to upgrade TCP/IP and build some inherent intelligence within its framework. I am not sure, because we could get to the intended output with a series of advanced sensors and actuators.
    – More thoughts on BCI/BMI/HCI and why more design-based thinking is required (because the tech is getting very ready) –
    – Sentient and interactive surroundings: the combined output will also help us with the powering of new physical realities (outside of VR/AR). It will help us with the creation of more beautiful physical spaces, without expending an enormous amount of effort displacing the world of atoms. At some point, some level of cost/benefit analysis goes into effect here. Meaning, if x number of people access y environment during z times, then it makes sense to displace atoms. Otherwise, leverage a mechanism to temporarily display output that is aesthetically pleasing and/or catered to the user’s taste.

    Think about it.
    – The usage of keyboard and mouse is basically inertia, and it has served its purpose. It’s a form of experience that was invented during the ’60s.
    – Our sense of reality is partly governed by our level of sentience. We should, I think, create tools with this logic at the forefront of our minds.
    – The creation of tools and technologies that help us compute at the speed of thought will be a liberating experience for the whole human race.
    – Sentience of all kinds must be able to play and interact on the same level field. Denying those who are unable to use their limbs is a disservice we do when we create tools that place an inherent focus on the use of those limbs.
    – Not only can ubiquitous computing help improve the quality of a lot of lives; I sincerely believe this is a multi-trillion-dollar industry in the making. I don’t think one player should have a monopoly in this domain. All can and must come and play, interact and innovate.

    • genidma genidma October 25, 2015 on 1:59 pm

      Just a side note/thought. I need to spend more time understanding how TCP/IP really works. But this should not take the focus away from UI/UX/Design and how it comes into effect when we talk about BMI/BCI/HCI.

      That being said, I have a conflicting series of thoughts on whether it is or is not a good idea to bake security into TCP/IP.

      An open platform powers a lot. Baking in security could be a limiting factor.

      Maybe security should always be handled on a user/system level. On a user level, I think, multi-factor biometric authentication makes sense.

    • genidma genidma October 25, 2015 on 2:10 pm

      More on the cost/benefit analysis:

      I suspect that the future, for most (not all), would involve fewer working hours, which invariably means more time for health/fitness and for engaging in activities that contribute to the cultural capital. Meaning more art, more expression, and the user becoming an active and/or passive participant when it comes to contributing to the cultural (and other) capital. The design of the system would incentivize the user to engage in the acquisition of new knowledge, and it would incentivize the above as part of the overall design of the construct.

      Using examples:
      1. If there is a center in the middle of a city or town that acts as a hub for all kinds of social, scientific and cultural activities, then it makes sense to displace atoms.
      2. If, on the other hand, there is an area on the outskirts that humans largely do not venture into in large numbers on a frequent basis, but what we want is to enable ‘quests’ through such areas, meaning elaborate races where humans go through a series of physical tasks and also solve some cognitive tasks as a team, then we can use AR to beam realities that are more life-like. You could be shooting aliens (the hostile kind) through these races, or working to evade raptors while you run through the trails. Atoms need not be displaced for such activities. Only electrons, and temporarily at that.

    • palmytomo genidma June 28, 2016 on 7:38 pm

      Hi Genidma,
      – I enjoyed reading your several posts. If you want to have a good sounding board so you can develop your thoughts further with an intelligent other person, via Skype or Google Hangouts, email me and we could probably have at least one excellent yak.
      – Although tempted to include other good minds in the discussion, I think that would unfortunately make it less deftly purposeful and progressive.
      – A keen interest of mine in this topic is to look imaginatively beyond the neural implant technology and prostheses, onward and outward into the uses of mind control of all imaginable physical things.
      – That is, everything from a light switch to a waterside crane or drones and robots of all imaginable types and uses.
      – If we had such a discussion using Google Hangouts On Air, it could be recorded for review watching after by us and others.
      Bruce Thomson in New Zealand. [email protected]

  • scidata October 25, 2015 on 1:54 pm

    Hopefully a ‘diving bell to butterfly’ story.

  • genidma October 25, 2015 on 2:36 pm

    There could be a cognitive equivalent of a biomarker for a thought pattern. Meaning, certain thoughts must release a certain marker, but these markers must be used in combination and not just related to the thought itself.

    If the pattern of how specific thoughts emerge can be captured, then a lot can be built upon it.

    But the mechanism by which we design the interface, the system and the construct must be built so that the thought patterns of the individual are not influenced in any way, and so that the ‘information that the user seeks’ is brought to the user in a transparent and unbiased way.

    If we design systems where a sub-component is focused on influencing behaviour, then that could in effect lead us towards a Borg-like mentality. A destroyer of diversity. An existence subservient to the continuity of the collective, beyond which there is little else. Such a design certainly *does not and will not*, directly or indirectly, promote a sense of mystery or a sense of wonder.

    Which would be a shame, as the Universe (with potential multiverse(s)) has so much more to offer.

    There is so much more that can be done in order to enable discovery of all sorts. I’ve been thinking about discovery since at least 2011, when I heard Randy Komisar talk about it and say that “search is old and discovery would be the new thing in the future.”

    My sense is that the continual enablement of discovery is directly proportional to the heterogeneity of thoughts. A diversity that helps power growth in the real sense.

    In the end, it is the individual that makes the decision. I’m not getting into the whole debate about genetic disposition and how genes can also influence behaviour, nor into the topic of free will.

    Direction matters. Nuances matter. Good enough is not good enough, especially when the issue is the enablement of the future, with the potential of impacting billions of lives.

    I woke up with a thought
    It was a good thought
    So I kept it
    And now I seek to build upon it

    • genidma genidma October 25, 2015 on 2:57 pm

      On the heterogeneity of thoughts:

      “The answer is not to standardize education, but to personalize and customize it to the needs of each child and community. There is no alternative. There never was.” – Sir Ken Robinson

      “Lack of a little knowledge can be a dangerous thing” – Eric Drexler.

      Far too many of us wake up with a thought or a collection of ideas that have been a work in progress.

      Far too many of these ideas never materialize for a variety of different reasons. Because work got in the way or life happened.

      We keep saying that innovation is slowing, that this is broken or that is broken.

      The focus should be on more ideas seeing the light of day.

      What we need is a society built upon the application of ideas. More ideas.

      The more distributed this phenomenon, the better, collectively for our species.

      But the design of the system must be focused on the promotion of a sense of mystery and a sense of wonder. And, like I often say, without turning the whole thing into zealotry.

    • Derrick Harris genidma October 25, 2015 on 6:34 pm

      Wouldn’t it be great if the programs built into the user interface could communicate with certain neurons and thought patterns to heal different ailments? Programs could be the next type of medication.

  • Yuichiro Hyakuya November 5, 2015 on 2:18 am

    Ah, it is nice to hear that, but is it okay that scientists implanted a device into a human brain? Is it safe? Think of it: radiation from a television, a computer or even other technology can harm human health, so what about this, a tablet with radiation attached to a brain? Please answer me, is it safe?

    • Steve Foerster Yuichiro Hyakuya December 31, 2015 on 4:45 am

      Safety is good, but it’s not the only consideration. We take a risk every time we walk out the front door, but it doesn’t stop reasonable people from doing so. In this case, the improvement to quality of life is so profound that I would think nearly anyone would take the chance of an undesirable side effect.

  • Сурен Акопов June 27, 2016 on 11:22 pm

    I have made a device for reading human thoughts (a human mind-reading machine, a brain-computer interface). The discovery is not published. I invite partnership. On the problem, look up: Jack Gallant; Tom Mitchell and Marcel Just; John-Dylan Haynes; Andrea Stocco and Rajesh Rao.