The human mind is already pretty open to manipulation—just ask anyone who works in advertising. But neural implant technology could potentially open up a direct digital link to our innermost thoughts that could be exploited by hackers.
In recent months, companies like Elon Musk’s Neuralink, Kernel, and Facebook have unveiled plans to create devices that will provide a two-way interface between human brains and machines.
While these devices could undoubtedly bring many benefits, they would be networked to computers and therefore essentially part of the Internet of Things. That should immediately set off alarm bells for anyone paying attention to cybersecurity news.
There have been repeated warnings in recent years about the huge number of vulnerabilities in “smart” devices designed by consumer goods companies with little experience of or consideration for cybersecurity.
One would expect that the added sensitivity of a device set to be integrated into people’s bodies would warrant more caution. But it has already been demonstrated that it is possible to hack medical implants to harm patients, and there seems to be no reason the same wouldn’t be true of neural implants.
Deep brain stimulation implants are already being used to treat diseases like Parkinson’s and chronic pain, but researcher Laurie Pycroft has warned that hackers could gain control of such a device and alter its stimulation settings to cause pain or inhibit movement.
Even with these comparatively simple devices, a determined and technically competent attacker could carry out more advanced attacks that could alter the victim’s behavior in crude ways, Pycroft said.
Future neural implants designed from the bottom up to interface with our cognitive processes may make far more nuanced and sophisticated hacks possible. Earlier this month it was shown that a neural headset could be used to guess someone’s PIN. How much more intimate would the access be if we were talking about an invasive neural implant like the one Elon Musk has proposed?
While a targeted attack on a neural implant designed to manipulate someone’s behavior is unlikely to be worth the effort for most hackers, a bigger threat may be dumb malware that spreads to thousands of devices. Spyware could be used to access highly sensitive personal information, and a neural implant locked by ransomware is not as easy to replace as a laptop.
Perhaps, though, it’s not hackers we should be worrying about. Edward Snowden’s revelations about the NSA’s PRISM surveillance program in 2013 demonstrated wide-ranging collusion between the security services and technology companies to intercept the supposedly secure communications of innocent citizens.
It’s hard to imagine the spooks would pass up the opportunity to do the same with neural implants, and once that threshold has been crossed, it would likely be a short leap to taking advantage of the two-way nature of these future devices to subtly influence people’s behavior.
Even if you trust your government not to abuse these capabilities, the leak of a massive cache of hacking tools stockpiled by the NSA suggests they may not be the only ones with access.
And it’s hard to imagine that the tech companies building these devices don’t know where the back doors are. Facebook, one of the companies developing neural technology, has already been caught carrying out questionable psychological experiments that altered users’ emotions without their permission.
However, this example also highlights that it may not be necessary for us to install neural implants to make our brains susceptible to hacking. The media—be that newspapers, advertisers or Hollywood—have long been accused of manipulating the way we think.
With the rise of social media there are now a host of new tools for those looking to influence the zeitgeist: fake news websites, swarms of Twitter bots, and targeted advertising based on psychological profiles drawn from our internet behavior.
One researcher recently showed that it is possible to read neural responses to subliminal images embedded in a game. The information gathered this way is crude, but the researcher said the approach could be scaled up toward mind-reading capabilities if combined with other technologies, such as VR headsets or wearable devices.
So, whether it’s through neural implants or clever social engineering, it seems technology is already challenging the sanctity of our mental processes. Just last month I reported on calls from neuroethicists to introduce new human rights designed to protect our mental privacy.
While new rights would be welcome, more pressing is the need to ensure that cybersecurity is built into future neural devices from the outset, and from the bottom up.
Image Credit: Shutterstock