4 New Human Rights for When Our Brains Are Hooked Up to Computers

The human-machine mind meld is just around the corner if you believe the buzz coming out of Silicon Valley these days. But neuroethicists worry the technology poses a threat to the last bastion of privacy, our innermost thoughts, and have suggested tweaks to our fundamental human rights to protect that privacy.

Elon Musk made waves last month when it was revealed that he had launched a new company called Neuralink, aimed at building brain-computer interfaces (BCIs) that would allow us to “telepathically” communicate with machines.

The tech billionaire has been talking for some time about the need to avert the existential threat of artificial intelligence by merging with machines, but he’s now put his money where his mouth is, setting an ambitious target of having healthy people install these devices as a consumer product within the decade.

Earlier this month, the head of Facebook’s Building 8 research group, Regina Dugan, said the group is also working on this kind of neural technology, though it wants to create a non-invasive headset rather than an implant. Facebook envisages people being able to use their thoughts to control a cursor in augmented reality or type 100 words per minute.

Let’s be clear: the timescales these companies have outlined are wildly optimistic, not least because even the world’s top neuroscientists still barely understand human cognition. Nonetheless, the technology is coming and is likely to have dramatic implications for privacy, consent, and individual agency.

That’s why Marcello Ienca, a neuroethicist at the University of Basel, and Roberto Andorno, a human rights lawyer at the University of Zurich, have outlined four new human rights, designed to protect us from these potential pitfalls, in the journal Life Sciences, Society and Policy.

“While the body can easily be subject to domination and control by others, our mind, along with our thoughts, beliefs and convictions, are to a large extent beyond external constraint,” they write. “Yet, with advances in neural engineering, brain imaging and pervasive neurotechnology, the mind might no longer be such an unassailable fortress.”

1. The Right to Cognitive Liberty

The first proposed new right is the right to “Cognitive Liberty,” which states that people have the right to use emerging neurotechnology to modify their mental activity. But it also protects the right to refuse to use it, for instance when an employer requires workers to use devices that would improve their performance.

2. The Right to Mental Privacy

Second on the list is the right to “Mental Privacy,” which would protect people from third parties accessing data about their mental activity collected by a neurotechnology device without their consent.

The impulse for this protection is obvious; tech giants are already hoovering up huge amounts of our behavioral data in their efforts to divine our innermost desires and sell us stuff. Brain data could let them bypass this guesswork and precisely tailor our online experiences in pursuit of their goals.

The authors debate whether this right should be absolute or relative, though. In certain situations, allowing the state to access the thoughts of criminals and terrorists could have obvious benefits for society. But the researchers suggest this could erode the well-established right not to incriminate oneself, which is recognized across the democratic world and enshrined in the US by the Fifth Amendment.

3. The Right to Mental Integrity

The last two rights are intertwined and deal with the emerging ability not just to record mental activity, but to directly influence it. The right to “Mental Integrity” effectively protects people against having their brain implants hacked in order to hijack or interfere with their mental processes or erase their memories.

4. The Right to Psychological Continuity

The right to “Psychological Continuity” deals with the vaguer notion of attempts to alter someone’s personality or identity, whether through similar brain-hacking approaches or subtler ones like neuromarketing, in which companies use insights from neuroscience to try to alter consumers’ unconscious behavior and attitudes.

These proposals raise some important issues that will have to be tackled as neurotechnology becomes increasingly common. However, it remains debatable whether the invention of new human rights is the best way to tackle them.

The researchers themselves raise the problem of so-called “rights inflation,” where the push to label anything that is morally desirable as a fundamental right waters down the meaning of those already in place.

While they offer a defense, it is not entirely clear why existing rights to privacy and accompanying data protection laws would not be equally applicable to the personal and medical data collected by neurotechnology devices. Similarly, it could be argued that the final two rights overlap to the point where it may make more sense to combine them.

Either way, though, the paper cuts through the utopian futurism that has surrounded emerging neurotechnology in recent months by highlighting the potential dangers and opening up discussion on how best to tackle them.

The technology may still be some way off, but as Ienca told The Guardian, it’s best to be prepared. “We cannot afford to have a lag before security measures are implemented,” he said. “It’s always too early to assess a technology until it’s suddenly too late.”

Image Credit: Shutterstock

Edd Gent
http://www.eddgent.com/
Edd is a freelance science and technology writer based in Bangalore, India. His main areas of interest are engineering, computing, and biology, with a particular focus on the intersections between the three.