Machines Teach Humans How to Feel Using Neurofeedback

Humans are social animals, and feelings of attachment, connection and empathy are the glue that binds societies together. Before an infant’s immune system is fully operational, before a baby can even use its hands, it recognizes its parents’ voices, responds uniquely to human faces and even, incredibly, smiles back.

Yet, some people, often as the result of traumatic experiences or neglect, don’t experience these fundamental social feelings normally. Could a machine teach them these quintessentially human responses? A thought-provoking Brazilian study recently published in PLoS One suggests it could.

Researchers at the D’Or Institute for Research and Education outside Rio de Janeiro, Brazil, performed functional MRI scans on healthy young adults while asking them to focus on past experiences that epitomized feelings of non-sexual affection or pride of accomplishment. They set up a basic form of artificial intelligence to categorize the fMRI readings, in real time, as affection, pride or neither. They then showed the experimental group graphic biofeedback indicating whether their brain activity was fully manifesting that feeling; the control group saw meaningless graphics.

The results demonstrated that the machine-learning algorithms were able to detect complex emotions that stem from neurons in various parts of the cortex and subcortex, and the participants were able to hone their feelings based on the feedback, learning on command to light up all of those brain regions.
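To make the setup concrete, here is a minimal, hypothetical sketch of a real-time decode-and-feedback loop of the kind the study describes. The researchers' actual pipeline is not reproduced here; the choice of scikit-learn, a linear support-vector decoder, the feature sizes and all function names below are illustrative assumptions, not the published method.

```python
# Hypothetical sketch: train a multi-class decoder on labeled fMRI feature
# vectors (one per scan volume), then score each new volume and use the
# probability of the target emotion to drive a simple visual feedback value.
# All names, sizes and parameters are placeholders for illustration only.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

LABELS = ["affection", "pride", "neutral"]

def train_decoder(train_volumes, train_labels):
    """Fit a decoder on voxel- or ROI-averaged features from training scans."""
    model = make_pipeline(StandardScaler(),
                          SVC(kernel="linear", probability=True))
    model.fit(train_volumes, train_labels)
    return model

def feedback_value(model, volume, target):
    """Return the decoder's confidence (0-1) that the current scan volume
    expresses the target emotion; this value would drive the on-screen graphic."""
    probs = model.predict_proba(volume.reshape(1, -1))[0]
    target_idx = list(model.classes_).index(target)
    return float(probs[target_idx])

# Simulated example: random "training scans" and one new volume.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(90, 200))          # 90 volumes x 200 features
y_train = [LABELS[i % 3] for i in range(90)]  # placeholder labels
decoder = train_decoder(X_train, y_train)
print(feedback_value(decoder, rng.normal(size=200), "affection"))
```

In the real experiment the feedback arrived while the participant lay in the scanner, so the loop above would run once per acquired volume, with the returned value rendered as the changing graphic the volunteers watched.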

Jorge Moll, the lead researcher, told Singularity Hub that the participants weren’t beating the system by faking feelings, because faking would produce its own distinct fMRI pattern. They were genuinely learning to feel a particular emotion more completely.

Moll tried the system out himself while the researchers were setting up the experiment, and he told us what it was like.

“One can clearly ‘grasp’ the feeling and its effect on the feedback at some point. I used a personal memory involving my little kids, and found that it worked best if I focused on the hugging act; that gave me the best stability of the feeling state and the best response from the visual feedback,” he said.

Here we must pause to note that the likeness of the experiment’s artificial intelligence system to the “empathy box” in “Blade Runner” and the Philip K. Dick novel on which it’s based did not escape the researchers. Yes, the system could potentially be used to subject a person’s inner feelings to interrogation by intrusive government bodies, which is about as creepy as it gets. It could, to cite that other dystopian science fiction blockbuster, “Minority Report,” identify criminal tendencies and condemn people before they ever commit crimes.

But its most immediate application would be as a therapeutic aid to help people with personality disorders and other hard-to-treat mental health problems. By themselves, brain scans cannot diagnose mental illness. But in the hands of a mental health provider, if the system found a lack of empathy — popularly considered the hallmark of a sociopath — it could identify people in need of help before the criminal justice system does. Learning to recognize and control feelings could also help addicts of all stripes.

That’s the use case the researchers envision.

“I see it first being further developed for simpler application — such as for increasing well-being, personal growth and for improving interpersonal relationships/attitudes in non-clinical populations — and then being tested and adapted for clinical settings,” Moll said.

The Brazilian approach is not the first to attempt to identify human emotions with computer intelligence. Singularity Hub has reported on visual algorithms that detect simpler emotions, such as disgust, fear and joy, based on facial expression. Those also came out of work to coach people who struggle to understand and respond to otherwise universal emotional communiqués — specifically, children with autism. The algorithms eventually also became a tool in the ad man’s arsenal. A voice-based emotion detection system informs some customer service departments.

The new study closes the loop, teaching machines how to identify human emotions from an fMRI and, in turn, having them teach the humans to control their own emotional responses. Whether that will lead to “Blade Runner” or timely help for the Elliot Rodgers of the world will depend on the social and legal norms societies set up in the meantime.

Photos: D’Or Institute for Research and Education, National Institutes of Health

Cameron Scott
Cameron received degrees in Comparative Literature from Princeton and Cornell universities. He has worked at Mother Jones, SFGate and IDG News Service and been published in California Lawyer and SF Weekly. He lives, predictably, in SF.