Google Glass App Uses Facial Recognition to Read People’s Emotions

It’s no secret that in the digital age, social relationships are changing.

The rise of social media has forced us to rely on the ability to read between the lines more than ever as tweets and Facebook posts can offer only sentence fragments wrapped in emotional ambiguity. As online groups and activities replace real-life communities and events, face time with neighbors, friends, and even family is declining. We’re digitally interfacing continually with people from different cultures, ethnicities, and lifestyles without necessarily having to invest the time it takes to acclimate to their point of view, which often relies on nonverbal communication.

For all the good that our digital devices have brought us, are they affecting our ability to empathize with others? When it comes to understanding how other people are feeling, the answer appears to be yes.

In a recent study posted online, about 50 sixth-graders (ages 11-12) attended a summer camp where TV, smartphones, and other digital screens were off limits, while a control group of the same age continued to use their digital devices as usual, averaging 4.5 hours of screen time per day. At the beginning and end of the study, both groups of students were evaluated on their ability to recognize emotions, both by identifying the feelings in photos of happy, sad, angry, or scared faces and by assessing videos of actors interacting in a scene.

Overall, the camp students performed substantially better at reading people’s emotions than the kids with the devices. Even though the camp lasted only five days, the device-free sixth-graders improved their ability to read emotions by about 30 percent.

The research, which will be published in an upcoming issue of Computers in Human Behavior, was conducted by UCLA Professor of Psychology Patricia Greenfield, who noted, “Decreased sensitivity to emotional cues — losing the ability to understand the emotions of other people — is one of the costs [of digital media]. The displacement of in-person social interaction by screen interaction seems to be reducing social skills.”

It’s tempting to lay the blame for this loss of empathy on the proliferation of mobile devices, video games, and the usual suspect, television. And since too much screen time is no longer just a parent-versus-child battle, given how many adults of all ages partake in these activities, the results of this study might be alarming. But as with many trends nowadays, we lose one thing to gain another.

In this case, it’s technology that will not only assist those who struggle to identify emotions but may also level the playing field with those who’ve long benefited from superior emotional intelligence.

A video surfaced recently showcasing software, in development at Germany’s Fraunhofer Institute for years, that is intended to read people’s emotions. Dubbed SHORE, for Sophisticated High-speed Object Recognition Engine, the program assesses a person’s emotions, including happiness, sadness, anger, and surprise, then projects meters into the viewing area in real time.
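To get a feel for how such a pipeline fits together, here is a minimal sketch in Python of a SHORE-style real-time overlay. The face detector is OpenCV’s stock Haar cascade, but predict_emotions is a hypothetical stand-in: SHORE’s actual models are Fraunhofer’s own and aren’t public.

```python
import cv2

# Stock Haar cascade face detector that ships with OpenCV
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def predict_emotions(face_img):
    """Hypothetical stand-in classifier: scores in [0, 1] per emotion."""
    return {"happy": 0.0, "sad": 0.0, "angry": 0.0, "surprised": 0.0}

cap = cv2.VideoCapture(0)  # default webcam
while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    for (x, y, w, h) in face_detector.detectMultiScale(gray, 1.3, 5):
        scores = predict_emotions(gray[y:y + h, x:x + w])
        cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 255, 0), 2)
        # Draw a simple label-plus-bar "meter" beside the face per emotion
        for i, (emotion, score) in enumerate(scores.items()):
            top = y + 18 * i
            cv2.putText(frame, emotion, (x + w + 5, top + 12),
                        cv2.FONT_HERSHEY_SIMPLEX, 0.4, (255, 255, 255), 1)
            cv2.rectangle(frame, (x + w + 75, top + 4),
                          (x + w + 75 + int(50 * score), top + 12),
                          (0, 255, 0), -1)
    cv2.imshow("emotions", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):  # press q to quit
        break
cap.release()
cv2.destroyAllWindows()
```

The interesting engineering problem is keeping all of this fast enough to run on every frame, which is presumably where the “high-speed” in SHORE’s name comes in.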

The software is now being developed as an app for Google Glass, as you can see in this video:

To date, the software has been trained to identify facial features using a database of over 10,000 faces. This is possible because of the universal signals humans make when they display emotions, as evidenced recently by a study on the features of the “angry face”. Research in this area is decades old and was recently popularized by the television show Lie to Me, based in part on the microexpressions work of psychologist Paul Ekman.
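As a rough illustration of what that training step involves, here’s a toy sketch: fit a classifier on labeled face crops, then check how well it generalizes to held-out faces. The arrays below are random placeholders standing in for a real labeled face database, and the SVM is a generic choice for this kind of problem, not SHORE’s actual method.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Placeholder data: random arrays standing in for 10,000+ labeled faces
rng = np.random.default_rng(0)
n_samples = 200
faces = rng.random((n_samples, 48, 48))   # fake 48x48 grayscale face crops
labels = rng.integers(0, 4, n_samples)    # 0=happy, 1=sad, 2=angry, 3=surprised

# Flatten each face crop into a feature vector
X = faces.reshape(n_samples, -1)
X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.2, random_state=0)

# Train and evaluate a generic classifier on the labeled faces
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)
print("held-out accuracy:", clf.score(X_test, y_test))
```

On real data, raw pixels would typically give way to engineered or learned facial features, but the train-on-labeled-faces, test-on-new-faces loop is the same.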

And SHORE isn’t the only emotion recognition software under development. We’ve written about other efforts to use recognition software to improve ad campaign focus groups or measure the effectiveness of in-store displays and advertisements.

The point is that like it or not, determining how someone feels by reading their face is an algorithmic process, which means computers will ultimately excel where humans run up against inherent limitations.

Reading faces and understanding nonverbal communication can be a great challenge for certain groups of people. For those with autism spectrum disorder, for example, the gap between what is said and what goes unsaid can be a great rift. Others naturally excel at reading people, and that skill can provide a definite advantage in their careers.

But with this tech, things are about to change.

Whether Google Glass ultimately embraces facial recognition apps or not, we’re moving into an era when devices will enhance our emotional intelligence just as they currently augment our cognitive intelligence. Even if the very use of these devices inhibits our natural capacity for empathy, it’s certain that in time the tech will be far superior to human ability.

If this future seems unsettling, just remember that the sixth-graders’ ability to read emotions recovered after just a short digital holiday. In other words, we can always go back to relying on our brains, however imperfect they are at helping us understand people.

[Media credit: Lawrence Rayner/Flickr, Fraunhofer]

David J. Hill
David started writing for Singularity Hub in 2011 and served as editor-in-chief of the site from 2014 to 2017 and SU vice president of faculty, content, and curriculum from 2017 to 2019. His interests cover digital education, publishing, and media, but he'll always be a chemist at heart.