Facial Recognition Software Distinguishes Between Real And Phony Smiles

MIT researchers know when you're smiling for real (left) or out of frustration (right), but odds are you can't tell.

Con artists, deceivers, and fakers, take note: feigning emotion to manipulate others is about to get a lot harder. Researchers at the MIT Media Lab have developed software that can differentiate between a genuinely delighted smile and one born of frustration. It turns out that most people unknowingly smile to cope with frustration, and others may read those smiles as genuine. So what's the real difference?

By analyzing video, the researchers discovered it's all in the timing: genuine smiles build gradually, while frustrated smiles appear rapidly and fade fast. In the study, humans and computers identified genuine smiles equally well, but when it came to frustrated smiling, people misread the coping smile about half the time, whereas the algorithm identified it correctly 92 percent of the time.

The motive behind the research is to help individuals who have difficulty interpreting face-to-face communication, such as those on the autistic spectrum, but the research also has some profound implications for artificial intelligence.

The study, recently published in IEEE Transactions on Affective Computing (the article is unfortunately behind a paywall, but you can access the abstract here), details two experiments. In the first, male and female participants were asked to act out delight and frustration during an activity. In the second, subjects filled out an online form designed to elicit natural frustration (using commonly aggravating computer experiences such as timeouts, CAPTCHAs, and disabled copy-and-paste), followed by delight (a popular YouTube video of a baby laughing).

In both experiments, a webcam recorded facial expressions throughout the activity. The video data were then analyzed with Google's facial feature tracker (from its 2006 acquisition of Neven Vision), which measures 22 points on the face; those measurements were processed mathematically into a model that differentiates genuine smiles from phony ones.
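The paper doesn't publish its code, but the general idea is straightforward: track points on the face frame by frame, reduce them to a smile-intensity signal, and study how that signal evolves. Here's a minimal Python sketch of that first step; the landmark indices and the mouth-width intensity proxy are illustrative assumptions, not the study's actual features.

```python
import numpy as np

def smile_intensity(landmarks: np.ndarray,
                    left_mouth: int = 0, right_mouth: int = 1,
                    left_eye: int = 2, right_eye: int = 3) -> np.ndarray:
    """landmarks: (n_frames, n_points, 2) array of tracked (x, y) positions.
    Returns a per-frame smile-intensity signal relative to the first frame.
    The point indices above are hypothetical, not the tracker's real layout."""
    # Use mouth-corner separation as a crude smile proxy...
    mouth_width = np.linalg.norm(
        landmarks[:, left_mouth] - landmarks[:, right_mouth], axis=1)
    # ...normalized by inter-ocular distance so the signal doesn't
    # depend on how close the subject sits to the webcam.
    eye_dist = np.linalg.norm(
        landmarks[:, left_eye] - landmarks[:, right_eye], axis=1)
    intensity = mouth_width / eye_dist
    # Express intensity relative to the neutral (first-frame) baseline.
    return intensity - intensity[0]

def smooth(signal: np.ndarray, window: int = 5) -> np.ndarray:
    """Simple moving average to suppress frame-to-frame tracker jitter."""
    kernel = np.ones(window) / window
    return np.convolve(signal, kernel, mode="same")
```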

The study revealed some curious findings about smiling. For instance, the two kinds of smiles clearly differ in speed and intensity. Delighted smiles lasted an average of 13.8 seconds from beginning to end, whereas frustrated smiles lasted about half as long (7.5 seconds). Furthermore, delighted smiles were 60 percent more intense than frustrated ones.

Another revelation is that most people seem unaware that they smile to cope with frustration. That would explain why 90 percent of subjects smiled naturally when faced with a frustrating task, yet when asked to feign frustration, 90 percent didn't smile at all, almost as if our memory of a frustrated person omits the smile because it seems inconsistent with the emotion. How easily humans misidentified the frustrated smiles supports this as well.

The researchers concluded that the most informative cue for determining emotion is the dynamic pattern of a smile's evolution over time, not merely whether a smile is present.
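To make that concrete, a toy classifier built on those dynamics might look like the sketch below. The feature definitions and cutoffs are invented for illustration, loosely guided by the averages above (delighted smiles around 13.8 seconds and more intense; frustrated smiles around 7.5 seconds and faster-rising); the paper's actual model is more sophisticated.

```python
import numpy as np

def smile_features(intensity: np.ndarray, fps: float = 30.0) -> dict:
    """Reduce a smile-intensity time series to the dynamics that matter:
    how strong the smile peaks, how long it lasts, and how fast it rises."""
    peak = float(intensity.max())
    peak_idx = int(intensity.argmax())
    # "Duration" here is the span where intensity exceeds half its peak,
    # an illustrative definition rather than the one used in the study.
    active = np.flatnonzero(intensity > 0.5 * peak)
    duration = (active[-1] - active[0]) / fps if active.size else 0.0
    rise_time = peak_idx / fps  # seconds from recording start to peak
    return {"peak": peak, "duration": duration, "rise_time": rise_time}

def classify_smile(features: dict) -> str:
    """Toy rule: long, slow-building smiles read as delight; short,
    fast-onset smiles read as frustration. Thresholds are assumptions."""
    if features["duration"] > 10.0 and features["rise_time"] > 3.0:
        return "delighted"
    return "frustrated"
```

Feeding the smoothed signal from the earlier sketch through smile_features and classify_smile would give a rough, frame-rate-aware guess at which kind of smile the webcam saw.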

Which expressions are the delighted smiles? (Answer: the ones at the top)

What’s awesome about this research is that it both provides insight into human behavior and successfully creates a computer model that can improve a person’s ability to communicate.

Consider those on the autistic spectrum. They often struggle with nonverbal communication, especially interpreting facial expressions, because they tend to look for emotional cues in different parts of the face than people not on the spectrum (as reported in Soc Cogn Affect Neurosci). To help them, common therapies teach a set of basic emotions and provide simple rules of thumb, such as "a smile means someone is happy."

Clearly, the present research shows how misleading this simplified approach can be, and it suggests that more sophisticated software could assist these individuals by teaching them to read how quickly a smile develops. Such technology could become increasingly important: the CDC reports that 1 in 88 children are on the autistic spectrum, and the incidence of autism is rising.

The kind of nuances in smiling that are teased out in this research could even be applied to differences in cultures, in which smiling plays a complex role across the globe.

In terms of the model the team developed, one possible application is improving facial recognition software's ability to determine mood. As facial analysis becomes more widespread (Facebook can already detect emotion) and more advanced (witness the surveillance system that scans 36 million images a second), programs could be developed that help computers detect inconsistencies in behavior and subtle facial expressions that are confusing to, or lost on, most humans. Along with the age detection algorithm recently developed by Face.com, a mood detector could have important implications for police investigations, or it could be built into emerging augmented-reality technology like Google Glass to help people gauge others' moods before starting a conversation.

Inconsistencies between facial expression and intent are, unfortunately, ripe territory for used-car salesmen, marketing schemes, and con artists, so software that helps would-be victims gauge the authenticity of face-to-face communication could reduce how often people get scammed.

Taking it one step further, this algorithm could be a boon to the artificial intelligence and robotics community, which hopes to shrink the Uncanny Valley, the psychological effect that makes more realistic-looking robots off-putting to some people. As more of the nuances of human behavior, and of how humans interpret behavior, are resolved, increasingly sophisticated emotions can be emulated in robots, making them more humanlike and relatable, which will be essential to their widespread adoption.

To hear more about the research from the MIT team, check out the video:

[Media: YouTube]

[Sources: Mashable, MIT News, Soc Cogn Affect Neurosci via Autist's Corner]

David J. Hill
David started writing for Singularity Hub in 2011 and served as editor-in-chief of the site from 2014 to 2017 and SU vice president of faculty, content, and curriculum from 2017 to 2019. His interests cover digital education, publishing, and media, but he'll always be a chemist at heart.