Anticipating Your Needs: Emotional Intelligence Is the Key to Smarter Gadgets

It’s weekend rush hour. You’re stuck in traffic. You got cut off. You’re increasingly frustrated. Your heart rate and blood pressure skyrocket.

As you’re fuming in traffic, an inconspicuous device on your wrist silently tracks your vitals and sends them wirelessly to the smart appliances in your home.

An algorithm detects you’re angry. The thermostat slightly lowers the room temperature and starts a light breeze. The fridge shifts a cold brew to the front within easy reach. The TV displays that Netflix show you’ve been binging on.

You storm through the door, grab the beer, collapse on the couch and immediately feel better.

Welcome to the future of the Internet of Things, where machines understand what you want and how you feel.

The Clippy Fiasco

Why would we want machines to have emotional intelligence?

According to Dr. Rosalind Picard, director of the Affective Computing Group at the MIT Media Lab, it’s because emotions are an integral part of human-computer interaction.

Studies repeatedly show that humans tend to treat computers as if they were people, even when the computer has no face or lifelike body, Picard explains in a seminal IEEE paper. In a survey called “Rage Against the Machine,” over 70% of 1,250 UK participants admitted that they swear at their computers, despite being fully aware that the machines are inanimate.

But our current “smart” machines are only cognitively smart: they learn, memorize and interpolate. When it comes to emotions, they’re completely blind.

Virtual assistants are an obvious example. Maybe you remember Clippy, the universally despised and ill-fated Microsoft Office assistant with a perpetually sanguine cyberpersonality. Clippy had an annoying tendency to pop up and offer unsolicited advice without knowing the writer’s intent. Instead of helping, the smiling, dancing cartoon often frustrated the user even more.

It’s not that Clippy isn’t intelligent, wrote Picard. It’s a genius about Office, but an idiot about people. It can’t tell that you are frustrated, ignores any signs of increasing annoyance and is stupid about displaying emotions.

Clippy may seem like a dated example, but the problem it represents is far from obsolete. Computers are experts at analyzing our social communications and entertainment preferences (Netflix recommendations, anyone?). Yet even the most sophisticated system can’t tell if it has upset an important customer or lost a long-time user.

“As technology is increasingly applied to situations where it must interact with everyone,” writes Picard, “it is all the more vital that it does so in a way that is courteous and respectful of people’s feelings.”

You might be thinking: boo-hoo, who cares about overly sensitive people? Picard, along with others in the affective computing field, respectfully disagrees.

It’s not just about making human-machine interaction easier, stresses Picard. Having emotional intelligence means that our non-feeling machines will be able to dissect our inner states simply by observation. Think Vulcan.

Imagine the possibilities, said Dr. Andrew Moore, dean of the Carnegie Mellon School of Computer Science. Virtual psychologists may help diagnose depression by analyzing facial expressions. Software may inform teachers whether students are engaged and fine-tune the lesson plan accordingly. Marketers may better assess interest in ads and products. GPS systems may clarify directions if they sense the driver is confused. Fitness trackers could know when to drill you to work harder and when to ease off. Health monitors may inform doctors of a patient’s discomfort.

And yes, the Amazon Echo in your living room may offer suggestions and play music that cater to your mood.

“We’ve been working a long time on such capabilities and it seems that we are on the brink of some major breakthroughs,” wrote Moore. “I anticipate that 2016 will be a watershed year and…emotion will become a powerful new channel for interacting with our machines.”

Emotion Analytics

Unlike objective measurements, human emotions are devilishly finicky things.

Even cognitive psychologists haven’t figured out a great way to classify emotions, laughed Picard. Boredom, for example, isn’t considered a “basic emotion,” but we all know how integral it is to our lives.

So far, research in the area has focused on quantifying the telltale signs we display — consciously or subconsciously — in different emotional states.

Gait, posture, heart rate, skin conductance and tone of voice are all windows into a person’s inner emotional landscape. Scientists have begun tapping this wealth of information. Beyond Verbal, a startup based in Israel, analyzes people’s vocal inflections to determine their emotional states. Microsoft’s Kinect tracks its players’ heartbeats and physical movements to better understand how they’re feeling as they play.
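To make the idea concrete, here is a minimal sketch in Python, using entirely made-up numbers, of how such signals could feed an emotion classifier: a few physiological features go in, a coarse label like “calm” or “angry” comes out. It illustrates the general approach, not any particular company’s pipeline.

```python
# A minimal sketch of signal-based emotion classification, not any specific
# product's pipeline. All values are synthetic and for illustration only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Each row: [heart rate (bpm), skin conductance (microsiemens), voice pitch (Hz)]
calm = rng.normal([65, 2.0, 110], [5, 0.3, 10], size=(50, 3))
angry = rng.normal([95, 6.0, 160], [8, 0.8, 15], size=(50, 3))
X = np.vstack([calm, angry])
y = np.array(["calm"] * 50 + ["angry"] * 50)

clf = RandomForestClassifier(random_state=0).fit(X, y)

# A fresh reading from a hypothetical wrist sensor: elevated vitals.
print(clf.predict([[92, 5.5, 155]]))  # -> ['angry'] on this toy data
```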

Yet by far the most fruitful systems analyze facial expressions.

Only recently has the task become even remotely possible, explained Moore. For one, we have better cameras that tease out minute changes in our facial muscles. For another, we now have large facial expression video datasets and powerful machine learning algorithms to help us out.
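As a rough illustration of that recipe, the sketch below trains a small neural network on FER-style 48x48 grayscale face crops. The “faces” here are synthetic stand-ins; real systems learn from large collections of labeled images and video frames.

```python
# A sketch of the expression-classification step only. The "face crops" are
# synthetic arrays with a per-class brightness offset so there is something
# learnable; swap in real labeled images for anything meaningful.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(1)
n_per_class = 40
labels = ["neutral", "happy", "angry"]

# 48x48 grayscale crops, flattened to 2,304-pixel feature vectors.
X = np.vstack([rng.random((n_per_class, 48 * 48)) + i * 0.1 for i in range(3)])
y = np.repeat(labels, n_per_class)

clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=1000, random_state=0)
clf.fit(X, y)

print(clf.predict(X[:1]))  # classify one (synthetic) face crop
```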

At Carnegie Mellon University, researchers have developed IntraFace, a small piece of software that tracks facial features and analyzes expressions. The software is so efficient it runs smoothly on a smartphone.

Other groups are combining multiple signals to analyze emotions with higher accuracy. Dr. Jeff Cohn at the University of Pittsburgh, for example, uses facial expressions and tone of voice to determine whether treatments are working in depressed patients. Aldebaran’s Pepper, an “emotionally smart” robotic sales clerk, jokes with customers and, through integrated HD cameras and microphones, gauges their reactions as it tries to make a sale.
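The fusion idea itself is simple to sketch. The toy example below, which is not Cohn’s published method, just averages the emotion probabilities produced by hypothetical face and voice models, so the two channels can reinforce or temper each other.

```python
# Toy late fusion of two modalities. The probability vectors are invented;
# in practice they would come from separately trained face and voice models.
import numpy as np

emotions = ["neutral", "sad", "angry"]

p_face = np.array([0.30, 0.55, 0.15])   # hypothetical face-model output
p_voice = np.array([0.20, 0.70, 0.10])  # hypothetical voice-model output

p_fused = (p_face + p_voice) / 2        # simple unweighted average
print(emotions[int(np.argmax(p_fused))])  # -> 'sad'
```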

Then there’s the tantalizing possibility that machine learning could help us uncover better ways to quantify and analyze emotions, using markers we aren’t aware of.

One group is working to detect the fleeting changes in facial expression that often reveal our deepest emotions, even when we try to hide them. Last year, they reported that their algorithm outperformed humans at spotting these microexpressions, and they hope to use the technology in lie detection.
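The intuition is that microexpressions are extremely brief. The toy sketch below, which is not the group’s actual algorithm, flags bursts of facial motion lasting less than half a second and ignores longer, deliberate expressions.

```python
# A toy illustration of microexpression spotting: threshold a per-frame facial
# motion signal and keep only the bursts that last under half a second.
import numpy as np

fps = 30                          # assumed camera frame rate
motion = np.zeros(120)            # synthetic per-frame facial motion magnitude
motion[40:48] = 1.0               # ~0.27 s burst: microexpression-like
motion[80:110] = 1.0              # ~1 s expression: too long to count

active = motion > 0.5
edges = np.flatnonzero(np.diff(active.astype(int)))
starts, ends = edges[::2] + 1, edges[1::2] + 1

for s, e in zip(starts, ends):
    if (e - s) / fps < 0.5:
        print(f"candidate microexpression at frames {s}-{e}")
```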

As it stands, there’s a lot of room for error. Some emotions — like boredom or loneliness — are hard to pick up based on physiological signs alone. Others may share similar symptoms and are hard to tease apart.

Of course, the slew of ethical and privacy concerns is mind-boggling.

People feel queasy about having their emotions uncovered and stored somewhere in a database, explains Picard. And for good reason: this data is incredibly private and revealing.

The data could also fuel discrimination. For example, could an employer reprimand a worker if emotion profiling shows constant daydreaming on the job? Could insurance companies demand higher fees if a customer’s “mood data” suggests illness or depression?

Our Internet of Things can’t understand us emotionally just yet. But the time to talk about it starts now.

Image Credit: Shutterstock.com

Shelly Fan (https://neurofantastic.com/)
Shelly Xuelai Fan is a neuroscientist-turned-science writer. She completed her PhD in neuroscience at the University of British Columbia, where she developed novel treatments for neurodegeneration. While studying biological brains, she became fascinated with AI and all things biotech. Following graduation, she moved to UCSF to study blood-based factors that rejuvenate aged brains. She is the co-founder of Vantastic Media, a media venture that explores science stories through text and video, and runs the award-winning blog NeuroFantastic.com. Her first book, "Will AI Replace Us?" (Thames & Hudson) was published in 2019.