Nao Robot Develops Emotions, Learns To Interact With Humans


Nao is getting an emotional upgrade.

Science fiction has taught me to fear and respect robots with emotions. Now scientists are teaching robots to fear and respect humans with emotions. It’s a virtuous circle. Aldebaran’s wildly successful Nao robot is being used in experiments led by the University of Hertfordshire that hope to have it learn emotions in the same manner as young children. Using facial and body language recognition, special Nao prototypes will form attachments to those humans who teach them the most. The robots will then pick up on emotional cues and mimic the way they are used. This is pretty much what human and chimpanzee babies do to learn about emotions. Just like a child, the prototype Naos will show distress if their preferred caregiver doesn’t assist them when confronted by a stressful situation. Bots that need your care, wow. The idea of an emotionally vulnerable robot is getting people charged up. Check out Fox News’ clunky interview of an Aldebaran exec below.


Robots that learn like children? Yeah, we’ve seen those. But iCub is developing motor skills, Myon is learning language, and Diego-san…well, I think it’s being taught that life is cruel. In any case, the prototype Nao robots out of Hertfordshire are specifically aimed at gaining emotional intelligence, and that sets this project apart. A large part of our brains is dedicated to recognizing, understanding, and predicting the emotions of others. If we can get a robot to develop a high emotional IQ, we may have gone a long way towards making the machines decidedly more human. Not only that, but we’d be preparing ourselves for the eventual rise of better-than-human AI by getting these smart machines interested in our emotional well-being. If we’re going to create robot overlords, we should at least make them compassionate, shouldn’t we?

What can the Nao emotional prototypes actually feel? They have preprogrammed emotional responses for anger, fear, sadness, happiness, excitement, and pride. Many robots have had such preprogrammed emotional displays in the past, but the Naos will be learning when and how to display these emotions from their human teachers. In essence, they will have preconceived notions of what an emotion is like, but will be very open to learning how and when to apply it.

The work at the University of Hertfordshire, led by Lola Cañamero, is part of a larger European initiative known as FEELIX Growing (FEEL, Interact, eXpress). FEELIX is aimed at generating reasonable emotional bonds between humans and robots. The Nao prototype will also be part of ALIZ-E, a project that seeks to expand the periods of time over which the robot can be expected to interact and learn. In other words, the Naos will be taught how to develop their emotional understanding over their entire lifetimes. According to the University of Hertfordshire News, this will enable them to serve as companions to children in hospital settings. In fact, ALIZ-E isn’t the only such application for the emotional Nao. The University of Connecticut’s Center for Health Intervention and Prevention (CHIP) will be using Nao to help autistic children learn about emotion and emotional displays. CHIP believes that the simplified emotional displays of robots, and their ability to be controlled, will make them more accessible to the young patients. So, not only will Nao be learning from us, he’ll be teaching us as well. Awesome.

As exciting as Nao’s emotions may be, I’m not terribly impressed quite yet. We’ve yet to see one of these sensitive Nao prototypes in action proving its emotional chops. Will it appear more aware of emotional stakes, or will it simply be slightly less chaotic in its emotional displays than other robot toys of the past? As brilliant as it is to have robots learn emotions the same way that children do, we can’t know the true potential for the technique until we see results. We may only be teaching the Nao how to mimic a single person, not human emotion in general. Though, come to think of it, that would still be a pretty cool accomplishment. Which person should we base the first robotic emotions on? My vote’s for Ben Stein. You know, make it easy in the beginning. We can throw them Meryl Streep later.

[image credit: University of Hertfordshire]

[source: Fox News, CHIP, University of Hertfordshire]

Discussion — 4 Responses

  • Hollander August 18, 2010 on 11:44 pm

    I actually wouldn’t mind having emotionless robots giving the news. Then we will be sure of 2 things:
    1) Complete objective reports of news, without a shred of emotion to give away a sense of subjective or biased thought
    2) The robot newscasters wouldn’t be acting all moronic like that Fox News bozo and his idiotic smirk


  • ALi September 3, 2010 on 1:52 pm

    And what if the BRAIN of SU goes against its scientists ?


  • soulfulpath May 29, 2014 on 6:14 am

    What I was wondering during this article was: do they actually, truly FEEL the emotions which are humanly appropriate to a stimulus, or are they just responding in the way we want them to? Sure, that’s emotional intelligence, or is it? Regular intelligence would prompt us to do the ‘right’ thing in a situation, but is it truly because we FEEL that it is the right thing to do? I have no previous background in technology and artificial intelligence, so it’s quite a difficult thing to try to grasp even at this moment. So basically, with robots we’ll be having a well-behaved ‘person’ behaving how we want them to, and what is correct, but does it really mean anything? Do people who behave nicely, but never truly feel like they’re doing the right thing internally, mean anything? Well, at the least I can say that ACTIONS do indeed matter. So apart from me being sad that this robot cannot truly feel what he/she is subjecting themselves to, everything else will be fine I guess. And the only reason I even feel sad for an object is because of my ability to empathize. Lol so I guess it doesn’t really matter outside of that perspective.