Eyes are the Gatekeepers of the Uncanny Valley (video)


Humans (left) were morphed with dolls to produce blended images (right). It turns out you need to be about two-thirds human for people to think you're alive.

The poets and philosophers were right: the eyes really are the windows to the soul, or at least to what we perceive to be the soul. Researchers at Dartmouth College have studied how people respond to images of humans, dolls, and morphed blendings of the two to learn how realistic a face must be before we’ll consider it to be alive. It turns out that you have to be about two-thirds of the way between doll and human (toward the human side) before people will stop thinking you’re an object and start treating you like a person. Not only did the Dartmouth scientists roughly determine the boundaries of the Uncanny Valley, they also studied how subjects were making their decisions. Short answer: it’s all in the eyes. You can look at examples of the images that test subjects were asked to judge in the videos below. If you watch closely, you can see a face go from caring to creepy to cute and back again, all in a few seconds. Block out the eyes and it almost seems like you skip “creepy”. Cool.

The study was performed by Christine Looser, a Dartmouth PhD student, and her advisor, Thalia Wheatley, and appears in a recent issue of Psychological Science. Wheatley’s lab examines a wide range of questions about how humans relate to one another. For this experiment, Looser took human faces and matched each to the doll face that most closely resembled it. The two faces were then morphed together with photo-altering software to produce a series of images along the human-doll continuum. The following videos show two examples of what it looks like to scan across that continuum and back again. The subjects in the study, however, were shown still images.
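If you want a feel for how such a continuum is built, here is a minimal sketch in Python. It assumes you already have two aligned, same-size photos (hypothetical files human.jpg and doll.jpg) and uses a plain pixel cross-dissolve via Pillow's Image.blend; the researchers used dedicated morphing software that warps facial features, so this is only the crudest stand-in for illustration.

```python
# Illustrative sketch only: a simple cross-dissolve between a doll photo and a
# human photo to produce steps along a doll-to-human continuum. The actual
# study used proper face-morphing software, not a pixel blend.
from PIL import Image

def make_continuum(human_path, doll_path, steps=11):
    human = Image.open(human_path).convert("RGB")
    doll = Image.open(doll_path).convert("RGB").resize(human.size)

    frames = []
    for i in range(steps):
        alpha = i / (steps - 1)  # 0.0 = pure doll, 1.0 = pure human
        # Image.blend mixes pixels: (1 - alpha) * doll + alpha * human
        frames.append(Image.blend(doll, human, alpha))
    return frames

# Save each step so you can scan across the continuum by eye.
for i, frame in enumerate(make_continuum("human.jpg", "doll.jpg")):
    frame.save(f"morph_{i:02d}.png")
```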



Test subjects looked at a photo taken from the human-doll continuum and were asked whether the picture showed a doll or a human. Two months later, the same subjects were asked to look at the same images and determine whether the person shown had a mind. Looser and Wheatley found that images start to be perceived as human, and as possessing a human mind, at about 67% of the way along the continuum. In other words, if a face gets any more doll-like than that, people will no longer believe it is alive.
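To make that “two-thirds point” concrete: if you had the fraction of “human” responses at several points along the continuum, you could estimate the category boundary by fitting a sigmoid and reading off where responses cross 50%. The sketch below is purely illustrative, with made-up numbers; it is not the authors’ reported analysis.

```python
# Illustrative only: estimating a category boundary from doll/human judgments
# along a morph continuum. The data are invented for demonstration.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # x0 is the 50% crossover point; k controls the steepness of the curve
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

# Morph level (0 = doll, 1 = human) and fraction of "human" responses.
morph_level = np.array([0.0, 0.2, 0.4, 0.6, 0.7, 0.8, 1.0])
p_human     = np.array([0.02, 0.05, 0.15, 0.40, 0.65, 0.90, 0.99])

(x0, k), _ = curve_fit(logistic, morph_level, p_human, p0=[0.6, 10.0])
print(f"Estimated boundary: {x0:.2f} of the way from doll to human")
```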

A different experiment determined that test subjects looked longest at an image’s eyes, rather than the mouth, nose, or skin, before categorizing it as human or doll. From that focus, Wheatley and Looser suggest that the facial cues humans use to recognize living things are clustered around the eyes, though not necessarily on the eyeball itself. By studying those cues, humans are able to quickly determine whether a face belongs to another person or is simply something that looks like a face by accident. This discerning skill is probably what kept our evolutionary ancestors from talking to human-looking rocks while still being able to form bonds with their family members.

I always find this kind of psychological research fascinating, though I worry about certain limitations of the study. Let’s skip my general disapproval of using student volunteers as test subjects (ask me about this when you see me at a conference and we’ll discuss it for hours, trust me) and just focus on the breadth of the experiment. By design, Looser and Wheatley were only interested in static visual cues. This ignores the importance of faces in motion (though they’ve looked at the human response to moving objects in other experiments), and doesn’t address audio input. A doll that appears two-thirds human is still going to be really creepy if it moves in a jerky way, or if it sounds like Stephen Hawking. Measuring when humans will accept a static image as a person is valuable data, but it’s only part of the understanding that we need.

Robotics engineers and digital artists struggle with creating realistic-looking artificial humans that viewers will respond to positively. Hollywood will typically make their characters cartoonish to avoid the Uncanny Valley, though recent attempts such as Avatar seemed to push the boundaries of what could be accomplished. With robots, the Uncanny Valley at times seems like an insurmountable obstacle – some of the life-like faces created seem almost acceptable while others will make your skin crawl for days.

Researchers like Looser and Wheatley have given us valuable insight into how we can create artificial people with whom we can identify. However, there is still a lot of work to be done before we’ll be able to truly make robots and virtual characters appear more human than fake. Even then, it will take years before we can give them personalities that we will want to love. In the meantime, I think I’m going to work on creating an augmented reality iPhone program that removes the eyes from images of people. I’ll call it: “The Creepiest Freakin’ App You’ll Ever Own.” Should be a hot seller next October.

[image credits: Christine Looser/Dartmouth College]

[sources: Science News, Looser and Wheatley 2010 Psychological Science, Wheatley Lab]

Discussion — 5 Responses

  • jt foster January 10, 2011 on 10:07 pm

    Eyes are certainly important, but hair is also important for a totally different reason. Right now we just don’t have the computing power to render all 100,000 hairs on a person’s head individually, so we utilize shortcuts to make it look as good as it can. It still doesn’t quite hold up in most situations.

    • Joey1058 jt foster January 11, 2011 on 5:14 pm

      I can understand the issue with hair. But I just want to mention the effort that Pixar made when they were animating Sulley in “Monsters, Inc”. Maybe they pulled that off realistically because he was an alien character?

  • FreeJack2k2 January 11, 2011 on 12:19 am

We are far closer to human-looking CGI when you’re talking about static images. It’s in the animation of those human-looking faces where we fall down. Even with facial motion-capture, animators still have not been able to properly replicate the interaction of skin and muscle, the tightening and loosening of wrinkles at key spots, little things like lips sticking together slightly when talking…there is just a litany of “little things” that, interacting with and watching other human beings all day every day, we take for granted, expect, and recognize when they’re missing. This is why, as advanced as robotics may get, I don’t think we can ever expect a robot’s face to be truly convincing as a human being. Close, maybe…but you’ll never be fooled.

    • Sumatra FreeJack2k2 January 11, 2011 on 6:49 am

I agree with FreeJack2k2 about animation. I’ve played a lot of video games and, while in static images you can see a good face render, in animation you can see that the eyes are not moving. In better animations you might see the eyes move, but it feels like they aren’t actually looking at the object they’re supposed to follow. It feels like the eyes are gazing into the distance while talking to a nearby person.

      But I see that this issue is becoming less and less obvious in every new game.

  • Joey1058 January 11, 2011 on 5:34 pm

    All the previous comments touch on good points. CG has come an amazingly long way in just twenty years. I’m still personally amazed at how eight bit graphics have morphed into things like this video of Kinect manipulation: http://www.youtube.com/watch?v=bQREhd9iT38

But on the other hand, when we get freaked out by a face looking out of our screens, we just need to rein ourselves in by reminding ourselves that it’s just a program. How are we going to do that in physical reality when a bot we have to interact with on an almost daily basis looks at us? It might not happen in my lifetime, but the iPhone generation will definitely have to face these issues.