The Uncanniest Valley: What Happens When Robots Know Us Better Than We Know Ourselves?


The “uncanny valley” is a term coined by Japanese roboticist Masahiro Mori in 1970 to describe a strange fact: as robots become more human-like, we relate to them better, but only up to a point. Past that point, our affinity collapses, and the “uncanny valley” is the name for that drop.

The issue is that as robots approach true human mimicry, looking and moving almost, but not exactly, like a real human, real humans react with a deep and violent sense of revulsion.
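
Mori even drew this relationship as a curve: our affinity for an artificial agent climbs as it gets more human-like, crashes into the valley just short of full mimicry, and only recovers once the mimicry becomes indistinguishable from the real thing. Here’s a toy sketch in Python of that shape (the function and its constants are purely illustrative, not Mori’s actual data):

```python
import math

# Toy model of Mori's curve (illustrative constants, not real data):
# affinity rises with human-likeness h (0..1), collapses into a "valley"
# near h ~ 0.85, then recovers as mimicry becomes indistinguishable.
def affinity(h: float) -> float:
    valley = 1.6 * math.exp(-((h - 0.85) ** 2) / 0.005)  # the uncanny dip
    return h - valley

# Crude ASCII plot: watch affinity climb, crash, and recover.
for i in range(21):
    h = i / 20
    a = affinity(h)
    bar = "#" * max(0, int((a + 1.0) * 20))
    print(f"h={h:.2f}  affinity={a:+.2f}  {bar}")
```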

This is evolution at work. Biologically, revulsion is a subset of disgust, one of our most fundamental emotions and a by-product of evolution’s early need to keep an organism from eating foods that could harm it. Since survival is at stake, disgust functions less like a normal emotion and more like a phobia: a nearly unshakable, hard-wired reaction.

Psychologist Paul Ekman discovered that disgust, alongside anger, surprise, fear, joy, and sadness, is one of the six universally recognized emotions. And the depth of this emotion (meaning its incredibly long and critically important evolutionary history) is why researchers have also found that in marriages, once one partner starts feeling disgust for the other, the result is almost always divorce.

Why? Because once disgust shows up, the brain of the disgust-feeler starts processing the other person (i.e., the disgust trigger) as a toxin. Not only does this bring on an unshakable sense of revulsion (a get-me-the-hell-away-from-this-toxic-thing response), it dehumanizes the other person, making it much harder for the disgust-feeler to feel empathy. Both spell doom for relationships.

Now, disgust comes in three flavors. Pathogenic disgust refers to what happens when we encounter infectious microorganisms; moral disgust pertains to social transgressions like lying, cheating, stealing, raping, and killing; and sexual disgust emerges from our desire to avoid procreating with “biologically costly mates.” And it is both pathogenic and sexual disgust that create the uncanny valley.

To protect us from biologically costly mates, the brain’s pattern recognition system has a hair-trigger mechanism for spotting signs of low fertility and ill health. Something that acts almost human, but not quite, reads, to that system, as illness.

And this is exactly what goes wrong with robots. When the brain detects human-like features (that is, when we recognize a member of our own species), we tend to pay more attention. But when those features don’t exactly add up to human, we read the mismatch as a sign of disease, meaning the close-but-no-cigar robot registers as both a costly mate and a toxic substance, and our reaction is deep disgust.


Repliee Q2, photographed at Index Osaka. Note: the model for Repliee Q2 is probably the same as for Repliee Q1expo, Ayako Fujii, an announcer for NHK.

But the uncanny valley is only the first step in what will soon be a much more peculiar process, one that will fundamentally reshape our consciousness. To explore this process, I want to introduce a downstream extension of the principle: call it the uncanniest valley.

The idea here is complicated, but it starts with the very simple fact that every species knows (and I’m using this word to cover both cognitive awareness and genetic awareness) its own species best. This knowledge base is what philosopher Thomas Nagel explored in his classic paper on consciousness, “What Is It Like to Be a Bat?” In that essay, Nagel argues that you can’t ever really understand the consciousness of another species (that is, what it’s like to be a bat), because each species’ perceptual systems are hyper-tuned and hyper-sensitive to its own sensory inputs and experiences. In other words, in the same way that “game recognizes game” (to borrow a phrase from LL Cool J), species recognize species.

And this brings us to Ellie, the world’s first robo-shrink. Funded by DARPA and developed by researchers at USC’s Institute for Creative Technologies, Ellie is an early-iteration, computer-simulated psychologist: a complicated piece of software designed to identify signals of depression and other mental health problems through an assortment of real-time sensors. (She was developed to help treat PTSD in soldiers and, hopefully, to decrease the incredibly high rate of military suicides.)

At a technological level, Ellie combines a video camera to track facial expressions, a Microsoft Kinect movement sensor to track gestures and jerks, and a microphone to capture inflection and tone. At a psychological level, Ellie evolved from the suspicion that our twitches and twerks and tones reveal much more about our inner state than our words (thus Ellie tracks 60 different “features,” everything from voice pitch to eye gaze to head tilt). As USC psychologist Albert Rizzo, one of the leads on the project, told NPR: “[P]eople are in a constant state of impression management. They’ve got their true self and the self that they want to project to the world. And we know that the body displays things that sometimes people try to keep contained.”
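
To make that pipeline concrete, here’s a minimal sketch of how multimodal features like these might be fused into a single distress score. Everything below (the feature names, the weights, the logistic fusion) is a hypothetical illustration, not the actual Ellie/SimSensei code:

```python
import math
from dataclasses import dataclass

# Hypothetical illustration only -- not USC's actual software.
# Each modality (camera, Kinect, microphone) contributes normalized features.

@dataclass
class SessionFeatures:
    voice_pitch_variance: float  # microphone: flattened prosody can signal depression
    eye_gaze_aversion: float     # camera: fraction of time gaze is averted
    head_tilt_downward: float    # Kinect: share of frames with downward head pose
    smile_intensity: float       # camera: mean smile activation

# Illustrative weights; a real system would learn these from labeled sessions.
WEIGHTS = {
    "voice_pitch_variance": -1.2,  # less vocal variation -> higher risk
    "eye_gaze_aversion": 0.9,
    "head_tilt_downward": 0.7,
    "smile_intensity": -1.5,
}
BIAS = 0.1

def distress_score(f: SessionFeatures) -> float:
    """Fuse the multimodal features into a 0-1 distress estimate (logistic model)."""
    z = BIAS + sum(w * getattr(f, name) for name, w in WEIGHTS.items())
    return 1.0 / (1.0 + math.exp(-z))

if __name__ == "__main__":
    session = SessionFeatures(
        voice_pitch_variance=0.2,
        eye_gaze_aversion=0.8,
        head_tilt_downward=0.6,
        smile_intensity=0.1,
    )
    print(f"Estimated distress score: {distress_score(session):.2f}")
```

The point of the sketch is how mundane the fusion step is; the hard work lives in reliably extracting those 60 features from raw video, depth, and audio.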

More recently, a new study found that patients are much more willing to open up to a robot shrink than to a human shrink. Here’s how Neuroscience News explained it: “The mere belief that participants were interacting with only a computer made them more open and honest, researchers found, even when the virtual human asked personal questions such as, ‘What’s something you feel guilty about?’ or ‘Tell me about an event, or something that you wish you could erase from your memory.’ In addition, video analysis of the study subjects’ facial expressions showed that they were also more likely to show more intense signs of sadness — perhaps the most vulnerable of expressions — when they thought only pixels were present.”

The reason for this success is pretty straightforward. Robots don’t judge. Humans do.

But this development also tells us a few things about our near future. First, while most people are now aware of the fact that robots are going to steal a ton of jobs in the next 20 years, the jobs that most people think are vulnerable are of the blue-collar variety. Ellie is one reason to disabuse yourself of this notion.

As a result of this coming replacement, two major issues will soon arise. The first is economic. There are about 607,000 social workers in America, 93,000 practicing psychologists, and roughly 50,000 psychiatrists. But, well, with Ellie 2.0 in the pipeline, not for long. (It’s also worth noting that these professions generate about $3.5 billion in annual income, which, assuming robo-therapy is much, much cheaper than human therapy, will also vanish from the economy.)

But the second issue is philosophical, and this is where the uncanniest valley comes back into the picture. Now, for sure, this particular valley is still hypothetical, and thus based on a few assumptions. So let’s drill down a bit.

The first assumption is that social workers, psychologists, and psychiatrists constitute a deep knowledge base, arguably one of our greatest repositories of “about human” information.

Second, we can also assume that Ellie is going to get better and better over time. This is no great stretch, since we know all the technologies that combine to make robo-psychologists possible are, as was well documented in Abundance, accelerating on exponential growth curves. This means that sooner or later, in a psychological version of the Tricorder, we’re going to have an AI that knows us as well as we know ourselves.

Third, also as a result of this technological acceleration, we can assume there will soon come a time when an AI can train up a robo-therapist better than a human can. Again, no great stretch, because all we’re really talking about is access to a huge database of psychological data combined with ultra-accurate pattern recognition, two developments that are already possible.
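
As a sketch of what “training up” could mean in practice, here’s a minimal, hypothetical example: fit an off-the-shelf pattern recognizer on a stand-in archive of labeled session features. A real clinical system would be vastly more careful, but the core machinery is roughly this mundane:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Stand-in for the "huge database": 10,000 past sessions x 60 tracked
# features (voice pitch, eye gaze, head tilt, ...), with synthetic labels
# playing the role of clinician judgments (True = flagged for follow-up).
X = rng.normal(size=(10_000, 60))
true_weights = rng.normal(size=60)
y = (X @ true_weights + rng.normal(scale=0.5, size=10_000)) > 0

# "Ultra-accurate pattern recognition," toy edition.
model = LogisticRegression(max_iter=1_000).fit(X, y)

# A new session's features -> probability the session warrants escalation.
new_session = rng.normal(size=(1, 60))
print(f"Follow-up probability: {model.predict_proba(new_session)[0, 1]:.2f}")
```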

But here’s the thing—when you add this up, what you start to realize is that sooner or later robots will know us better than we know ourselves. In Nagel’s terms, we will no longer be the species that understands our species the best. This is the Uncanniest Valley.

And just as the uncanny valley produces disgust, I’m betting that the uncanniest valley produces a nearly unstoppable fear reaction: a brand-new kind of mortal terror, the downstream result of what happens when the self loses its evolutionarily unparalleled understanding of the self.

Perhaps this will be temporary. It’s not hard to imagine that our journey through this valley will turn out to be fortunate. For certain, the better we know ourselves (and it doesn’t really matter where that knowledge comes from), the better we can care for and optimize ourselves.

Yet I think the fear response produced by the uncanniest valley will work much like disgust in relationships; that is, this fear will be extremely hard to shake.

But even if I’m wrong, one thing is for certain: we’re heading toward an inflection point almost without equal, the point in time when we lose a lot more of ourselves, literally, to technology. It’s one more reason that life in the 21st century is about to get a lot more Blade Runner.

More human than human? You betcha. Stay tuned.

[Photo credits: Robert Couse-Baker/Flickr, Wikipedia, Steve Jurvetson/Flickr]

Discussion — 12 Responses

  • DarkRoseAM July 29, 2014 on 10:14 am

    I think there’s a difference between understanding something and understanding what it’s like to be something, which is why you bring up Nagel’s bat essay. Humans might understand more about the lives of bats, but we’ll never understand what it’s like to actually be a bat. In the same way, robots might understand a lot about the lives of humans. However, they won’t know what it’s like to be human, any more than we’ll know what it’s like to be a bat.

    • Matthew DarkRoseAM July 31, 2014 on 8:11 am

      while you make a good point, …it will understand us.

      check this out:
      http://singularityhub.com/2014/01/09/its-alive-artificial-life-worm-wiggles-on-its-own/

      ^ open source, supercomputer simulation of a nematode. something tells me it knows what it’s like to be a worm (check out the video, it’s definitely wiggling). albeit a very lonely worm in a computational abyss of virtual 3d space.

      we don’t have the technology to simulate a human yet, but in 15 years we will. and then in 10 years beyond that we could simulate every molecule of 7 billion of us, and observe millions of years of its evolution in compressed time and play any part of its 100-million-year evolution with 100% accuracy on demand. maybe we already live in the matrix. that’s what some physicists think: the universe is digital. personally, I feel like it’s inconsequential what’s digital or analogue if reality proliferates infinitely inside and outside of everything. (for example, our entire universe could be a single electron flying through a microchip in a very large computer). all that matters is that we maintain our integrity, whatever we decide that is, through ‘coherent extrapolated volition’ with our powerful AGI friends (not overlords). and then decide where we want to go and what we want to do in infinite possible realities.

  • Cauri Jaye July 29, 2014 on 10:31 am

    So many great ideas and quotes here: joblessness; the fear trigger when robots know us better than ourselves; why we preen and celebrate good looks; and more.

    I only missed the exploration of how the integration of man and AI will affect this.

  • Telcomcorp July 29, 2014 on 1:35 pm

    very interesting article covering the interaction of sapien/human psychology with machines made to look and act like us.

    What’s the difference between a robot and an android again?
    we even have an OS named such that runs some computer devices, but we’re not up to actual autonomous androids yet….

  • Rustbucket July 30, 2014 on 3:50 pm

    When robots have a high capability to read humans (or people think they do) these capabilities will be utilized in not so nice ways, such as making decisions regarding hiring and firing, or law enforcement or national security assessments.

    • Matthew Rustbucket July 31, 2014 on 8:18 am

      you are making linear predictions based on your cross section of experiential existence in this space/time. we exist in a tiny snapshot of the human experience that is pretty peculiar and unique and uninstinctive. we no longer automatically hunt/gather kill/rape/eat everything possible to self sustain. we automate everything and in the future we will return to a more impulsive and instinctive (yet humane, ecologically sustainable, and enlightened) existence. not a bunch of papers and bureaucracy. in fact, we’re already halfway there. homicides, crime, disease, hate crimes, economic disparity are all on a sharp decline (despite how it seems in the media, that’s not the most of what there is). and we live 3x longer than a “normal” human lifespan of 30 years. and we outsource our nervous system to smartphones: augmented intelligence. welcome to the technological singularity. survive this world another 15 or 20 years and you will be a god. (assuming we get our ecological act together and make it that far without the 1% wiping us out. it’s pretty all or nothing).

  • Matthew July 31, 2014 on 6:09 am

    yep, it will produce fear. OR an accelerating, constantly evolving, optimized solution to every conceivable human rights and ecological problem we can imagine. listen… if the NSA and robo-psychological brain control means that we can stop seeing massacres/children getting blown up at schools/shopping malls/movie theatres every year/month/week/day, then bring on the mind control. they’re there to protect us. the standard of living and human rights are better than ever. yet we mass produce legal weapons and we still act surprised when a shooting happens…? we have exponentially growing GDP yet we section out poor people and minorities to foot the bill for taxes while corporations who have disenfranchised us get to use all of our infrastructure roads/bridges/ships/planes and often pay no taxes while leeching billions out of circulation in this new oligarchical plutocracy masquerading as capitalism. capital must circulate. minimum wage needs to be corrected (not raised) to reflect inflation, and increased productivity from the most highly educated and hardest working generation ever (not a lazy entitlement generation). guess who will benefit the most? THE RICH. so which of them will end this neglect and genocide on the poor and arise as humanity’s benevolent benefactor? time to utilize our tools for good and raise the bar to create a better world we all know is possible. the only thing to fear is the past… the good old days weren’t so good.

    • Ian Kidd Matthew July 31, 2014 on 10:26 pm

      “They’re there to protect us”. Jesus, the naivety.

    • Telcomcorp Matthew August 1, 2014 on 11:49 am

      ah Matthew, you sound like such an optimist. however, I share many of your hopes and dreams. we have the technological capability to eliminate all poverty and thus 98 percent of illness; it’s the pesky crime thing, born of aggression and ignorance, that saps the will.
      I foresee, or hope for, cybernetic integration of our computer technologies. imagine a noninvasive brain-computer interface as pervasive as current portable PCs (I mean smartphones)! Imagine our current phenotypes and physiology thoroughly comprehended and malleable, limited only by our creativity.

      I already experience access to data and devices I didn’t even dream were possible, and I was and still am a sci-fi fan. but even now the overwhelming burdens of poverty, illness, and crime make even the dream of a solution unattainable. progress notwithstanding, these are, have been, and continue to be for the foreseeable future the bad old days.

  • LC August 3, 2014 on 12:08 pm

    The themes in this piece may be based on a dichotomy that will be crumbling into falsehood in coming decades: the human-versus-machine dichotomy. A theme here is “the humans versus the robots, and the fear within the former of the growing abilities of the latter”. But a big assumption underlying this theme, which may turn out to be entirely unfounded, is that humans and robots are going to remain separate categories. By the time machines understand us better than the best psychologists now do, we will probably be capable of running at least some kinds of useful interfaces between brain and computer (https://en.wikipedia.org/wiki/Brain%E2%80%93computer_interface). These already exist in primitive form, as primate brains can now control prosthetic arms to some degree. If they develop further, just like all IT is developing further, then maybe if a machine understands something, augmented human brains will have access to that understanding as well — at least its practical results (such as conflict resolution) if not its deepest grokking (which is over our heads). Maybe it will become possible to port an empathy app to brains that lack endogenous empathy or have only shitty rudiments of it (and you may be talking about a quarter of all humans there, or more). Such IT developments could turn out to be a good thing, as they may be the only thing that saves humans from wiping themselves out with bad decisions (such as nuclear war and environmental destruction). Many humans don’t have what it takes, neurally, to avoid abusing, killing, and hating each other. So augmenting their puny brains with brain-machine interfaces may actually be an improvement. Of course, how do you keep it from being misused by Big Brother (turning most of us into cyborg sheep)? That’s an extremely difficult question. But we may not have any choice but to attempt to answer it, as the IT is probably coming whether we want it to or not.