Interacting with modern-day Alexa, Siri, and other chatterbots can be fun, but as personal assistants, these chatterbots can seem a little impersonal. What if, instead of asking them to turn the lights off, you were asking them how to mend a broken heart? New research from Japanese company NTT Resonant is attempting to make this a reality.
Teaching machines to understand human language can be a frustrating experience, as the scientists who’ve worked on AI and language over the last 60 years can attest.
Nowadays, we have algorithms that can transcribe most human speech, natural language processors that can answer some fairly complicated questions, and Twitter bots that can be programmed to produce what seems like coherent English. Nevertheless, when they interact with actual humans, it is readily apparent that AIs don’t truly understand us. They can memorize a string of definitions of words, for example, but they might be unable to rephrase a sentence or explain what it means: total recall, zero comprehension.
Advances like Stanford’s sentiment analysis attempt to add context to strings of characters, in the form of the emotional implications of words. But it’s not foolproof, and few AIs can provide what you might call emotionally appropriate responses.
The real question is whether neural networks need to understand us to be useful. Their flexible structure, which allows them to be trained on a vast array of initial data, can produce some astonishing, uncanny-valley-like results.
Andrej Karpathy’s blog post, The Unreasonable Effectiveness of Recurrent Neural Networks, pointed out that even a character-based neural net can produce responses that seem very realistic. The layers of neurons in the net are only associating individual letters with each other, statistically—they can perhaps “remember” a word’s worth of context—yet, as Karpathy showed, such a network can produce realistic-sounding (if incoherent) Shakespearean dialogue. It is learning both the rules of English and the Bard’s style from his works: far more sophisticated than an infinite number of monkeys on an infinite number of typewriters (I used the same neural network on my own writing and on the tweets of Donald Trump).
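To get a feel for how surprisingly far pure letter-by-letter statistics can go, here is a minimal sketch—not Karpathy’s recurrent neural network, but an even simpler character-level Markov model. It only records which character tends to follow each short run of characters, yet trained on enough text it will reproduce plausible-looking words and style:

```python
import random
from collections import defaultdict

def train_char_model(text, order=4):
    """Map each `order`-character context to the characters that follow it."""
    model = defaultdict(list)
    for i in range(len(text) - order):
        context = text[i:i + order]
        model[context].append(text[i + order])
    return model

def generate(model, seed, length=200):
    """Sample one character at a time, conditioned only on recent letters."""
    out = seed
    order = len(seed)
    for _ in range(length):
        choices = model.get(out[-order:])
        if not choices:
            break  # context never seen in training; stop generating
        out += random.choice(choices)
    return out

# Toy corpus; a real experiment would use the complete works of Shakespeare.
corpus = "to be or not to be that is the question " * 20
model = train_char_model(corpus, order=4)
print(generate(model, "to b", length=60))
```

The model has no notion of words, grammar, or meaning—only letter statistics—yet its output already looks like English, which is the point Karpathy’s far more powerful network drives home.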
The questions AIs typically answer—about bus schedules, or movie reviews, say—are called “factoid” questions; the answer you want is pure information, with no emotional or opinionated content.
But researchers in Japan have developed an AI that can dispense relationship and dating advice, a kind of cyber-agony aunt or virtual advice columnist. It’s called “Oshi-El.” They trained the machine on hundreds of thousands of pages of a web forum where people ask for and give love advice.
“Most chatbots today are only able to give you very short answers, and mainly just for factual questions,” says Makoto Nakatsuji at NTT Resonant. “Questions about love, especially in Japan, can often be a page long and complicated. They include a lot of context like family or school, which makes it hard to generate long and satisfying answers.”
The key insight they used to guide the neural net is that people are actually often expecting fairly generic advice: “It begins with a sympathy sentence (e.g. “You are struggling too.”), next it states a conclusion sentence (e.g. “I think you should make a declaration of love to her as soon as possible.”), then it supplements the conclusion with a supplemental sentence (e.g. “If you are too late, she maybe fall in love with someone else.”), and finally it ends with an encouragement sentence (e.g. “Good luck!”).”
Sympathy, suggestion, supplemental evidence, encouragement. Can we really boil down the perfect shoulder to cry on to such a simple formula?
“I can see this is a difficult time for you. I understand your feelings,” says Oshi-El in response to a 30-year-old woman. “I think the younger one has some feelings for you. He opened up himself to you and it sounds like the situation is not bad. If he doesn’t want to have a relationship with you, he would turn down your approach. I support your happiness. Keep it going!”
Oshi-El’s job is perhaps made simpler by the fact that many people ask similar questions about their love lives. One such question is, “Will a distance relationship ruin love?” Oshi-El’s advice? “Distance cannot ruin true love” and the supplemental “Distance certainly tests your love.” So AI could easily appear to be far more intelligent than it is, simply by identifying keywords in the question and associating them with appropriate, generic responses. If that sounds unimpressive, though, just ask yourself: when my friends ask me for advice, do I do anything different?
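A crude sketch of that keyword-matching strategy might look like the following. To be clear, this is hypothetical—Oshi-El itself is a trained neural network, not a lookup table—but it shows how far the four-part formula (sympathy, conclusion, supplement, encouragement) plus a few canned responses can get you. The keyword entries and response strings here are invented for illustration, loosely echoing the article’s examples:

```python
# Hypothetical canned responses: keyword -> (conclusion, supplement).
CANNED = {
    "distance": ("Distance cannot ruin true love.",
                 "Distance certainly tests your love."),
    "confess":  ("I think you should make a declaration of love as soon as possible.",
                 "If you wait too long, the moment may pass."),
}

def advise(question):
    """Assemble sympathy + conclusion + supplement + encouragement."""
    question = question.lower()
    for keyword, (conclusion, supplement) in CANNED.items():
        if keyword in question:
            return " ".join(["I can see this is a difficult time for you.",
                             conclusion, supplement, "Good luck!"])
    # No keyword matched: fall back to pure sympathy and encouragement.
    return "I understand your feelings. Take your time. Good luck!"

print(advise("Will a distance relationship ruin love?"))
```

A responder this shallow would fall apart on the page-long, context-heavy questions Nakatsuji describes—which is exactly why the real system needs a neural net—but for the most common questions, the difference may be hard to spot.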
In AI today, we are exploring the limits of what can be achieved without a real, conceptual understanding.
Algorithms seek to maximize functions—whether that’s by matching their output to the training data, in the case of these neural nets, or by playing the optimal moves, as AlphaGo does at Go. It has turned out, of course, that computers can far out-calculate us while having no concept of what a number is: they can out-play us at chess without understanding a “piece” beyond the mathematical rules that define it. It may well be that a far greater fraction of what makes us human can be abstracted away into mathematics and pattern-recognition than we’d like to believe.
The responses from Oshi-El are still a little generic and robotic, but the potential of training such a machine on millions of relationship stories and comforting words is tantalizing. The idea behind Oshi-El hints at an uncomfortable question that has underlain AI development since the very beginning: how much of what we consider fundamentally human can actually be reduced to algorithms, or learned by a machine?
Someday, the AI agony aunt could dispense advice that’s more accurate—and more comforting—than many people can give. Will it still ring hollow then?