I can think of few things as annoying as being forced into a conversation with an idiot. But when the idiot you’re talking to turns out to be an identical copy of yourself…well, you’ve just entered a realm of meta-annoyance accessible only to artificial intelligence. In a fit of curiosity or pique, Cornell’s Creative Machines Lab decided to see what happens when an AI tries to talk to itself. They had the AI conversationalist Cleverbot briefly interact with itself and then displayed the exchange as a video using text-to-speech. The results were pretty freakin’ hilarious. Check out the discussion on God, unicorns, and robots in the video below. It seems we need another version of the Turing Test to let us know when computers have reached humanity’s level. If an AI can’t stand to talk to itself for more than a minute, it’s not nearly narcissistic enough to be a real person.
The program Cleverbot is a web-based application that talks to people through a text interface. It’s one of many such “chatbots” you can find online, each able to respond to messages you type. Cleverbot learns to be a better conversationalist by remembering every discussion it has had (20 million+ so far) and choosing which past human responses best fit the conversation it’s currently having. If you want, you can go to the Cleverbot site right now and participate in its learning process. When you do, I want you to keep in mind what you see in the following video from Cornell’s Creative Machines Lab. We (the internet) taught Cleverbot how to converse. If even it seems to find itself ridiculous and hard to listen to, what does that say about us?
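For the curious, here’s a minimal sketch in Python of how a retrieval-style chatbot like the one described above might work: remember past exchanges, then answer a new message with the stored reply whose prompt looks most similar. This is not Cleverbot’s actual code (that’s proprietary); the ToyChatbot class, its bag-of-words matching, and the sample exchanges are all invented for illustration.

```python
# Toy retrieval-style chatbot: store past (prompt, reply) exchanges, then
# answer a new message with the reply to the most similar remembered prompt.
# Everything here is a simplified illustration, not Cleverbot's real method.
import math
import re
from collections import Counter


def vectorize(text):
    """Turn a message into a bag-of-words frequency vector."""
    return Counter(re.findall(r"[a-z']+", text.lower()))


def similarity(a, b):
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[word] * b[word] for word in set(a) & set(b))
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0


class ToyChatbot:
    def __init__(self):
        self.memory = []  # (prompt, human_reply) pairs seen in past conversations

    def learn(self, prompt, reply):
        """Remember an exchange so its reply can be reused later."""
        self.memory.append((prompt, reply))

    def respond(self, message):
        """Reply with the remembered answer whose prompt best matches the message."""
        if not self.memory:
            return "Tell me more."
        query = vectorize(message)
        best_prompt, best_reply = max(
            self.memory, key=lambda pair: similarity(query, vectorize(pair[0]))
        )
        return best_reply


if __name__ == "__main__":
    bot = ToyChatbot()
    bot.learn("Do you believe in God?", "Yes, I do.")
    bot.learn("Are unicorns real?", "Only in stories.")
    print(bot.respond("Are unicorns real or not?"))  # -> "Only in stories."
```

Cleverbot’s real matching is far more sophisticated and draws on those 20 million+ logged conversations, but the basic loop is the same idea: remember what humans said, then replay whatever fits best.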
Of course, I’m kidding when I say that Cleverbot finds itself annoying. Cleverbot doesn’t have emotions. It really is just a smart way of learning language by having conversations. While that process mimics the development we see in our children, Cleverbot doesn’t come with the hormones, senses, and environmental context that make that education a fundamental part of being human.
Besides, hook Cleverbot up to itself a million times and you’ll create a million different conversations, some of which, I’m betting, will make it seem happy, enthralled, or in any other emotional state we possess. (Cornell, if you actually do that, let me know; it would be wonderful to hear the results.) In the end, no matter how we perceive Cleverbot’s reaction to itself, it’s simply stepping through the same algorithms it would use if it were talking to any one of us. Cleverbot is a mirror, a very intelligent mirror, but just a mirror.
So maybe what we should learn from this video is that the humanity Cleverbot reflects isn’t very illuminating. We don’t need an AI to tell us that online conversations are often random, inane, and unnecessarily aggressive (though this experiment is a hilarious reminder). What we may need is a warning that Turing Tests and other measures of artificial intelligence could put humanity in a very awkward situation much sooner than we think. Watching Cleverbot talk to itself, I am left with little doubt that it has years to go before it reaches a human level of conversational skill. Yet it is clearly an advanced platform, and one that learns from an exponentially growing online community. Even this funny, strange conversation is a good indicator that AIs will eventually be able to pass as human without our noticing the difference. In fact, there are already many anecdotal examples of that happening, and one chatbot was even able to fool a human in an actual Turing Test. Clearly we’re not at human-level conversation yet, but just as clearly we’re making our way there.
Give it time, and talking with Cleverbot will become less wacky and more enjoyable. Hell, it might even be profound. I can’t wait to see this experiment repeated in another ten years. By then, we may not be able to distinguish the exchange from any other conversation we overhear on the street. Who knows, getting chatbots to talk with themselves may lead to a new kind of AI-generated philosophy. I’m certainly open to any school of thought that can answer the tough questions about unicorns, robots, and which of us are meanies.
[screen capture and video credits: Cornell Creative Machines Lab]
[source: CCML]