If you’re like me, then you want to destroy most of the computerized phone answering services you encounter. No, I didn’t say I wanted to “walk a puma,” I said I wanted to “talk to a human,” you insipid AI! Well, get ready to lose some of your rage, because I’ve found a phone-answering computer that doesn’t trigger my machine-killing instincts. Developed by a company called Smart Action, these systems are known as Virtual Agents, and they’re engineered using research into artificial general intelligence. Listen to one in action in the clip below – you may be surprised by how little you hate it.
Peter Voss and his company Adaptive AI (A2I2) spent the early part of the 21st century pursuing artificial general intelligence (AGI) – the most powerful and broadly defined category of computer thinking. Now their spin-off company, Smart Action, uses that AGI research to create Virtual Agents: narrow (limited) AIs capable of providing dynamic and variable responses during phone calls. I was able to talk with Peter Voss and get the inside scoop on the importance of Virtual Agents and understand how they might help us achieve true artificial general intelligence.
Smart Action’s Virtual Agents sound the same as many other disembodied voices you’ve heard on the phone, but their capabilities are much better. The VAs have both short-term and long-term memory – able to recall earlier parts of the current conversation as well as previous calls you have made. They can generate multiple hypotheses for any given input and acquire most of their skills from training, not programming. Put together, this means that the VA can participate in a more human-like conversation, which allows for some precise applications: UC San Diego (working for the CDC) is using a Virtual Agent to track illnesses at elementary schools by having parents call in and describe symptoms. As powerful as the VAs may be, they still sound very robotic, as you’ll hear in the demo.
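To make those three capabilities concrete, here’s a toy sketch in Python – my own illustration, not Smart Action’s actual system. It keeps short-term memory (the current call) and long-term memory (past calls per caller), and scores several hypotheses about what an utterance means instead of committing to one. The intents and keywords are invented for the example.

```python
# Toy sketch (illustrative only, not Smart Action's code): a dialogue agent
# with short-term memory, long-term memory, and multiple-hypothesis intent
# scoring.

from collections import defaultdict


class ToyVirtualAgent:
    def __init__(self):
        self.long_term = defaultdict(list)  # caller_id -> utterances from past calls
        self.short_term = []                # utterances in the current call
        self.caller_id = None

    def start_call(self, caller_id):
        self.caller_id = caller_id
        self.short_term = []

    def interpret(self, utterance):
        """Score several candidate intents rather than committing to one."""
        candidates = {
            "report_symptoms": ("fever", "cough", "sick"),
            "reach_human":     ("human", "person", "agent"),
            "billing":         ("bill", "charge", "payment"),
        }
        words = utterance.lower().split()
        scores = {
            intent: sum(word in words for word in keywords)
            for intent, keywords in candidates.items()
        }
        best = max(scores, key=scores.get)
        return best if scores[best] > 0 else "unknown"

    def hear(self, utterance):
        self.short_term.append(utterance)
        intent = self.interpret(utterance)
        # Long-term memory lets the agent recognize a topic from an earlier call.
        repeat = any(intent == self.interpret(past)
                     for past in self.long_term[self.caller_id])
        return intent, repeat

    def end_call(self):
        self.long_term[self.caller_id].extend(self.short_term)
```

A symptom-tracking call like the UCSD example might then look like: the parent calls once to report a fever, and on a follow-up call the agent recognizes it has heard about this topic from them before.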
This clip is from a scripted example (on the human side) but you can still hear how the Virtual Agent uses context clues and ongoing input to provide a tailored response to the caller.
There’s a big difference between AGI and narrow AI. Want a computer that can be your friend? That’s AGI. Want a computer to optimize traffic lights? That’s narrow AI. There are many examples of narrow AI in the world already, and having a narrow AI answer a phone isn’t all that extraordinary. What is unusual is how the Smart Action Virtual Agents came into being. Between 2001 and 2007 (rough dates here) Adaptive AI was in stealth mode, quietly working on building an AGI. Peter Voss says that one of the applications of that work was creating a personal assistant. That program, while not robust enough to turn into a commercial product, could learn from its users to perform simple tasks – buy tickets, complete repetitive chores, that sort of thing. Adaptive AI took that prototype personal assistant, removed about two-thirds of its capabilities, and turned it into the VA system. Rather than build a narrow AI from the ground up, Voss and his team had taken a (primitive) AGI and stripped it down.
Which is why Smart Action can provide Virtual Agents at low cost. They are adapting one AGI program into each narrow AI application, rather than building a new AI system for every client. This gives Smart Action the luxury of not charging a client for new development – essentially a customer can start a VA system with zero up-front investment. Smart Action charges clients by the minute of use, which turns out to be about 15–25% of the cost of a traditional human call center. Seems like a pretty good business model to me. Smart Action has really only been on the scene for a year or so, but they have around 20 clients, half still in development. According to Voss, their oldest client has been running live for about 9 months. The funding is good – about $10 million from private and angel investors – so it looks like Smart Action and Adaptive AI will be expanding in the future.
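For a rough sense of what that pricing model means, here’s some back-of-the-envelope arithmetic. The call volume and per-minute rate below are invented for illustration – only the 15–25% ratio comes from the article.

```python
# Illustrative numbers only: a hypothetical per-minute rate and monthly
# call volume, with the VA priced at ~20% of human cost (the midpoint of
# the 15-25% range mentioned above).

def monthly_cost(minutes, rate_per_minute):
    return minutes * rate_per_minute

human_rate = 1.00              # hypothetical $/minute for a human call center
va_rate = 0.20 * human_rate    # VA at roughly 20% of that

minutes = 50_000               # hypothetical monthly call volume
human = monthly_cost(minutes, human_rate)
va = monthly_cost(minutes, va_rate)
print(f"Human center: ${human:,.0f}/mo; VA: ${va:,.0f}/mo; savings: ${human - va:,.0f}/mo")
```

With zero up-front development cost, the client’s only question is whether per-minute billing at a fifth of the human rate beats their current call-center bill – which is an easy sell.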
Which is exciting for Voss, but honestly, I’m less interested in Smart Action as a company than as a research engine. You see, when a Virtual Agent is working in the real world, it has to adapt to new situations. That’s part of what makes it an AI (narrow or otherwise). Innovations made by the VA can then be incorporated back into Adaptive AI’s core AGI research. Likewise, developments in the AGI research can be used to update the VAs. What you have here is a developmental loop, and that speaks to the possibility of accelerated growth. Eventually Voss wants to automate the process of updating Virtual Agents from AGI research and vice versa (right now such improvements are performed manually). According to Voss, the speech recognition and telephony parts of the VAs are from an external company, so don’t look for breakthroughs there. Instead, the VA-AGI loop is likely to focus on innovations in goal orientation, contextual thinking, and decision making. These are all important parts of getting computers to become true artificial intelligences.
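The loop Voss describes can be sketched in a few lines – again, this is my own illustration of the idea, not Adaptive AI’s architecture. Field improvements made by deployed agents flow back into a shared core, and core upgrades flow back out to every agent:

```python
# Minimal sketch of the VA-AGI developmental loop described above
# (purely illustrative; class names and the "skills" model are invented).

class Core:
    """Stands in for Adaptive AI's core AGI research."""

    def __init__(self):
        self.version = 1
        self.skills = set()

    def absorb(self, skills):
        """Fold skills learned in the field back into the core."""
        new = skills - self.skills
        if new:
            self.skills |= new
            self.version += 1


class DeployedAgent:
    """Stands in for one narrow Virtual Agent in the field."""

    def __init__(self, core):
        self.core_version = core.version
        self.skills = set(core.skills)

    def learn_in_field(self, skill):
        self.skills.add(skill)

    def sync(self, core):
        core.absorb(self.skills)       # VA -> AGI research
        self.skills |= core.skills     # AGI research -> VA
        self.core_version = core.version


core = Core()
a = DeployedAgent(core)
b = DeployedAgent(core)
a.learn_in_field("confirm-appointment")
a.sync(core)
b.sync(core)  # b now benefits from what a learned in the field
```

Today, Voss says, each `sync` step is performed manually by engineers; automating it is what would turn this from a slow transfer of lessons into the accelerating loop he’s aiming for.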
If the jump from a phone call answering service to a true AGI seems like an incredible leap, it may help to know who we’re dealing with here. Peter Voss is a well known figure in AI circles. He spoke at the Singularity Summit in 2007 as well as at a BIL Conference last year. He also regularly writes essays about the future and AGI on his webpage. While he admits that his AGI research has largely been on hold for the past few years as Smart Action came up to speed, Voss hopes that additional funding will allow Adaptive AI to separate out its personnel again and get back to the team’s original focus. The path to true artificial general intelligence is likely to be a winding one. It’s unclear how long it will take to create (or raise?) learning machines capable of passing a Turing Test and interacting with us on a social level. Still, there’s little doubt in my mind that we will eventually have computers that equal and even exceed the human brain. And they’ll be doing a lot more than just taking our calls.