Universal Translators in the Next Few Years?


"No hablo español."

The universal translator is a sci-fi staple: Star Trek made it famous. Star Wars had C-3PO. The Hitchhiker’s Guide had the Babel fish. Stargate and Doctor Who both had some variation of a voice-to-voice translating device. In some ways, the future is already here: Google Translate can turn around a workable text translation almost instantly (automatically in Chrome), and it’s letting the multilingual web talk to itself. Word Lens will even translate text you see in real time as augmented reality on your smartphone. Text translation is all well and good, but when will the holy grail arrive? When will voice-to-voice translation become a reality? When can you finally toss your Rosetta Stone software?

Actually, it’s already here – it’s just not as smooth as you might have hoped (yet). All the basic pieces of software necessary for a universal translator have already arrived: speech recognition (voice-to-text), language translation (text-to-text), and speech synthesis (text-to-voice). In fact, these components are already being employed in a number of sectors using current technology. Granted, the process is pretty clunky, but it’s here and it works.
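To make the three-stage pipeline concrete, here’s a minimal sketch in Python. The `recognize`, `translate`, and `synthesize` functions are hypothetical stand-ins (a real system would call an actual speech-recognition engine, a machine-translation service, and a text-to-speech engine); the point is only the shape of the chain.

```python
# Toy sketch of a voice-to-voice translator: three stages run in sequence.
# All three stage functions below are stubs, not real engines.

PHRASEBOOK = {"hello": "hola", "thank you": "gracias"}  # hypothetical mini-dictionary


def recognize(audio: bytes) -> str:
    """Speech recognition (voice-to-text). Stub: treat the 'audio' as encoded text."""
    return audio.decode("utf-8")


def translate(text: str) -> str:
    """Language translation (text-to-text). Stub: a tiny phrasebook lookup."""
    return PHRASEBOOK.get(text.lower(), text)


def synthesize(text: str) -> bytes:
    """Speech synthesis (text-to-voice). Stub: return the text as encoded 'audio'."""
    return text.encode("utf-8")


def universal_translator(audio: bytes) -> bytes:
    # The stages run strictly one after another, so each stage's latency adds
    # to the total delay the user hears.
    return synthesize(translate(recognize(audio)))


print(universal_translator(b"hello"))  # b'hola'
```

Because the stages are sequential, speeding up any one of them helps, but the overall delay can never drop below the sum of the three.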

The Army has been using systems developed under DARPA’s Spoken Language Communication and Translation System for Tactical Use (TRANSTAC) program to help soldiers communicate in foreign countries. One such system, IraqComm, was developed in conjunction with SRI International and translates back and forth between English and colloquial Iraqi Arabic. Check out the system in action:

I also found this video of Ray Kurzweil demoing a basic version of a translator a few years ago:

From these videos alone, you can get a good idea of what needs to improve. First, the translation process isn’t nearly fast enough to hold a fluid, natural conversation. This particular hurdle shouldn’t be a difficult one to overcome; faster computers will be able to run the recognition, translation, and synthesis algorithms much more fluidly. However, there is an upper limit to the speed these systems can achieve: the recognition software needs to hear most of a sentence before it can pass a text copy on to the translator. Languages don’t correspond to one another word-for-word, so a truly simultaneous translator isn’t possible. At best, we should expect news-correspondent delays.
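The word-order problem is easy to demonstrate with a toy example (the two-entry dictionary and the adjective rule here are hypothetical, just for illustration): Spanish puts adjectives after nouns, so a translator streaming word-by-word commits to the wrong English order before the sentence is finished.

```python
# Why word-by-word streaming translation fails: word order differs across
# languages, so the translator must buffer the phrase before reordering.

WORDS = {"la": "the", "casa": "house", "blanca": "white"}  # hypothetical toy dictionary
ADJECTIVES = {"white"}  # English words that must move in front of their noun


def word_for_word(sentence: str) -> str:
    """Emit each word's translation immediately, in source order."""
    return " ".join(WORDS[w] for w in sentence.split())


def whole_phrase(sentence: str) -> str:
    """Wait for the whole phrase, then reorder noun-adjective pairs."""
    tokens = [WORDS[w] for w in sentence.split()]
    if len(tokens) >= 2 and tokens[-1] in ADJECTIVES:
        tokens[-2], tokens[-1] = tokens[-1], tokens[-2]
    return " ".join(tokens)


print(word_for_word("la casa blanca"))  # 'the house white' -- wrong order
print(whole_phrase("la casa blanca"))   # 'the white house'
```

The correct output is only available once the final word has arrived, which is exactly why some delay is unavoidable no matter how fast the hardware gets.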

Second, the translation is a bit rough, and doesn’t always catch the finer points of what was said. Again, this is technology that has been improving over time (and nowadays, text translators can almost always capture the general idea in their translation). The newer era of translators – Google Translate included – is a significant leap from the generations that came before. It wouldn’t surprise me to see these systems picking up slang, idioms, and the like as their algorithms are refined.

Finally, the automated voice sounds mechanical and awkward. I find this to be true of all the speech software I’ve encountered, and it tends to bother me (however, I have friends who listen to PDFs this way and don’t mind it). Certainly this kind of software is improving as well, but I have yet to hear speech software that sounds completely natural. This might actually be the last hurdle to be overcome. It reminds me of how you can’t lock eyes with someone over a webcam because the cameras aren’t behind the monitor: we’re always looking slightly to the side. The ideal speech translator would reproduce your own voice, as if you spoke that language, but needless to say this is a long way off. There might also be an uncanny valley along the way.

These three pieces are now being integrated more seamlessly, and the hardware is already here to support the improving software. Imagine using your smartphone and a Bluetooth headset to translate in real time in a foreign country. I doubt it’ll be absolutely perfect in the foreseeable future, but early versions are already coming. Earlier this year Google told The Times it was working on such a package, and hopes to have something that will “work reasonably in a few years time.”

We can add one more job to the robot-replacement endangered list: translators.

Discussion — 13 Responses

  • Gregory December 29, 2010 on 4:47 pm

    At IFA 2010, Google already demoed real-time (though not simultaneous) speech translation.

  • Cmleite December 29, 2010 on 5:14 pm

    You forgot to mention the REAL big issue: CONTEXT analysis. Without context, the translator can’t decide on the meaning of a word.

  • Ascendant December 29, 2010 on 6:50 pm

    Translating basic conversation and short sentences can indeed be done to some extent by programs like Google Translate, but some languages are far out of their reach. Google almost always renders Japanese, for example, as gibberish. From my experience learning Japanese, I can see that the problem is that non-Romance languages (and even those to some degree) do not correspond one-to-one with English. The process of thought and connotation is completely different in some languages than it is in English. Extremely rudimentary translation of such languages is possible in the near future, but nuanced translation, translation with context, or translation with any kind of subtlety will, I suspect, have to wait for human-level intelligence. The best translations require understanding of both languages, cultures, and systems of para-language, in addition to knowledge of connotation and subtlety.

    In areas other than speech, I think translation can sometimes take even more work and intelligence. I don’t think a book or paper could be translated well by anything less than a human-level intelligence either.

  • Anonymous December 29, 2010 on 7:40 pm

    I don’t know how much you have used Chrome, but its ability to “translate” is questionable, and personally I find it an insult to the hard-working people who actually translate.

    • Tom Mornini December 30, 2010 on 9:06 am

      It’s an insult to translators the way robot welders were an insult to Detroit auto workers.

      Today, cars are far higher quality, and the auto workers are unemployed.

      An insult is personal, the advance of technology isn’t.

      • Ascendant Tom Mornini December 30, 2010 on 7:55 pm

        I don’t think the issue is that translation technology is an insult to human translators. Machine translators will eventually become as good as humans at the task. Rather, saying that current or near-future machine translators are anywhere near as good as human translators is an insult to the latter.

    • Fullerjamin January 4, 2011 on 10:51 am

      I work in the Import/export business for a Japanese company with an office based in London. I regularly use Google to translate emails from Japanese to English. The translation isn’t perfect, but I haven’t come across an email yet that I couldn’t understand. This technology never fails to amaze me.

  • Adam Computes December 31, 2010 on 11:32 pm

    I work for tazti Speech Recognition by Voice Tech Group, and the amount of improvement in speech recognition engines over the past 5 years has been remarkable. We want to invite everyone to try out our free speech recognition software, which includes an API for speech recognition so you can mash up tazti with websites, run apps like Photoshop via speech, or even use a command-line capability. You can also play PC games by talking to your PC. The software includes a lot of features, such as dictation and voice search, that remain free to use forever, and a few features that become premium content after a 15-day trial. We just rolled out this new version, so we would appreciate some beta user feedback. http://www.tazti.com

  • LetsGoViral January 2, 2011 on 12:28 pm

    Nothing still beats being able to speak the specific language. Also, I’d like to see how they translate such languages as Chinese, Japanese and Korean.

  • Richard Graham January 19, 2011 on 12:07 pm

    A far simpler solution was proposed, in fact “guaranteed,” some 140 years ago: the teaching of a universal auxiliary language planet-wide. You would learn two languages from the beginning, never to be lost in idiom, context or meaning. Adopting a UAL would free all these folks for more important and urgent tasks!

    • narsey Richard Graham March 20, 2013 on 5:23 am

      “never to be lost in idiom, context or meaning”?

      Even if people learn this “UAL”, people will augment and change the language to reflect culture. Then you’re back at square one. See the divergence of Portuguese between the continental variety and Brazil, or feel okay not knowing why one of the varieties will laugh when you call a football jersey a camisola!

  • SrixonValdez January 19, 2011 on 6:28 pm

    I’d like to see them translate the African languages where people ‘cluck,’ ‘whistle,’ or make other noises to communicate. I’ve always wanted to know what they are saying.

  • narsey March 20, 2013 on 5:16 am

    This article may get the computer science right, but it’s extremely clear that the author doesn’t understand how translators do their work. Translators don’t translate linguistics. They translate culture, which is what language truly embeds. Culture shifts too quickly, and depends too much on a population sample, to ever be completely and 100% accurately translated by a machine, no matter how fast our computers get (there’s always an upper bound on the ever-expanding volume of data that can be efficiently evaluated). Translation careers aren’t in danger at all. They will merely change to incorporate the new technology and improve already existing assisted-translation technology. Many popular science articles like this were written in the late ’70s and early ’80s surmising the end of the musician with the advent of synths and computer-generated music. How wrong they were.