Computer Learns Sign Language – Next Stop World Domination

Why does it have to take so long for the computers to rise up and kill us? The suspense is just maddening. Four Terminator movies and a television series spin-off, and it still hasn’t happened in real life? Come on, these are supposed to be cunning, learning human-exterminators. Well, at least we can take solace in the fact that computers can indeed learn. A group of researchers from the University of Oxford, working with the University of Leeds, has created a computer program that can learn British Sign Language just by watching television. The official paper describing their research can be found here.

Now before you jump out of your seat, running for the Soviet-era bomb shelter that you lovingly reconstructed for the express purpose of a computer revolution in some sort of sick and twisted dystopian future, there are a few caveats that make this a bit less sinister. The television programming that the computer watches includes not just the show, but subtitles as well as a human signer in the corner of the screen. The computer tracks the hand and arm movements of the signer and cross-references those motions against the words in the subtitles. Then, after watching many sequences, the computer can correlate the common signs to words. Voilà, sign language is learned.
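To give a rough feel for that correlation step, here is a minimal sketch in Python. It is not the researchers' actual method (the paper uses far more sophisticated statistical learning over video features); the clip data, sign ids, and scoring rule below are all hypothetical, standing in for the basic idea of counting which hand motions keep showing up whenever a target word appears in the subtitles.

```python
# A minimal, illustrative sketch (not the researchers' actual code):
# count how often each candidate hand-motion pattern co-occurs with a
# target subtitle word, and pick the pattern that stands out.
from collections import Counter

# Hypothetical toy data: each TV clip is a (subtitle_words, observed_sign_ids) pair,
# where a sign id stands in for a cluster of similar hand/arm trajectories.
clips = [
    ({"the", "golf", "open"}, ["sign_17", "sign_03"]),
    ({"golf", "club"},        ["sign_17", "sign_42"]),
    ({"weather", "rain"},     ["sign_08"]),
    ({"golf", "course"},      ["sign_17"]),
]

def best_sign_for(word, clips):
    """Return the sign id most strongly associated with `word`."""
    with_word = Counter()     # sign counts in clips whose subtitles contain the word
    without_word = Counter()  # sign counts in all other clips
    for subtitles, signs in clips:
        target = with_word if word in subtitles else without_word
        target.update(signs)
    # Score = appearances alongside the word minus appearances elsewhere,
    # so motions that show up everywhere (filler gestures) are penalised.
    scores = {s: with_word[s] - without_word[s] for s in with_word}
    return max(scores, key=scores.get)

print(best_sign_for("golf", clips))  # -> "sign_17"
```

The real system has to do this with noisy video, imprecise subtitle timing, and signs that depend on context, which is why ten hours of footage and a lot of statistics are needed rather than a simple tally.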

Now this brute-force method of learning requires an impressive 10 hours of footage for the computer to analyze. The researchers set out to deduce the signs for 210 words, but only wound up getting 136 correct, about 65 percent. The researchers are shrugging off the modest success rate, pointing to the contextual ambiguities and complexities of sign language. They are simply ecstatic that they got those 136 right.

Now, many would look at this and blurt out the old adage about monkeys in a room writing Shakespeare. Even worse, it would take significantly less time for them to write 65 percent of Shakespeare. But the beauty is in the programming. Researchers needed to first pinpoint the location of the hand. This might not sound hard for the average human, who can look at their own hand and find something similarly fleshy and nubby on another human, but to a computer the concept of a hand is a lot more foreign than the idea of 0 (which took humans quite a long time to work out). So the software first tracks the easier-to-find arms, as shown by the two boxes over the photo resembling some sort of real-life Terrance and Phillip mock-up. The next logical step is to go to the end of the arm and find the fleshy things. That’s a hand. Only then could the computer begin to analyze the sign language.
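Here is a hedged, purely geometric sketch of that "find the arm, then walk to its end to find the hand" idea. The box coordinates, the hand-size guess, and the left/right convention are all assumptions for illustration; the actual system tracks arms through video with far more elaborate models before zeroing in on the hands.

```python
# Illustrative only: estimate where the hand sits, given a detected arm box.

def hand_region(arm_box, shoulder_side="left", hand_size=40):
    """Estimate a square hand region at the far end of an arm bounding box.

    arm_box: (x, y, w, h) of the detected arm, with (x, y) the top-left corner.
    shoulder_side: which side of the box attaches to the body; the hand is
                   assumed to sit at the opposite end of the arm.
    """
    x, y, w, h = arm_box
    if shoulder_side == "left":        # shoulder on the left -> hand on the right
        hx = x + w - hand_size
    else:                              # shoulder on the right -> hand on the left
        hx = x
    hy = y + h // 2 - hand_size // 2   # vertically centred on the arm
    return (hx, hy, hand_size, hand_size)

# Example: a signer's arm detected as a wide box; the hand should sit at the
# far (right) end of it.
print(hand_region((100, 200, 180, 60), shoulder_side="left"))
# -> (240, 210, 40, 40)
```

Only once the hands are reliably located, frame after frame, can the software start matching their motions against the subtitle words.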

Researchers are hoping that this technology might make it easier to automate sign language and bring more television programming to the hearing impaired. Earlier attempts at using computers to sign have been dismissed as clunky and hard to follow, so this project’s close analysis of the language could help researchers develop a more natural, understandable automated signer. After that is perfected, who knows? Perhaps computers can finally decode all those hand signals that are prevalent in baseball and football so we, the viewers, can have a halfway decent idea of what is going on.

Andrew Kessel
Andrew is a recent graduate of Northeastern University in Boston, MA with a Bachelor of Science in Chemical Engineering. While at Northeastern, he worked on a Department of Defense project intended to create a product that adsorbs and destroys toxic nerve agents, and also worked as part of a consulting firm in the fields of battery technology, corrosion analysis, vehicle rollover analysis, and thermal phenomena. Andrew is currently enrolled in a Juris Doctor program at Boston College School of Law.