The internet is a wealth of information, but you have to know what you're looking for. If all you know is a basic category, like that weird-looking modern chair at your buddy's office, you're out of luck. Search "modern chair" and prepare to be bombarded with general definitions of chairs, the history of chairs, or a mishmash of unlabeled chair pictures.
Really what you want are all the specific variations of chairs, browsable by image, so you can recognize your target with a quick scan. From there, you can go on to find out its history, inventor, price, and more.
A team of scientists from the University of Washington and the Allen Institute for Artificial Intelligence is developing a program that teaches itself all there is to know about a concept and presents the findings in pictures and phrases. The program is called Learning Everything About Anything, or LEVAN.
To study a concept, LEVAN scans millions of books and images online, learning as many visual variations as it can. For example, when learning about the concept "horse," the algorithm keeps phrases like "jumping horse" or "eating horse" but discards non-visual phrases like "my horse" and "last horse."
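As a rough illustration of this filtering step, here is a minimal sketch in Python. Everything in it is an assumption for illustration: the function name, the modifier stoplist, and the frequency cutoff are hypothetical. The actual system judges "visualness" from image data itself rather than from a hand-written word list; the stoplist here merely stands in for that judgment.

```python
# Hypothetical sketch of LEVAN-style phrase filtering (not the real implementation).
# Modifiers that describe ownership or ordering rarely change how a concept looks.
NON_VISUAL_MODIFIERS = {"my", "your", "his", "her", "the", "last", "first"}

def extract_candidate_phrases(ngram_counts, concept, min_count=10):
    """Keep '<modifier> <concept>' bigrams that plausibly name a visual variation.

    ngram_counts: dict mapping a bigram phrase to its frequency in the text corpus.
    """
    kept = []
    for phrase, count in ngram_counts.items():
        words = phrase.lower().split()
        if len(words) != 2 or words[1] != concept:
            continue  # only consider two-word phrases ending in the concept
        if words[0] in NON_VISUAL_MODIFIERS:
            continue  # possessives/ordinals are non-visual, so discard them
        if count < min_count:
            continue  # too rare to gather enough training images for
        kept.append(phrase)
    return kept

counts = {
    "jumping horse": 120,
    "eating horse": 45,
    "my horse": 900,
    "last horse": 80,
    "rare horse": 3,
}
print(extract_candidate_phrases(counts, "horse"))
# keeps the visual variations ("jumping horse", "eating horse") and drops the rest
```

Each surviving phrase would then be used to query for images, coupling the phrase to a visual model of that variation.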
“It is all about discovering associations between textual and visual data,” said Ali Farhadi, a UW assistant professor of computer science and engineering. “The program learns to tightly couple rich sets of phrases with pixels in images. This means that it can recognize instances of specific concepts when it sees them.”
The software works well for concrete concepts like "airplane" or "chair," but less well for vaguer ones like "innovation" (which turns up a large number of images of people talking, often in suits). The list of known concepts is finite but growing. Since its March launch, LEVAN has tagged over 13 million images with 65,000 phrases.
“Major information resources such as dictionaries and encyclopedias are moving toward the direction of showing users visual information because it is easier to comprehend and much faster to browse through concepts,” said Santosh Divvala, a research scientist at the Allen Institute and affiliate scientist at UW in computer science and engineering.
However, Divvala points out that this approach is limited by the need for human workers to sort images. The best part about LEVAN is that, once on the scent of a concept, it works the rest out all by itself. Though it currently takes a while to finish learning (12 hours of compute time per concept), that's a vast improvement over manual human curation.
Learn more at Science Daily, “New computer program aims to teach itself everything about any visual concept,” or try out LEVAN for yourself here.
Image Credit: LEVAN/YouTube