Open Glass Overlays Digital Information on the Real World With Google Glass


Google Glass is augmented reality—and then again, it isn’t. True augmented reality would take in your view and attach applicable information to any object you see, maybe the name of the person you’re currently talking to and a few key points from their Facebook bio. Or the prices of the pastries you’re perusing at the local café.

Think Terminator or Tony Stark’s Iron Man suit.

As you might expect, augmented reality is a tough nut to crack. It requires a rich database of location-specific information and software trained to recognize scenes from images or streaming video and send the relevant information back to the user—all fast enough that the user notices minimal lag.

While Google Glass doesn’t come stock with augmented reality, Brandyn White and Andrew Miller’s Open Glass has a rudimentary prototype. It works like this: a user—perhaps, in the future, someone akin to a driver of a Google Maps car, or a member of the crowd—catalogues various scenes and uploads the information to a database for future use.

Now, a casual user happens upon the same scene and is curious about what’s in front of them. If their smartglasses are pointing in the right direction and there’s a match, the program sends the appropriate information back, either overlaid on the scene or whispered in their ear.
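The catalogue-then-match loop described above can be sketched in a few lines. This is a hypothetical illustration, not Open Glass’s actual code: the feature vectors, the `SceneDatabase` class, and the cosine-similarity threshold are all stand-ins for whatever image descriptors and matching the real system uses.

```python
import math

def cosine_similarity(a, b):
    """Similarity between two feature vectors, in [0, 1] for non-negative inputs."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class SceneDatabase:
    """Toy stand-in for the crowd-sourced scene database."""

    def __init__(self, threshold=0.9):
        self.entries = []           # list of (descriptor, annotation) pairs
        self.threshold = threshold  # minimum similarity to count as a match

    def catalogue(self, descriptor, annotation):
        """A contributor uploads a scene descriptor and its annotation."""
        self.entries.append((descriptor, annotation))

    def lookup(self, descriptor):
        """Match a viewer's frame against the database; None if nothing is close."""
        best, best_sim = None, 0.0
        for stored, annotation in self.entries:
            sim = cosine_similarity(descriptor, stored)
            if sim > best_sim:
                best, best_sim = annotation, sim
        return best if best_sim >= self.threshold else None

# Usage: catalogue two scenes, then look up a slightly different frame.
db = SceneDatabase()
db.catalogue([0.9, 0.1, 0.3], "Cafe counter: croissants $3, scones $2")
db.catalogue([0.1, 0.8, 0.5], "Museum entrance: opens 9am")
print(db.lookup([0.88, 0.12, 0.31]))  # matches the cafe scene
print(db.lookup([0.0, 0.0, 1.0]))     # no scene close enough: None
```

In a real pipeline the descriptors would come from image features computed on camera frames, and the linear scan would be replaced by an approximate nearest-neighbor index so lookups stay fast as the database grows—the latency problem the Open Glass team is wrestling with.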

The Open Glass approach is clearly still in the early stages. Latency is significant, the information overlays are simple sketches, and the field of view is still limited. (To expand the field of view, the team is experimenting with two pairs of glasses instead of one.)

Of course, it could take a little time to perfect the technique and more to map and annotate our environment. But as the approach is fleshed out—Open Glass posted their code to GitHub—something like this might begin creeping into the real world.

Future retail stores could use the technology to price items, promote sales, and answer questions. Or drivers wearing smartglasses might see virtual traffic signs and information like “Slow down. Accident ahead. Estimated delay, fifteen minutes.”

But the most powerful early uses of augmented reality may be for people with disabilities. The visually impaired, for example, could benefit from whispered descriptions of various items in a room or directions to avoid obstacles walking down the road—like digital braille and a virtual guide dog.

Image Credit: Open Glass/YouTube

Jason Dorrier
Jason is editorial director of Singularity Hub. He researched and wrote about finance and economics before moving on to science and technology. He's curious about pretty much everything, but especially loves learning about and sharing big ideas and advances in artificial intelligence, computing, robotics, biotech, neuroscience, and space.