Personal drones are all the rage. Though hobbyist RC helicopters and planes have been around for years, today’s multi-rotor vehicles are easier to pilot thanks to GPS and a suite of onboard chips and sensors. They’re self-stabilizing, leaving the pilot the fun and relatively simple job of choosing which general direction to fly.
Autonomous as today’s consumer drones are, however, they could be better if they used computer vision to “see” their surroundings. Computer vision builds a model of a system’s immediate environment using cameras and infrared sensors.
A familiar example is Microsoft’s Kinect, which translates gamers’ physical movements into a video game. But such technology isn’t ideal for drones. Because it’s intended to be stationary, Kinect can afford to be bulky. And it is.
But computer vision hardware is getting more compact. So compact, in fact, that this year Google’s Project Tango crammed all the necessary components into a (moderately hefty) Android smartphone. Tango uses a depth sensor, motion-sensing camera, and two Movidius computer vision processors to construct a 3D model of its surroundings from a quarter million measurements per second.
Tango may signal a shift toward more widespread use of computer vision in drones and, more generally, in robots. For example, University of Pennsylvania professor Vijay Kumar recently gave computer vision to a Parrot AR Drone merely by strapping a Tango smartphone to it and hooking the two systems together.
In a video demonstrating the setup, one of Kumar’s PhD students, Giuseppe Loianno, pushes the drone around to show how it autonomously returns to its previous position. Later, he sends it along a trajectory and again pushes it around.
The drone consistently returns to course after each shove.
Why is this cool? The drone isn’t being piloted from the ground at all. The laptop only sends it a flight path; the drone follows that path autonomously, using the brains and sensors of its onboard Tango smartphone to correct course as necessary. Though the video doesn’t show the drone “seeing” and dodging obstacles, such dynamic flight is theoretically possible.
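The course-correcting behavior described above can be sketched in a few lines. This is purely an illustrative toy, not Kumar’s actual control code: it assumes a simple proportional controller where a pose estimate (standing in for Tango’s visual-inertial output) is compared against a target waypoint, and the drone is nudged back toward it each timestep. The function names and gain value are invented for the example.

```python
def correction(current, target, gain=0.5):
    """Hypothetical proportional controller: return a velocity
    command that nudges the drone back toward its waypoint."""
    return tuple(gain * (t - c) for c, t in zip(current, target))

def simulate_shove(target, shoved_to, steps=50, dt=0.1):
    """Toy simulation: after a shove displaces the drone to
    `shoved_to`, repeatedly apply the correction and watch it
    converge back to `target` (an (x, y, z) position)."""
    pos = list(shoved_to)
    for _ in range(steps):
        vel = correction(tuple(pos), target)
        pos = [p + v * dt for p, v in zip(pos, vel)]
    return tuple(pos)
```

After a simulated shove from (0, 0, 1) to (1, 1, 1), the position error shrinks geometrically each step, mimicking how the real drone drifts back onto its trajectory after each push. The actual system is far richer, fusing depth and motion data into a full 3D pose, but the feedback-loop idea is the same.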
That said, the technology isn’t perfect yet. The system is doing a lot of work on a small battery with limited processing power. Those experimenting with Tango report short battery life, limited graphics performance, and overheating. Kumar notes he can only fly the Tango drone for five minutes or so; after a point, the system overloads and shuts itself down.
But here’s the thing. We’re talking about a relatively cheap, widely available drone and a smartphone prototype Google hopes to commercialize. And Kumar says getting the two devices to work together took hours of programming, not weeks.
Tango’s potential only grows as Google refines the tech. The system is cool in smartphones, where it can power augmented reality, indoor mapping, and custom tailoring, but affordable, miniaturized computer vision may prove just as groundbreaking in robots.
In the coming months, we expect Tango may be strapped to more than just drones. Google’s delivered some 200 of the prototype phones to researchers and companies.
While many of these may focus on what they can do with a 3D-sensing smartphone, no doubt others will be velcroing and duct-taping Tango to drones, robots, and more.
Image Credit: Vijay Kumar/YouTube