The University of Pennsylvania has been elevating students’ understanding of literature since 1740…and robots’ since 2011. UPenn’s GRASP Lab has taught one of Willow Garage’s PR2 robots how to read out loud: the man-sized research platform can locate text in its environment and convert it to speech. In the video below, graduate student Menglong Zhu walks the PR2 through an impressive variety of fonts in real-world settings to show off how well the new literacy code works. Posters, emergency signs, even handwriting: the GRASP PR2 reads them all, even at strange angles and orientations. It’s pretty damn impressive to see a full-sized robot rolling around and reading what it sees. Makes you wonder when it will be invited to storytime over at the local elementary school.
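To give a rough sense of what’s happening under the hood, here’s a minimal sketch of that locate-text-then-speak loop built from off-the-shelf tools. To be clear, this isn’t the GRASP Lab’s code; the choice of OpenCV, the Tesseract OCR engine (via pytesseract), and the pyttsx3 speech library is purely an assumption for illustration.

```python
# Illustrative sketch only -- not the GRASP Lab code. It approximates the same
# detect-text-then-speak cycle using off-the-shelf libraries (OpenCV,
# pytesseract, pyttsx3), all of which are assumptions on my part.
import cv2
import pytesseract
import pyttsx3

def read_frame_aloud(frame):
    """Find text in a single camera frame and speak whatever is recognized."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    # Light binarization helps the OCR engine cope with posters and signs
    # photographed at odd angles or under uneven lighting.
    _, binary = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    text = pytesseract.image_to_string(binary).strip()
    if text:
        engine = pyttsx3.init()
        engine.say(text)
        engine.runAndWait()
    return text

if __name__ == "__main__":
    capture = cv2.VideoCapture(0)  # any webcam stands in for the PR2's head camera
    ok, frame = capture.read()
    if ok:
        print(read_frame_aloud(frame))
    capture.release()
```

The real system has to work much harder to find text in cluttered scenes and at odd orientations, but the basic detect, recognize, speak cycle is the same idea.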
Zhu does a great job demonstrating all the different real-world text the PR2 can handle in the following video…but it does get a bit repetitive. Feel free to skip around.
The PR2 is far from the first literate robot; automated machines with optical character recognition (OCR) have been around for decades. More recently, 2009 saw a bot in Japan that could read books, and last year a different machine in the UK did the whole “roaming and reading” routine. There are two things that set the work at the GRASP Lab apart, however. First, the PR2 itself. This isn’t a dedicated reading robot or a small cart on wheels with a camera and OCR; it’s a life-sized, general-purpose robot that can accomplish a whole range of different activities. Instead of building a robot to read books, GRASP took an existing robot and taught it to be literate.
Secondly, and perhaps more importantly, the results produced by Zhu (working with postdoc Kosta Derpanis and Professor Kostas Daniilidis, by the way) are open source. As with all the PR2 robots Willow Garage has given to research institutions like UPenn, the work here will be freely shared as open source code through the Robot Operating System (ROS) libraries. So it’s not just the PR2 that benefits from Zhu’s work. One day, ROS-enabled machines everywhere could use this code to make themselves literate. It’s like Hooked on Phonics for robots.
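For a flavor of what “sharing the work through ROS” could look like in practice, here’s a hypothetical sketch of a literacy capability wrapped as a simple ROS node: it subscribes to a camera topic, runs OCR on each frame, and publishes whatever it reads so a speech node could say it aloud. The topic names and the use of pytesseract are assumptions for illustration, not details of the actual GRASP package.

```python
#!/usr/bin/env python
# Hypothetical sketch of a literacy capability wrapped as a ROS node.
# Topic names and the pytesseract OCR engine are assumptions, not the GRASP code.
import rospy
from sensor_msgs.msg import Image
from std_msgs.msg import String
from cv_bridge import CvBridge
import pytesseract

class TextReaderNode(object):
    def __init__(self):
        self.bridge = CvBridge()
        # Publishes recognized text so a downstream speech node can say it.
        self.text_pub = rospy.Publisher('/recognized_text', String, queue_size=10)
        rospy.Subscriber('/camera/image_raw', Image, self.on_image)

    def on_image(self, msg):
        # In practice you would throttle this; OCR on every frame is expensive.
        frame = self.bridge.imgmsg_to_cv2(msg, desired_encoding='bgr8')
        text = pytesseract.image_to_string(frame).strip()
        if text:
            self.text_pub.publish(String(data=text))

if __name__ == '__main__':
    rospy.init_node('text_reader')
    TextReaderNode()
    rospy.spin()
```

Wrapping a capability as a node like this is part of why ROS code travels so well between robots: any machine that publishes camera images could, in principle, plug it in.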
I’m really interested to see how this work gets incorporated into other projects. We’ve seen how a single innovation, the adaptation of the Kinect 3D sensor, has led to an explosion of ideas for ROS. Might a reliable literacy program for the PR2 produce similar results? It’s too soon to tell, but the possibility alone highlights how powerful an accelerant open source robotics can be. Today we’re teaching robots how to read. Tomorrow maybe they put what they read to good use. …That or they get distracted by romance novels. You never know with robots.
[screen capture and video credit: DreamDragon1988 (Menglong Zhu)]
[source: ROS]