Imagine looking at someone and instantly being fed a visual and auditory stream of information about them: their email address, education, marital status, interests, ethnicity, and so on. Imagine using hand gestures, Minority Report style, to take photos, draw images, and sort information. These capabilities and more recently wowed the audience at the annual TED conference in Long Beach, CA, where MIT researchers publicly unveiled a new technology, dubbed “SixthSense,” for the first time.
Taking advantage of technological miniaturization, research student Pranav Mistry was fitted with several devices, including a wearable projector, a tiny camera, and a cell phone with wireless internet access, opening new doors to a more data-rich, enhanced human reality.
In one demonstration, Mistry simply looks at a boarding pass for a plane flight, and suddenly the gate for the flight and its on-time status are visually projected onto the ticket. In a related example, Mistry meets someone at a party, and information about the person, such as their blog address, interests, and occupation, is projected onto the person for Mistry to view.
A phone keypad can be projected onto any surface, such as the palm of a hand, instantly turning that surface into a touchscreen phone.
Gestures can be converted into computational actions. Drawing a circle on one’s wrist projects a virtual watch displaying the time onto the wrist. Forming the hands and fingers into a rectangular frame instructs the camera to take a photo of the scene in front of Mistry. Several photos taken in this fashion were later projected onto a wall and then sorted, enlarged, and rotated through a series of hand gestures in the air.
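At its core, this is a dispatch from recognized gestures to actions. As a rough illustration only, here is a minimal Python sketch of such a mapping; the gesture labels and handler functions are hypothetical and are not taken from the actual SixthSense software:

```python
# Hypothetical sketch: mapping recognized gestures to actions.
# Gesture names and handlers are illustrative, not SixthSense's real API.

from typing import Callable, Dict

def draw_watch() -> str:
    # Project a virtual watch at the detected wrist position.
    return "projecting watch face onto wrist"

def take_photo() -> str:
    # Trigger the wearable camera when a framing gesture is seen.
    return "capturing photo of the scene"

# Dispatch table: recognized gesture label -> action to perform.
GESTURE_ACTIONS: Dict[str, Callable[[], str]] = {
    "circle_on_wrist": draw_watch,
    "frame_with_fingers": take_photo,
}

def handle_gesture(label: str) -> str:
    action = GESTURE_ACTIONS.get(label)
    if action is None:
        return f"ignoring unrecognized gesture: {label}"
    return action()

if __name__ == "__main__":
    print(handle_gesture("circle_on_wrist"))     # projecting watch face onto wrist
    print(handle_gesture("frame_with_fingers"))  # capturing photo of the scene
```

The appeal of a table like this is that the same recognition front end can drive any number of actions, which is what lets a single wearable stand in for a watch, a camera, and a phone.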
The following video shows this exciting technology in action:
As stunning as these demonstrations are, they only scratch the surface of what will become a new digitally enhanced, augmented human reality in the coming years…
SixthSense represents an exciting paradigm shift in what human reality can and will be. It demonstrates a powerful bidirectional feedback loop: a person’s physical experience is fed into computing devices for analysis and storage, while at the same time analysis and information from computing devices and the internet are fed back into the person’s physical world.
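To make that loop concrete, here is a minimal sketch of the sense-analyze-project cycle the post describes, assuming a camera feed in and a projector overlay out. Every function name and the sample data below are placeholders invented for illustration, not SixthSense’s actual code:

```python
# Hypothetical sketch of the bidirectional loop: sense the physical world,
# analyze it digitally, and project information back onto it.
# All names and data below are stand-ins, not SixthSense's implementation.

import time

def capture_frame() -> dict:
    # Stand-in for grabbing a frame from the wearable camera
    # and recognizing objects in it.
    return {"objects": ["boarding_pass"]}

def lookup_info(obj: str) -> str:
    # Stand-in for querying the internet about a recognized object.
    return {"boarding_pass": "gate and on-time status"}.get(obj, "no info")

def project(obj: str, info: str) -> None:
    # Stand-in for the wearable projector overlaying info on the object.
    print(f"overlay on {obj}: {info}")

def augmented_reality_loop(iterations: int = 1) -> None:
    for _ in range(iterations):
        frame = capture_frame()                 # physical world -> digital
        for obj in frame["objects"]:
            project(obj, lookup_info(obj))      # digital -> physical world
        time.sleep(0.1)                         # pace the loop

if __name__ == "__main__":
    augmented_reality_loop()
```

The point of the sketch is the round trip: the same cycle that recognizes a boarding pass and projects its gate could, in principle, recognize a face and project a name.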
No longer will we go to a social event and fail to recall a person’s name. We will simply look at them, and their name, along with an array of other information, will be fed into our ears and eyes.
No longer will we look at a tree or a monument and wonder what it is. We will simply look at an object, and everything we ever want to know about it will be fed to us in a stream of video, audio, and even smells.
Recordings of every second of every day of our lives will be stored for later analysis, retrieval, and manipulation. Every conversation we ever have can similarly be archived and later retrieved.
Joysticks, steering wheels, and other hand-operated controllers can be replaced with more flexible hand gestures that adapt and change based on the abilities and needs of the user and of the device under control.
Many will downplay the importance of SixthSense by pointing out that many of its capabilities already exist in other applications, including life recording from the startup justin.tv and gesture control from Nintendo’s Wii. Yet such comparisons miss the point. Whereas justin.tv and the Wii represent narrow examples of human digital enhancement, SixthSense represents a game-changing paradigm shift by presenting a true bidirectional, multifunctional feedback loop between the physical world and the digital world, one that incorporates all five of the human senses. This is the underlying breakthrough from SixthSense that promises to augment human reality in fascinating and powerful ways.