With the amazing video recording and production systems commercially available, how hard can it be to create a realistic virtual reality environment? I mean, all we really need are some cameras, some computers, and a video screen, right? Well, Virtualization Gate, a new project from INRIA and Grenoble University in France, debuted at SIGGRAPH in early August, and apparently realistic VR is harder than it looks. Or maybe everything else we encounter, from cutting-edge video games to CGI films to high-definition projectors, just makes VR seem less real than we would like. Check out the Virtualization Gate demo video after the break and marvel at their big green room of goodness.
What Virtualization Gate does well is track the user’s motion. A system of multiple cameras, a PC cluster, and a head-mounted display (HMD) the size of Sputnik work in conjunction to place the user precisely in the virtual world. At 20 frames per second, the VR graphics actually move reasonably well in real time. This allows our demonstration gamer to kick over some urns, push around a virtual copy of himself, and even stare at his VR avatar in the mirror. Not bad, VGate, but not mind-blowing.
So credit to INRIA and Grenoble for assembling the best technologies at hand to produce what appears to be a fairly acceptable VR system. But I’m sorely disappointed. Not in this group, whom I applaud for making the attempt, but in the whole VR concept. I mean, do any of us really understand what we hope to accomplish with VR?
With augmented reality the purpose is much clearer: we add digital information to the real-world view to make it more useful, or even just cooler. Totally immersive VR, however, is not so easily defined. Do we want to touch as well as see and hear? If so, we’re going to need a system of haptics, maybe some version of the tactile hologram system also seen at SIGGRAPH. Do we need to smell? Taste? Should one’s sense of gravity, pain, body position, or time also be controllable?
One of the problems is that virtual worlds are outpacing virtual interfaces. When we watch a VR environment on a movie screen, it follows a single view, which allows for the maximum amount of complexity in the image. The same goes for sound. Add an interface, the HMD and cameras, and you’ve jumped the single-view track. Now your VR environment has to really work its physics engine and obey a set of interface rules that gobble up processing speed and limit the realism of the images. Movies and games are poised to cross the uncanny valley; VR seems destined to be mired within it.
I look at our earlier story about BrainGate, a device that monitors motor neurons to control cursors or mechanical devices, and I see far greater possibilities for VR than I do in VGate. Cameras and big green rooms seem almost childish compared to directly reading brain signals. Add in the analysis of mental states provided by fMRI brain scans, and that avenue seems a lot closer to producing a true VR interface than anything else on the market.
Why not harness the brain for processing as well as for interfacing? Every night, most of us have dreams that put VR to shame. Devices that could take advantage of the brain’s inherent ability to generate virtual worlds with changeable physics would be ideal. The concept may sound far-fetched, and I don’t really want to jump into some crazy discussion about lucid dreaming, but is it that much nuttier than thinking we’ll ever get believable VR from PCs and HMDs? Developing a truly immersive experience, one that allows the user to experience a virtual world as a real one, is going to be an epic journey. My guess is that traditional methods of virtual reality are on the wrong path. What do you think?