Jason Silva calls technologies of media “engines of empathy.” They allow us to look through someone else’s eyes, experience someone else’s story—and develop a sense of compassion and understanding for them, and perhaps for others more generally.
But he says, while today cinema is “the cathedral of communication technology,” looking to the future, there is another great medium looming—virtual reality.
Expanding on the possibilities embodied in the Oculus Rift, Silva envisions a future when we inhabit not virtual realities but “real virtualities.” A time when we discard today’s blunt tools of communication to cloak ourselves in thought and dreams.
It’s an electrifying vision of the future, one many science fiction fans have imagined. At present, we’re nowhere near the full digital duplication and manipulation of reality Silva describes. But if we don’t dream a thing, it’ll never come to pass.
Sometimes we can see the long-term potential of a technology and are awed by it, even though we don’t yet know how to make it happen. All new technologies begin in the mind’s eye like this. “We live in condensations of our imagination,” Terence McKenna says.
Realization can take years; the engineering process can fizzle and reignite—go through a roller coaster of inflated expectations and extreme disillusion. Eventually, we get close enough to the dream to call it a sibling, if not an identical twin.
So, what will it take to get to Silva’s real virtuality? Let’s take a (brief) stroll through the five senses and see how close we are to digitally fooling them.
Two items crucial to immersive visuals are imperceptible latency (that is, no delay between our head moving and the scene before us adjusting) and high resolution.
With a high-performance PC and LED- and sensor-based motion tracking, the Oculus Rift has the first one almost nailed for seated VR. As you move your head, the scene in front of you adapts almost seamlessly—as it would in the real world. This is why the Rift is so exciting: it not only makes such immersion possible, it does so affordably.
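At its core, this trick is simple geometry: read the head pose from the tracker, re-aim the virtual camera, and redraw, all within a few milliseconds. A minimal sketch of the idea (function names are illustrative, not any SDK’s; a real engine would use quaternions and GPU rendering):

```python
import math

def view_direction(yaw_deg, pitch_deg):
    """Convert a head orientation (from an IMU/tracker) into a
    unit view vector the renderer uses to re-aim the camera."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (
        math.cos(pitch) * math.sin(yaw),   # x: left/right
        math.sin(pitch),                   # y: up/down
        math.cos(pitch) * math.cos(yaw),   # z: forward
    )

# Each frame: read the tracker, update the camera, redraw.
# At a 90 Hz refresh rate, the whole loop must finish in ~11 ms
# to keep total motion-to-photon latency imperceptible.
forward = view_direction(0, 0)       # looking straight ahead
glance_left = view_direction(30, 0)  # head turned 30 degrees left
```

The hard part isn’t the math but the deadline: every millisecond of tracking, rendering, and display lag is felt as the world “swimming” behind your head movements.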
But what about resolution? It’s acceptable, but could be better.
Currently, the Rift uses a high-definition display—the latest prototype is rumored to be about 2,600 pixels across. You can’t see the dark edges separating pixels (as you could in the first developer kit) but the graphics still aren’t as sharp as they could be.
“To get to the point where you can’t see pixels, I think some of the speculation is you need about 8K per eye [the Rift’s screen is split in half] in our current field of view,” Oculus founder, Palmer Luckey, told Ars Technica. “And to get to the point where you couldn’t see any more improvements, you’d need several times that.”
He believes we can get to 8K per eye in the next decade. Televisions and mobile devices are the prime movers now, but depending on their success, VR systems may eventually be the motivation for developing the highest possible resolution screens.
Theoretically, how high? Recent research out of England shows the bleeding edge. Scientists there are developing flexible displays with pixels on the order of a few hundred nanometers across—150 times smaller than today’s.
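Luckey’s 8K figure can be sanity-checked with back-of-envelope arithmetic, assuming a roughly 100-degree horizontal field of view per eye and the commonly cited human foveal acuity of about 60 pixels per degree (both figures are rough assumptions, not Oculus’s numbers):

```python
# Angular resolution: how many pixels the display spreads
# across each degree of the wearer's field of view.
ACUITY_PPD = 60   # approximate human foveal acuity, pixels per degree
FOV_DEG = 100     # assumed horizontal field of view per eye

def pixels_per_degree(horizontal_px, fov_deg=FOV_DEG):
    return horizontal_px / fov_deg

today = pixels_per_degree(1300)    # ~1,300 px per eye on current prototype
eight_k = pixels_per_degree(7680)  # 8K per eye
```

Today’s roughly 13 pixels per degree falls far short of the ~60 needed, while 8K per eye lands around 77, which is why 8K is about where individual pixels should stop being visible.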
Surround sound has been available for years in home entertainment systems. But immersive VR needs to move beyond basic directionality toward pinpoint accuracy in space. Further, sounds need to compensate for and adapt to your movement.
This too is almost available, if not yet perfected. In Microsoft’s (recently shuttered) Silicon Valley lab, a research team combined head-tracking technology like the Rift’s with a 3D-scanned physiological profile of a user’s head to deliver positional audio.
“Essentially we can predict how you will hear from the way you look,” Ivan Tashev, one of the researchers, told MIT Technology Review. “We work out the physical process of sound going around your head and reaching your ears.”
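One of the main physical cues involved is easy to model in simplified form: sound reaches the far ear slightly later than the near ear, and the brain uses that delay to localize the source. The sketch below uses Woodworth’s classic spherical-head approximation; it is a textbook simplification, not Microsoft’s personalized method, which additionally models how sound diffracts around each individual’s head and ears:

```python
import math

HEAD_RADIUS_M = 0.0875   # average adult head radius (assumed)
SPEED_OF_SOUND = 343.0   # m/s in air at room temperature

def interaural_time_difference(azimuth_deg):
    """Woodworth's spherical-head estimate of how much later
    sound arrives at the far ear, in seconds."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + math.sin(theta))

# A source directly ahead produces no delay; one 90 degrees to
# the side arrives roughly two-thirds of a millisecond later.
itd_side = interaural_time_difference(90)
```

Combine such delay and loudness cues with live head tracking, recomputed as you turn, and a sound can appear pinned to a fixed point in the virtual room.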
Sony is also working on positional audio for its virtual reality system (Project Morpheus). High-definition pinpoint sound using the same motion sensing and software tricks enabling the Rift, then, seems plausible in the near future.
Touch, Taste, and Smell
Now, things get a little dicey. While we can imagine providing a sense of touch using jets of air, interactive body suits, or other peripherals—there isn’t anything yet that fulfills this particular requirement in a completely immersive way. Smell and taste may be just as difficult as touch to credibly recreate (sorry, Smell-O-Vision and DigiScents fans).
Transporting body parts into the virtual world for interaction is much closer. Groups are already working to adapt sensored devices like hand-held controllers, gloves, suits, and infrared 3D imaging systems (e.g., Kinect or Leap Motion) to link real and virtual bodies.
Unrestricted movement is a harder problem, though specialized treadmills or moving floors might allow us to walk the virtual world without running into a wall.
As we develop the ability to walk through the door, we’ll need a place to visit. The earliest VR experiences have been bare-bones adaptations of video game worlds. Game developers are working to more completely adapt existing games for VR. And filmmakers are excited to try 360-degree filming for immersive moviemaking.
Meanwhile, Philip Rosedale, creator of Second Life, is developing a kind of sequel to Second Life for virtual reality’s next act. The software, called High Fidelity, will be compatible with a combination of body sensors and computer vision to reproduce gestures and facial expressions in a virtual body (or avatar) in a virtual world.
High Fidelity, like Second Life, will be open source all the way. That is, the world won’t be controlled from the top down but will instead blossom from the bottom up. Crowdsourced world building allows for otherwise impossible richness and complexity.
Anyone who’s ever been in Second Life knows rendering even a simple shared virtual world takes a fast internet connection and a powerful computer. High Fidelity has an interesting solution in mind—instead of centralized servers, the job would be distributed among millions of user laptops and devices.
Distributed (super)computing added to continued growth in processing power and faster fiber connections could handle increasingly immersive, realtime worlds.
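The basic idea behind such a scheme is to carve the world into regions and deterministically assign each region to a participating machine, so every client agrees on who simulates what without a central coordinator. High Fidelity’s actual architecture isn’t detailed here, so the sketch below is purely illustrative:

```python
import hashlib

def region_to_node(region_x, region_y, nodes):
    """Deterministically assign a world region to one of many
    user machines, so no central server simulates everything."""
    key = f"{region_x}:{region_y}".encode()
    digest = int(hashlib.sha256(key).hexdigest(), 16)
    return nodes[digest % len(nodes)]

nodes = ["laptop-a", "laptop-b", "phone-c"]
owner = region_to_node(12, -4, nodes)  # same inputs always pick the same node
```

Real systems layer consistent hashing, replication, and hand-off on top of this so regions survive machines joining and leaving, but the payoff is the same: simulation capacity grows with the number of users in the world.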
The Final Frontier
Stephen Wolfram says, “When there’s no reason something’s impossible, it ends up being possible.” We’ve been discussing external devices meant to fool the brain from the outside in—ultimately we may directly stimulate the brain itself.
As the understanding of our brains advances in tandem with the tech to influence them, perhaps we’ll learn to simulate thoughts, visions, and dreams Matrix-like.
The tantalizing tip of the iceberg? Scientists recently announced they’d successfully used EEG to record and transfer thoughts online between brains 5,000 miles apart.
The researchers involved in the project wrote, “We anticipate that computers in the not-so-distant future will interact directly with the human brain in a fluent manner, supporting both computer- and brain-to-brain communication routinely.”
Terence McKenna says this is the final frontier, “Our destiny is to become what we think, to have our thoughts become our bodies and our bodies become our thoughts.”
Image Credit: Shots of Awe/YouTube