When it comes to virtual reality, seeing is believing. But after those first few fascinated moments, you realize something: Seeing is often all you’ll be doing.
Although nailing virtual visuals is very likely the tipping point for the technology, it is also just the tip of the iceberg. To date, we lack easy, intuitive mobility in virtual worlds, and if VR is to go mainstream, it’s a problem that has to be remedied.
The good news? The world of inputs—or the tools that will bring your body into virtual reality—is packed with exciting experiments. But it’s also still early days, and that same world is currently a riot of gadgets with limited cohesion.
While consumer virtual reality really is nearing launch, the challenge posed by inputs is significant. It holds the key to just how freely we can roam virtual realms. With the visual hardware advancing rapidly and developers hard at work on the first wave of content, inputs are the frontier—and we’re just starting to map it.
What then is the lay of the land, and what paths will lead us through the wild?
There are three basic approaches: Things you hold, things you wear, and things that sense you at a distance. Of course, from the Nintendo Power Glove to Microsoft Kinect, these approaches aren’t new, and none is perfect in every scenario. The likeliest outcome will be an integrated system uniting multiple approaches.
Things You Hold
The first thing you do after strapping on a VR headset is take in the view. The next impulse? Try to reach out and touch something. We humans are all thumbs and brains. The hands are packed with nerves and, of course, they’re our primary mode of interacting with and manipulating the world. It only makes sense we start there.
Traditionally, we’ve used hand-centric inputs (mouse, keyboard, game controller) in computing. We can use these tools for VR too—and likely will in some instances early on—but they aren’t perfectly tailored to the experience.
When you’re working on your computer or playing a video game, you can see your hands. And while you might not need to stare at them, a quick orienting glance is useful. But in virtual reality, wearing a head-mounted display, you can’t see your hands. Groping around to find your input breaks that sense of presence.
A number of companies are working to adapt these traditional inputs to VR. The most important modification? VR controllers, tracked by sensors, show up in front of you as a pair of virtual hands, providing that critical visual reference.
The Oculus Touch, for example, is a wireless half-moon controller for each hand. The controllers sense when you’re touching them and use a simple configuration of trigger and buttons to give you a working pair of virtual hands. The HTC Vive controllers are also designed to be held, one per hand, and manipulated with a trigger and touchpad (they’re tracked by the Lighthouse system—more on this later).
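To make the idea concrete, here’s a minimal sketch in Python of how a tracked controller’s pose and sensor readings might drive a virtual hand each frame. The `poll_controller` function and `ControllerState` fields are hypothetical stand-ins, not the actual Oculus or HTC APIs.

```python
from dataclasses import dataclass

@dataclass
class ControllerState:
    position: tuple       # (x, y, z) in meters, from the tracking system
    orientation: tuple    # rotation as a quaternion (w, x, y, z)
    trigger: float        # 0.0 (released) to 1.0 (fully pressed)
    thumb_touching: bool  # capacitive sensing: is the thumb resting on the pad?

def poll_controller(hand: str) -> ControllerState:
    """Stand-in for an SDK call; returns a fixed pose for illustration."""
    return ControllerState((0.2, 1.1, -0.4), (1.0, 0.0, 0.0, 0.0), 0.8, True)

def update_virtual_hand(hand: str):
    state = poll_controller(hand)
    # The virtual hand inherits the controller's tracked pose...
    pose = (state.position, state.orientation)
    # ...and finger state is inferred from the trigger and touch sensors.
    gripping = state.trigger > 0.5
    thumb_up = not state.thumb_touching
    return pose, gripping, thumb_up

for hand in ("left", "right"):
    print(hand, update_virtual_hand(hand))
```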
Beyond simply manipulating objects, handheld controllers offer another benefit: you can design them to provide “haptic” feedback (a rudimentary sense of touch). The Tactical Haptics controller, for example, uses a series of sliding pads to create friction and a sense of resistance or weight in your hand. Others use vibration to communicate impact, like when a virtual golf club contacts a virtual golf ball.
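As a rough illustration, vibration haptics can be as simple as mapping impact speed to rumble strength. The `set_vibration` hook below is invented for this sketch; real controller SDKs expose their own haptic calls.

```python
def set_vibration(amplitude: float, duration_ms: int):
    """Invented stand-in for a controller SDK's rumble call."""
    print(f"rumble: amplitude={amplitude:.2f} for {duration_ms} ms")

def on_collision(relative_speed_mps: float):
    # Map impact speed to rumble strength, capped at full amplitude.
    amplitude = min(relative_speed_mps / 10.0, 1.0)
    set_vibration(amplitude, duration_ms=50)

on_collision(relative_speed_mps=7.5)  # a solid golf swing -> strong, short pulse
```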
Think of these as video game controllers tailored for VR. They’re likely to be the earliest, simplest, and cheapest VR inputs, but they likely won’t be alone for long.
Things You Wear
Ask someone to conjure up a virtual reality rig in their head, and chances are they’ll imagine something wearable—from a simple glove to a wired-up jump suit. And for good reason. In one sense, wearable inputs are the best of all worlds.
Instead of relying on triggers and buttons, wearable inputs can accurately bring your actual hands (and individual fingers), arms, and legs into the digital realm. This more closely mirrors the real world, allowing for more intuitive interactions, and because they’re still in contact with your body, they can provide haptic feedback too.
It’s still early for wearable VR inputs, but a few firms are working the problem. Nod, for example, provides minimalist skeletal tracking with just a pair of smart rings. Others are working on sensored gloves and body suits to provide more complete tracking.
We had the chance to chat with the team from Perception Neuron at this year’s Silicon Valley Virtual Reality (SVVR) conference. They offered demos of their experimental VR glove, but the company is working on a full-body system too.
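To get a feel for how a sensored glove works in principle, here’s a toy sketch that normalizes raw flex-sensor readings into per-finger curl values. The sensor ranges and readings are made up for illustration and don’t reflect Perception Neuron’s actual hardware.

```python
# Hypothetical ADC readings for one flex sensor: open hand to full fist.
RAW_MIN, RAW_MAX = 120, 890

def curl(raw: int) -> float:
    """Normalize a raw flex-sensor reading to 0.0 (straight) - 1.0 (bent)."""
    clamped = max(RAW_MIN, min(raw, RAW_MAX))
    return (clamped - RAW_MIN) / (RAW_MAX - RAW_MIN)

readings = {"thumb": 300, "index": 150, "middle": 860, "ring": 870, "pinky": 880}
finger_pose = {finger: round(curl(raw), 2) for finger, raw in readings.items()}
print(finger_pose)  # index nearly straight, others curled: a pointing gesture
```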
Wearable inputs are great—but the next solution is awesome because it requires you to hold or wear little or nothing at all.
Things That Sense You at a Distance
Think Microsoft Kinect, only more accurate. Systems that sense you at a distance can’t deliver haptic feedback, but they can bring your body into the virtual realm with few or no wearable or held sensors. That’s attractive for tracking position, but it’s critical for body parts that are less amenable to sensors, like your eyes and face.
Add a Leap Motion device to a head-mounted display, for example, and you can track hands (within the sensor’s range) with high precision. And perhaps the most anticipated system is the Vive’s Lighthouse, which relies on two small boxes mounted in a room. Each box emits a synchronizing LED flash followed by sweeping laser beams, and photosensors on the Vive headset and controllers time those sweeps to pinpoint your position. Lighthouse allows you to actually move around the room—and translates that movement into virtual reality.
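For the curious, here’s a back-of-the-envelope sketch of the principle behind swept-laser tracking: the delay between a sync flash and a laser sweep hitting a photosensor encodes a bearing angle, and two bearings from stations at known positions pin down a 2D location. The numbers are illustrative rather than Lighthouse’s real timings, and the production system solves the full 3D problem with many sensors.

```python
import math

# A hypothetical 60 Hz rotor; real base stations have their own timings.
SWEEP_RATE = 2 * math.pi * 60  # laser sweep rate in radians per second

def angle_from_delay(delay_s: float) -> float:
    """Convert the sync-flash-to-laser-hit delay into a bearing angle."""
    return SWEEP_RATE * delay_s

def triangulate(station_a, station_b, theta_a, theta_b):
    """Intersect two bearing rays from stations at known 2D positions."""
    (xa, ya), (xb, yb) = station_a, station_b
    ca, sa = math.cos(theta_a), math.sin(theta_a)
    cb, sb = math.cos(theta_b), math.sin(theta_b)
    # Solve station_a + t*(ca, sa) == station_b + s*(cb, sb) for t.
    t = ((xb - xa) * sb - (yb - ya) * cb) / (ca * sb - sa * cb)
    return (xa + t * ca, ya + t * sa)

# Demo: a photosensor at (2, 2) seen from stations at (0, 0) and (4, 0).
delay_a = (math.pi / 4) / SWEEP_RATE      # sweep reaches it at a 45-degree bearing
delay_b = (3 * math.pi / 4) / SWEEP_RATE  # and at 135 degrees from the second station
print(triangulate((0, 0), (4, 0),
                  angle_from_delay(delay_a), angle_from_delay(delay_b)))
# -> approximately (2.0, 2.0)
```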
Clearly, real-world objects—like walls or furniture—limit how far you can move, so Lighthouse sets boundaries and warns you as you approach them. Even so, some might dedicate a room to VR. (Truly free-range VR may be more of a theme park experience in obstacle-free warehouses, or rely on bulky inputs, like omnidirectional treadmills.)
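A boundary warning like this can be surprisingly simple in principle: measure your distance to the nearest wall of the play area and ramp up a warning as it shrinks. The room dimensions and threshold below are invented for illustration.

```python
# Hypothetical play area and warning threshold, in meters.
PLAY_AREA = (0.0, 0.0, 3.0, 4.0)  # x_min, y_min, x_max, y_max
WARN_DISTANCE = 0.5               # start warning half a meter from a wall

def boundary_warning(x: float, y: float) -> float:
    """Warning intensity from 0.0 (safe) to 1.0 (at the boundary)."""
    x_min, y_min, x_max, y_max = PLAY_AREA
    # Distance to the nearest wall of the rectangular play area.
    d = min(x - x_min, x_max - x, y - y_min, y_max - y)
    if d >= WARN_DISTANCE:
        return 0.0
    return 1.0 - max(d, 0.0) / WARN_DISTANCE

print(boundary_warning(1.5, 2.0))  # center of the room -> 0.0
print(boundary_warning(0.1, 2.0))  # 10 cm from a wall -> 0.8
```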
Additionally, Fove, High Fidelity, and others are working on face- and eye-tracking tech. Together, these approaches promise to make social interactions in VR much more emotionally engaging. When you smile and make eye contact with the avatar in front of you, your own avatar mimics your expression, and your friend can laugh, frown, or wink back.
One drawback of using only sensing-at-a-distance inputs is that you can’t provide haptic feedback. But there is a possible solution. Ultrasonic force field technology, like that made by Ultrahaptics, can communicate a sense of touch (pressing a button, for example) using naught but high-frequency sound waves.
The (Sci-Fi) Future of No Inputs Whatsoever…?
Some science fiction depictions of virtual reality more closely resemble the current ensemble of inputs under development (e.g., Ready Player One). Others do away with inputs and send virtual worlds directly to the brain (e.g., The Matrix).
How close are we to the latter? Although brain-computer interfaces (or BCIs) are advancing, a world of direct-to-brain VR isn’t near. This scenario presupposes an understanding of the brain vastly surpassing today’s, along with an almost unimaginable degree of fine-grained control and stimulation distributed throughout it.
That said, there are already non-invasive EEG devices that can read brain activity well enough to let us control a cursor on a screen just by thinking about it. Coupled with other inputs, such technology might communicate intent, increasing accuracy and decreasing latency for a more seamless experience.
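As a toy illustration of the idea, imagine a trained classifier (not shown here) that turns EEG features into “left” and “right” intent confidences; steering a cursor is then just arithmetic on those confidences. Everything below is invented for this sketch rather than drawn from any real BCI toolkit.

```python
def cursor_delta(p_left: float, p_right: float, gain: float = 10.0) -> float:
    """Horizontal cursor velocity (pixels per frame) from intent confidences."""
    return gain * (p_right - p_left)

# A classifier leaning toward "right" nudges the cursor rightward.
print(cursor_delta(p_left=0.2, p_right=0.7))  # -> 5.0 pixels per frame
```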
Further, while inputs will remain a critical part of virtual reality for the foreseeable future, the sensors driving them may become nearly invisible. Indeed, a prime driver of the recent push for virtual reality has been the increasing power and decreasing size of sensors honed by the smartphone industry. This trend is likely to continue.
We’ll Soon Be Free to Roam the Virtual
Although the field of VR inputs is currently wild and somewhat splintered, expect it to move fast in the coming years. Winning strategies will combine multiple approaches into a sleek, simple, and intuitive interface—freeing us to roam the virtual at will.
Image Credit: Shutterstock.com, Oculus