Reality 2.0: A Way to Upgrade Your Perception of Reality

Imagine enhancing your perception of the environment around you, other people nearby, and even yourself. Imagine reliving past experiences as if they are happening right now. Imagine a crumbling border between physical and virtual reality. Such a future is closer than you might think.

The basic premise of reality 2.0 is to bridge the gap between two worlds: the physical reality we experience and the digital reality we imagine. This would go beyond using current augmented reality to overlay digital things on the physical world, like a virtual Pokémon on the table.

Reality 2.0 literally replaces your visual perception of the world with a fully virtual 3D environment.

How might this work? Imagine taking an exact laser scan of your current environment, your living room, office, or any other space, and using it to create a virtual 3D replica of everything around you. Now re-create the room in VR and overlay the virtual experience on the real world. Once you get there, you're golden: you are experiencing an upgradable version of reality in which you can modify the elements of your virtual world however you desire.

Reality 2.0 might seem like a distant future. But it isn’t.

To showcase the possibility, I sat down for three days, deliberately limiting myself to a short timeframe, to build a simple prototype in my living room. Using a laser scan from a HoloLens, I created a virtual version of the room. Because the virtual view is synchronized with my actual living room, it becomes a room-scale VR experience that replaces my perception of the world with a digital version created in a 3D engine.

Welcome to a World Only Limited by Your Imagination

Full control over your perception will let you do some pretty spectacular things: replacing a random person with a famous actor in real time, transforming your current room into a beach hut, re-experiencing any given moment that happened in this room, and much more.

Given the short timeframe, I had to limit my prototype to simple examples.

Here are three of them.

1. The ability to change parts of your environment

This one is all about changing aspects of the objects in your world, like their color or the material they are made from. You could also replace an object entirely, as long as you keep its basic geometry intact: my couch might as well be an ancient Roman sofa in my perception, as long as I don't bump into it in the real world.

The basic idea behind replacing objects is the adaptability of the human brain. One of the most interesting experiments in this area is the rubber hand illusion, in which subjects start to associate themselves with a rubber hand, even experiencing phantom pain, purely because of the visual impression that the rubber hand is part of their body.

My theory: if you change objects or aspects of objects, like turning wood into stone, the brain's natural adaptability will largely compensate for the difference over an extended experience of the new environment.

2. A useful virtual workspace

People are used to a way of working that current virtual worlds fail to support. If your virtual world matches your real one, you can move freely around your office wearing VR goggles, avoid bumping into obstacles, and keep from spilling a real cup of coffee on your phone, all while staying inside a virtual world. This is very handy if you work with 3D objects. Being in a cubicle can finally be somewhat entertaining. You can experience meetings or pre-recorded lectures as if you were there, or talk to a person on the other side of the world who appears to stand right in front of you, at least in your personal perception.

Many jobs don't really need physical presence in an office most of the time. For remote work, it would help a lot to create a feeling of interaction akin to meeting face-to-face, far more immersive than a video call. And it would save a lot of money and a lot of traffic, on the roads and in the skies.

3. Experience different places

It’s easy to understand why tearing down your wall to stare at a ginormous screen is way cooler than watching your normal TV, so I've included this scenario in the prototype. Part of the environment stays real, the part that is close to you or important. You could easily imagine futuristic games or home applications that happen around you and merge your virtual and real experiences.

Something I would have loved to include in the prototype, but which did not fit within the initial timeframe, is virtual travel. You could turn your living room into a cabin in the mountains, your walls into wooden beams, and the world outside into a mountain scene. Or how about a beach hut, where your couch turns to rattan and your surroundings into a beach? Essentially, you have a magic wand that slightly adjusts everything around you while keeping the basic geometry intact.

Think one step further, and you could merge different parts of the world into one in real time by adjusting the positions of objects while keeping their relative distances intact.

How Does It Work?

There is more to the idea than meets the eye. Re-creating the real world as a virtual world in real time requires at least four things to work:

1. Sensors to track and record your current world

First, you need to create a snapshot of your current environment, ideally using structure sensors instead of a normal camera. A structure sensor is essentially a laser scanner: it sends out rays in certain directions and measures the points they hit to capture their position and color. New VR goggles are already equipped with such sensors, at least those following Microsoft's mixed reality approach.
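
To make this step more concrete, here is a minimal sketch of how a depth frame from such a sensor might be turned into a colored point cloud. This is not the code from my prototype; it assumes a simple pinhole camera model, and the intrinsics (fx, fy, cx, cy) are hypothetical placeholders for a sensor's calibrated values.

```python
import numpy as np

def depth_to_point_cloud(depth, color, fx, fy, cx, cy):
    """Back-project a depth image into a colored 3D point cloud.

    depth: (H, W) array of distances in meters (0 = no reading)
    color: (H, W, 3) RGB image aligned with the depth image
    fx, fy, cx, cy: pinhole camera intrinsics (calibrated per sensor)
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # pixel coordinates
    x = (u - cx) * depth / fx  # standard pinhole back-projection
    y = (v - cy) * depth / fy
    points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
    colors = color.reshape(-1, 3)
    valid = points[:, 2] > 0  # drop pixels without a depth reading
    return points[valid], colors[valid]

# Hypothetical intrinsics, for illustration only:
# points, colors = depth_to_point_cloud(depth, rgb, fx=525.0, fy=525.0,
#                                       cx=319.5, cy=239.5)
```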

As a bonus at this stage, you could create 3D recordings of any environment and play them back later. Imagine for a second that this kind of technology had already existed a few decades ago. What a huge improvement it would be over staring at a washed-out picture in a photo book.

2. A system to recognize things and their properties

You also need a way to interpret your sensor data, something that recognizes the basic geometry of your current environment, finds all the objects and correctly classifies them, and maps their position relative to the room and to each other. The good part about this step? You only need to map and recognize the things the user is looking at, and you can add more details if required.
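
As an illustration of this step, here is a minimal sketch that leans on an off-the-shelf detector, torchvision's pretrained Faster R-CNN, to find and classify objects in a camera frame. My prototype didn't use this exact stack, and a full system would also need depth-aware pose estimation to map each object into the room; treat this as one plausible building block.

```python
import torch
import torchvision

# Pretrained detector as a stand-in for the recognition system.
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

def detect_objects(rgb_image, score_threshold=0.7):
    """Return (label, score, box) tuples for objects in an RGB frame.

    rgb_image: (H, W, 3) uint8 numpy array, e.g. from the headset camera.
    """
    tensor = torch.from_numpy(rgb_image).permute(2, 0, 1).float() / 255.0
    with torch.no_grad():
        prediction = model([tensor])[0]
    return [
        (label.item(), score.item(), box.tolist())
        for label, score, box in zip(
            prediction["labels"], prediction["scores"], prediction["boxes"])
        if score >= score_threshold
    ]
```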

3. A way to create and display everything

The third step is to create a virtual world that represents your current physical world as closely as possible, either by automatically generating new 3D objects from the collected data or by using shortcuts like the Euclideon Unlimited Detail engine. The higher the quality, the better the immersion.
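
For illustration, here is one plausible way to handle the reconstruction part with the open-source Open3D library (an assumption for this sketch, not what my prototype used): feed the scanned point cloud into Poisson surface reconstruction and get back a triangle mesh a 3D engine can render.

```python
import open3d as o3d

def point_cloud_to_mesh(points, colors):
    """Turn a scanned point cloud into a renderable triangle mesh.

    points: (N, 3) array of 3D positions; colors: (N, 3) RGB in [0, 1].
    """
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points)
    pcd.colors = o3d.utility.Vector3dVector(colors)
    pcd.estimate_normals()  # Poisson reconstruction needs oriented normals
    mesh, _ = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=9)  # higher depth = finer but slower reconstruction
    return mesh
```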

4. A room-scale VR experience

Last but not least, you need a VR system that allows you to move freely in your virtual space, as some current and presumably most future VR goggles do. The only tricky part is overlaying the virtual world precisely on top of the real one.
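
That overlay boils down to finding the rigid transform that maps the scanned model onto the headset tracker's coordinate frame. A minimal sketch, assuming you have a handful of matching anchor points (say, room corners located in both coordinate systems), is the classic Kabsch algorithm:

```python
import numpy as np

def align_virtual_to_real(virtual_pts, real_pts):
    """Find rotation R and translation t with R @ virtual + t ~ real.

    Both inputs are (N, 3) arrays of matching anchor points (N >= 3),
    e.g. room corners located in the scan and by the headset tracker.
    """
    v_center = virtual_pts.mean(axis=0)
    r_center = real_pts.mean(axis=0)
    H = (virtual_pts - v_center).T @ (real_pts - r_center)
    U, _, Vt = np.linalg.svd(H)            # Kabsch algorithm via SVD
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T  # guard against reflections
    t = r_center - R @ v_center
    return R, t
```

Apply the resulting transform to every vertex of the reconstructed room, and the virtual couch lines up with the real one.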

Why Is It Better Than Current VR/AR?

An obvious question at this point might be: why should we use reality 2.0 in a world that already has working VR (virtual reality) and AR (augmented reality)?

Why is it better than current VR?

Currently, VR is centered on experiencing fully virtual worlds, and as a result, people lose touch with reality. I've witnessed more than a few people experience VR for the first time in their lives. The first thing they always do is move. In a room-scale experience like the HTC Vive, they can move, until they bump into a table or a wall because they are not familiar with the boundaries of the safe zone. Losing touch with your physical reality leads to a lot of problems, and right now, current VR technology has no solution for this.

Why is it better than current AR?

AR could accomplish the same thing as reality 2.0 in the long run, but there is still quite a long way to go. Creating reality 2.0 in AR requires much more processing power and much better AR goggles than, for example, a HoloLens. The HoloLens has a very narrow field of view; this can and probably will be fixed in the near future. But you are still left with the problem of removing or replacing an object using AR. Replacing something with nothing requires a lot of optical trickery and essentially many of the things reality 2.0 already does, like recreating room geometry and building 3D scenes in real time. Compared to a fully virtual world, however, your options are much more limited, and the experience won't be as seamless.

Reality 2.0 is not about replacing VR or AR. It's more like a lovechild of the two: a combination that delivers on promises each individual technology fails to keep.

A Not-So-Distant Future

There is quite a lot of work to be done to actually make reality 2.0 a real thing (pun intended). The concept will only work on a large scale if you can add any new object to the scene and have it automatically recognized, turned into a virtual 3D object, and added to your virtual environment in real time. This obviously requires a lot of automation.

Luckily, we are living in the age of automation thanks to AI:

  • Object detection algorithms are really good now and can detect objects in real time beyond human capability, even with a standard camera rather than a structure sensor.
  • Speak of the devil: structure sensors and inside-out tracking are going to be a default feature of the new mixed reality platform Microsoft and its partners are trying to push into the market.
  • Ever heard of meta-objects? They could be an enormous shortcut to making this happen. A meta-object can most easily be described as a super-class for all objects of the same type: a white leather couch, for example, is essentially a subset of a meta-couch. Meta-objects are used in some interesting AI algorithms that can interpolate every individual object from its super class (see the sketch after this list).
  • Add multi-user support and a central database for objects, materials, and spaces, and you could save new users a lot of time by reusing pre-existing data.
  • Add layers based on how often the objects you are scanning change, so that rarely changing things like walls don't need constant re-scanning.
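
To make the meta-object idea above a bit more tangible, here is a heavily simplified sketch. It assumes you already have a trained generative model whose decoder maps a latent vector to an object's geometry and appearance; `decode` and the latent codes are hypothetical stand-ins for such a model.

```python
import numpy as np

def interpolate_meta_object(decode, latent_a, latent_b, steps=5):
    """Sweep between two instances of the same meta-object class.

    decode: hypothetical trained decoder mapping a latent vector to an
            object (e.g. from an autoencoder trained on couches).
    latent_a, latent_b: latent codes of two concrete couches.
    """
    for alpha in np.linspace(0.0, 1.0, steps):
        # Linear interpolation in latent space yields plausible
        # in-between objects: couch A gradually morphs into couch B.
        yield decode((1.0 - alpha) * latent_a + alpha * latent_b)
```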

All of this will take some work before it adapts to every environment. But give current technologies a bit more time, combine the pieces that already exist, and we should be good to go.

A few years from now, imagine using ultra-high-resolution goggles whose pixels your eyes can no longer make out. These will get smaller and lighter and be hooked up to more computational power and better 3D engines. Eventually, you'll swap out the goggles for something like contact lenses, a futuristic idea that some companies are already working on.

Imagine a not-so-distant future where it's totally natural to upgrade your perception of reality, and the border between real and virtual begins to crumble.

If you like the idea and want to offer your support, please feel free to contact me.
