Thursday, December 30, 2021

Augmented versus Virtual Metaverses

A space we also explored, in both senses.  AR is harder to build, yes, but it's not clear the user cares how the value is delivered, as long as it is delivered.

Why AR, not VR, will be the heart of the metaverse

This article was contributed by Louis Rosenberg, CEO and chief scientist at Unanimous AI.

My first experience in a virtual world was in 1991 as a PhD student working in a virtual reality lab at NASA. I was using a variety of early VR systems to model interocular distance (i.e. the distance between your eyes) and optimize depth perception in software. Despite being a true believer in the potential of virtual reality, I found the experience somewhat miserable. Not because of the low fidelity, as I knew that would steadily improve, but because it felt confining and claustrophobic to have a scuba mask strapped to my face for any extended period.
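
To make the geometry concrete: the binocular depth cue comes from the small angular difference between the two eyes' lines of sight, and it falls off quickly with distance. Here is a minimal Python sketch of that relationship (an illustration of the geometry only, not the lab's actual software; the 0.063 m interocular distance is just a typical adult value I'm assuming):

    import math

    def vergence_angle_deg(ipd_m: float, distance_m: float) -> float:
        # Angle between the two eyes' lines of sight when fixating a point
        # at distance_m; a larger angle means a stronger stereo depth cue.
        return math.degrees(2 * math.atan((ipd_m / 2) / distance_m))

    # Assumed typical adult interocular distance: 0.063 m.
    for d in (0.5, 1.0, 2.0, 5.0, 10.0):
        print(f"{d:>4.1f} m -> {vergence_angle_deg(0.063, d):6.3f} deg")

At half a meter the angle is about 7 degrees; by 10 meters it has shrunk to about a third of a degree, which is why software that models interocular distance matters most for near-field depth.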

Even when I used early 3D glasses (i.e. shuttering glasses for viewing 3D on flat monitors), the sense of confinement didn’t go away. I still had to keep my gaze forward, as if wearing blinders to the real world. There was nothing I wanted more than to take the blinders off and allow the power of virtual reality to be splattered across my real physical surroundings.

This sent me down a path to develop the Virtual Fixtures system for the U.S. Air Force, a platform that enabled users to manually interact with virtual objects that were accurately integrated into their perception of a real environment. This was before phrases like “augmented reality” or “mixed reality” had been coined. But even in those early days, watching users enthusiastically experience the prototype system, I was convinced the future of computing would be a seamless merger of real and virtual content displayed all around us.

Augmented Reality Research 1992 (USAF - L Rosenberg)

Cut to 30 years later, and the word “metaverse” has suddenly become all the rage. At the same time, the hardware for virtual reality is significantly cheaper, smaller, lighter, and has much higher fidelity. And yet, the same problems I experienced three decades ago still exist. Like it or not, wearing a scuba mask is not pleasant for most people, making you feel cut off from your surroundings in a way that’s just not natural.

This is why the metaverse, when broadly adopted, will be an augmented reality environment accessed using see-through lenses. This will hold true even though full virtual reality hardware will offer significantly higher fidelity. The fact is, visual fidelity is not the factor that will govern broad adoption. Instead, adoption will be driven by which technology offers the most natural experience to our perceptual system. And the most natural way to present digital content to the human perceptual system is by integrating it directly into our physical surroundings.

Of course, a minimum level of fidelity is required, but what’s far more important is perceptual consistency. By this, I mean that all sensory signals (i.e. sight, sound, touch, and motion) feed a single mental model of the world within your brain. With augmented reality, this can be achieved with relatively low visual fidelity, as long as virtual elements are spatially and temporally registered to your surroundings in a convincing way. And because our sense of distance (i.e. depth perception) is relatively coarse, it’s not hard for this to be convincing.
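
To illustrate what “spatially registered” means in practice, here is a hypothetical sketch (the function name and camera parameters are my own for illustration, not taken from any particular AR SDK): each frame, the renderer re-projects a world-anchored virtual point through the current head pose, so the object stays pinned to the real room as you move.

    import numpy as np

    def world_to_display(p_world, head_pos, head_yaw_rad, focal_px, cx, cy):
        # Re-project a world-anchored virtual point into display pixels for
        # the current head pose (yaw-only rotation here, for brevity).
        c, s = np.cos(-head_yaw_rad), np.sin(-head_yaw_rad)
        R = np.array([[c, 0.0, s],
                      [0.0, 1.0, 0.0],
                      [-s, 0.0, c]])  # world -> head rotation about vertical axis
        x, y, z = R @ (np.asarray(p_world, float) - np.asarray(head_pos, float))
        if z <= 0:
            return None  # point is behind the viewer
        return (cx + focal_px * x / z, cy - focal_px * y / z)

    anchor = [0.0, 0.0, 2.0]  # a virtual object "pinned" 2 m ahead in the room
    print(world_to_display(anchor, [0, 0, 0], 0.0, 800, 640, 360))
    print(world_to_display(anchor, [0, 0, 0], np.radians(10), 800, 640, 360))
    # Turning the head shifts the pixel position of the anchor, which is
    # exactly what keeps it perceptually fixed to the real surroundings.

Note that what makes this convincing is the quality of the pose tracking feeding the re-projection, not the render fidelity, which is the point of the argument above.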

But for virtual reality, providing a unified sensory model of the world is much harder. This might sound surprising because it’s far easier for VR hardware to provide high-fidelity visuals without lag or distortion. But unless you’re using elaborate and impractical hardware, your body will be sitting or standing still while most virtual experiences involve motion. This inconsistency forces your brain to build and maintain two separate models of your world — one for your real surroundings and one for the virtual world that is presented in your headset.

When I tell people this, they often push back, forgetting that regardless of what’s happening in their headset, their brain still maintains a model of their body sitting on their chair, facing a particular direction in a particular room, with their feet touching the floor, and so on. Because of this perceptual inconsistency, your brain is forced to maintain two mental models. There are ways to reduce the effect, but it’s only when you merge real and virtual worlds into a single consistent experience (i.e. foster a unified mental model) that this truly gets solved.
