Technology of the future…

I’ve been reading some of the coverage in the tech press about the Facebook acquisition of Oculus VR, including the always excellent insights from Michael Mace in his Mobile Opportunity blog. One thing that’s clear from the commentary and the acquisition itself is a shared belief that there’s a virtual reality “killer app” out there beyond gaming, the (so far) primary use case for the Oculus Rift headset and other similar VR gear. As Mace puts it:

Isn’t it interesting how companies impose their own mental paradigms on technologies? Google looks at glasses and sees a way to search and consume web services on the go. Facebook looks at goggles and sees a new means for social communication.

Investors, I imagine, see dollar signs, and hope that one of those big smart companies figures it all out. Meanwhile, as Mace notes, the fans of Oculus just want a great gaming experience, as the founder of Minecraft explains.

What if, though, there is no “killer app” for VR? I’m not convinced that there is another compelling use case for immersive virtual reality (as opposed to “augmented reality”, of which more below) beyond games and specialized engineering and simulation uses.

I write this based on some experience, having been very active in the last round of the VR “revolution” in the late 1990s. True, the technology was far inferior to what we have now, and the resolution of the headsets was poor, but as we discovered at the time, many of the limitations of 3D immersive computing are human, not technological.

[Image: VR demo, 1994]

The last VR “revolution”, circa 1994

Mace disagrees, predicting a new age of “sensory computing” that includes 3D displays, 3D interfaces, 3D gestures, and 3D printing. This prediction is based on an assertion that humans are comfortable in a 3D environment:

In the real world, we remember things spatially. For example, I remember that I put my keys on my desk, next to the sunglasses. We can tap into that mental skill by creating 3D information spaces that we move through, with recognizable landmarks that help to orient us.

In fact, we humans are very much 2D creatures. Gravity keeps us constrained in the Z-axis, as it were, so we mostly move around in two dimensions (X and Y, to continue the metaphor), and so (I believe) relate better to two-dimensional concepts and interfaces. To Mace’s example, you put your keys on the desk (a 2D space) next to your glasses (and not above them). This “3D” model takes place on a 2D plane.

In addition to the cognitive limitations, there are other limits to the immersive VR experience, at least today, starting with the odd experience of being able to look around you but not see your own body.

Of course, I may be completely wrong, and not able to see the true potential of immersive virtual reality. I am, though, much more excited about augmented reality (AR), where instead of removing yourself to a virtual world, you add information and interactivity to the real one. This is a case where better technology will make for a completely different experience. Current AR “browsers” like the Wikitude app are cool but cumbersome on a phone. A more seamless AR experience will require more seamless hardware. Google Glass (for better or worse) is one big step in that direction, supported by AR software like Wikitude, Metaio, and many others to come.


Take the multitasking challenge

Steve Litchfield from All About Symbian has a challenge for you; he’s laid out a series of relatively common tasks, and wants to know whether you can complete them in less time than the 53 seconds he demonstrates on his trusty Nokia E6 (Symbian, QWERTY keyboard).

I don’t think I’ll get to 53 seconds on my Android device (a T-Mobile Exhibit), but Android’s pull-down status bar makes many functions, like checking email, very quick indeed.

How did you do?