The year 2019 in augmented reality started with the kind of doom and gloom that usually signals the end of something. Driven in large part by the story we broke in January about the fall of Meta, along with similar flameouts by ODG and Blippar, the virtual shrapnel of AR ventures that took a wrong turn has already marred the landscape of 2019.
And while layoffs across various industries, as well as a generally uncertain economic outlook, have frayed the nerves of tech analysts and startups alike, I have an alternative view of what we've seen in the first few months of the new year.
Like many emerging industries before it, AR is in the midst of a correction. And, sooner than some think, this correction will lead to the AR reality many have envisioned in years past.
Those who were just a little too early and ran out of fuel have nevertheless seen their initial dreams validated, and will try again. Some who toiled in the skunkworks of larger AR ventures and began to see daylight are launching their own AR ambitions. And players who could afford to spend many millions experimenting with AR are finally realizing where their focus should lie, and are redirecting their resources toward the foundations of their role in the next phase of computing.
The storylines aren't as neat and tidy as some would like. Immersive computing is one of the deepest and still most inchoate spaces in tech, and to truly grasp what's happening you need to lift your head out of your particular niche within it and take in the entire picture.
What do we see? Well, yearly predictions are easy; just follow the money. What's more difficult to discern, and far more valuable, is identifying the areas where the critical moves will be made. How do all the divergent and intersecting lines come together to form the polygons and planes that will resolve into the simulacra making up our new computing reality?
Here are a few hints...
Let's start with the simple stuff: Categories of emergent technologies.
Despite decades of research and development, both virtual reality and augmented reality can be reasonably called edge technologies. Virtual reality is the technology that has penetrated the mainstream most vigorously in recent years, primarily through the efforts of companies like Facebook (Oculus), HTC, Google, and others.
Meanwhile, because of the similarities of their immersive computing dynamics, and the focus on visual rather than tactile interfaces, the two categories are frequently mentioned in the same breath. One very famous research paper describes the two as existing along the same spectrum. But despite the similarities, the longer you investigate both in any meaningful depth, the more you begin to realize how different these two spaces are.
One involves becoming a different person, in a different reality, often with a different face and body. Within this dynamic, distance means nothing. The laws of physics mean nothing. Now that No Man's Sky has been ported to VR, the procedurally generated virtual universe is your oyster (making the notion that this may all be a simulation a more tantalizingly realistic concept).
On the other hand, augmented reality, at least for now, primarily encourages us to interact with the real world. Whether the function is entertainment or enterprise work, the AR dynamic is usually tied to something rooted in the real world. That's its strength. Naturally, many have considered merging these two technologies, and the similarities of the two make such a prospect incredibly attractive. The idea of switching between virtual worlds and the augmented real world seamlessly with just a vocal command, or the blink of an eye, seems like an incredibly powerful prospect.
But I suspect that the people who most promote such a hybrid device haven't spent enough time in both dynamics, separately, for extended periods, examining the practicalities or lack thereof, as well as the various long-term use cases.
You can find some of these flawed approaches in AR experiences that attempt to mirror the dynamics of VR apps. Often, using VR approaches (interface, movement, game mechanics, etc.) in AR fails to harness the true powers of AR by severing its roots in real-world interaction. I've tested hundreds of VR and AR apps over the years, and it's common to encounter an otherwise polished AR app that lets you see the real world, but keeps you stuck in one place or limited area, as you might be in VR.
The original Google Glass may have been too weird for some, but at least it was focused on pushing you through the real world as opposed to keeping you stationary. But as both spaces mature, we will gradually see a decoupling of the two dynamics.
Facebook's new Oculus Quest may eventually add a pass-through AR component, but it's not a selling feature of the new device. Despite its limitations — and the possibly problematic association with Facebook's data privacy issues — the Oculus Quest is the best chance VR has to go truly mainstream. And if it does, it's not likely that pass-through will be part of that device's near-term future as a primary feature set.
Yes, Facebook's Michael Abrash has shown off renderings of AR/VR hybrid devices as the dream, but more than dreams, what drives product features in the real world is consumer demand. In the near-term, I don't see that urgent demand for a hybrid device. (In fact, I think the eventual fate of VR will lean more toward high-end, specialized VR systems, but that's another story.)
Sometimes limitations aren't the problem, but the solution. By focusing on AR as a completely different discipline from VR, the possibility exists to make far more progress when attempting to develop engaging and repeat-use apps and experiences.
All that said, I do envision a day when AR/VR hybrid devices might work seamlessly to give us even more virtual superpowers. For example, Google Earth VR is one of the most engaging uses of VR. With the recent addition of Street View to the app, it's the closest thing we have to a transporter deck today.
Imagine traveling to Paris in Google Earth VR, strolling its streets, and then deciding you'd like to turn on the AR mode and actually interact, in real time, with other people (equipped with AR glasses) who are actually walking the streets of Paris, all facilitated by an AR cloud and 5G speeds. You might appear as a ghost-like avatar to them, but your virtual ghost could wave down a real person on the Parisian street to ask about a local patisserie. Location data and the AR cloud would allow you and the localized person to behave as if you're both in the same city. And when done, you could switch back into VR mode, bring up a giant Earth globe and transport your avatar to Chile to investigate the best empanada place.
This reality isn't just possible in the future; it's likely. AR apps like Spatial are already giving us a hint of what it might look like.
But in the near-term, to bolster both platforms, the best path forward is likely baby steps into each, educating the mainstream about the unique capabilities of each. By treating the two areas as separate disciplines, we can avoid the identity crisis suffered by some of history's failed hybrid devices and spur the growth of both AR and VR as each moves toward the same inevitable destination: interactive "everything."
Perception: AR and VR are two sides of the same coin, hybrid AR/VR devices are inevitable and the goal.
Next Reality: AR and VR share developers but have different users and uses, and will see increasing separation in the near-term.
This post was created as a part of our Future of AR series. View the whole series.