As the week of the Game Developers Conference hits its midpoint, we've already had some major announcements hit the AR space. The specific timing of these announcements is thanks in part to a conference within a conference called VRDC, aimed at VR, AR, and MR developers. And while the week is hardly over, the announcement that is still having a big effect on the developer population is the reveal of the Creator Portal for the long-awaited Magic Leap One device.
After a few years of teasing and hype around a product many considered vaporware, we got our first look at the device in December. And with this week's software reveal, we now get a deeper look at many of its internal features, at least from a development standpoint. This is important, as it gives us some idea of what the device is capable of beyond assumptions based on hardware specs.
For nerds on the level of Next Reality, there's a lot to unpack in the Lumin SDK, the development documentation, the API itself, a Magic Leap simulator, and tools for Unity and Unreal. So while we surely haven't discovered every juicy bit yet, we do want to bring our readers along on this journey. And to start, we're going to look at the Magic Leap One's gesture system.
One thing Microsoft has done really well with the HoloLens, and something that has helped it stand apart from the other smartglasses that have come out, is its inclusion of a gesture system. This system gives the user the ability to interact with virtual objects in a way far more natural than a point-and-click interface: simply using our hands directly. Unfortunately, up to this point, the HoloLens only has a few gestures: Air-tap, Air-tap and hold, and the gesture known as Bloom.
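For context, wiring up those few gestures in a Unity HoloLens project looks roughly like the minimal sketch below, which uses Unity's GestureRecognizer from the UnityEngine.XR.WSA.Input namespace; the tap handler here just logs, standing in for whatever selection logic an app would actually run.

```csharp
using UnityEngine;
using UnityEngine.XR.WSA.Input; // Unity's HoloLens gesture input namespace (2017.2+)

public class AirTapHandler : MonoBehaviour
{
    private GestureRecognizer recognizer;

    void Start()
    {
        recognizer = new GestureRecognizer();
        // Only tap and hold are requested here; Bloom is reserved by the system shell
        // and can't be consumed by apps.
        recognizer.SetRecognizableGestures(GestureSettings.Tap | GestureSettings.Hold);
        recognizer.Tapped += OnTapped;
        recognizer.StartCapturingGestures();
    }

    private void OnTapped(TappedEventArgs args)
    {
        // Placeholder: a real app would select whatever hologram the gaze cursor is on.
        Debug.Log("Air-tap detected");
    }

    void OnDestroy()
    {
        recognizer.Tapped -= OnTapped;
        recognizer.StopCapturingGestures();
        recognizer.Dispose();
    }
}
```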
Starting with a barebones UX approach has arguably helped HoloLens developers and designers find interface solutions that are more natural, but it can also limit the complexity of potential workflows. So while this was likely a calculated minimalist approach on Microsoft's part, custom gestures have become a common request from HoloLens developers chatting in forums. At this point, it's become something of a running joke. (Unity users know what I'm talking about... nested prefabs, anyone?)
It would appear that Magic Leap has been watching this pretty closely, as the company has come out of the gates ready for battle. According to the documentation for the Magic Leap One, the device will initially have eight gestures (see below). This more than doubles Microsoft's current offering.
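The documentation describes these as hand "key poses" that an app can check for either hand. Here's a minimal sketch of what polling for one of them could look like in Unity; the namespace, the MLHands calls, and the MLHandKeyPose enum below are assumptions based on the naming in Magic Leap's documentation, not verified API, so expect the real Lumin SDK bindings to differ in the details.

```csharp
using UnityEngine;
using UnityEngine.XR.MagicLeap; // assumed namespace for the Lumin SDK's Unity bindings

// Hypothetical key-pose watcher. The MLHands / MLHandKeyPose names follow the pattern
// in Magic Leap's published documentation but are assumptions, not confirmed API.
public class KeyPoseWatcher : MonoBehaviour
{
    void Start()
    {
        MLHands.Start(); // assumed call to begin hand tracking
    }

    void Update()
    {
        // Poll each hand's current key pose once per frame.
        if (MLHands.Left.KeyPose == MLHandKeyPose.Fist ||
            MLHands.Right.KeyPose == MLHandKeyPose.Fist)
        {
            Debug.Log("Fist detected: grab the hologram under the cursor, for example.");
        }
    }

    void OnDestroy()
    {
        MLHands.Stop(); // assumed shutdown call
    }
}
```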
Of course, a number of questions about functionality come to mind that simply can't be answered until we see the device in action. How fast does the device see and react to a gesture? Can it handle chained gestures for more complex tasks? Can it see and understand two hands? We just don't know yet.
On the topic of two-handed gesture manipulation, it's worth noting that a pull request recently appeared in the Mixed Reality Toolkit (the open-source toolkit for HoloLens development), which is supposed to bring two-handed manipulation to the HoloLens. The HoloLens seeing and responding to the actions of two hands at once could set it further apart from the competition.
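To picture what two-handed manipulation means in practice: with two tracked hand positions, an app can scale (or rotate) a hologram based on how the vector between the hands changes. The sketch below is a generic illustration of that idea in Unity terms, not the toolkit's actual implementation; the hand-position fields are stand-ins for whatever the platform's input API provides.

```csharp
using UnityEngine;

// Generic two-hand scaling illustration: not the Mixed Reality Toolkit's code,
// just the core idea behind two-handed manipulation.
public class TwoHandScale : MonoBehaviour
{
    // Stand-ins for tracked hand positions supplied by the platform's input API.
    public Vector3 leftHand;
    public Vector3 rightHand;

    private float startDistance;
    private Vector3 startScale;

    // Call when both hands begin the gesture (e.g., both pinch the same object).
    public void BeginManipulation()
    {
        startDistance = Vector3.Distance(leftHand, rightHand);
        startScale = transform.localScale;
    }

    void Update()
    {
        if (startDistance <= 0f) return;

        // Scale by the ratio of the current hand separation to the separation
        // at the moment the manipulation started.
        float ratio = Vector3.Distance(leftHand, rightHand) / startDistance;
        transform.localScale = startScale * ratio;
    }
}
```

The same pattern extends to rotation by tracking the angle of the hand-to-hand vector instead of its length.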
But regardless of how the Magic Leap One responds to control input, having eight gestures out of the gate is a pretty solid base for developers to work from. It will allow for far more complexity in the spatial computing workflow—for better or for worse.
Cover image via Magic Leap