Generally speaking, with modern devices, the simpler an interface is to navigate, the more successful the product.
The product could be a coffee maker, an electric piano, a smartphone, or a computer operating system. This very idea was one reason Windows grew so quickly after the reign of the DOS prompt, and why Apple became the smartphone powerhouse it did.
Simply put, life is complicated and users want easy. According to a report by Forbes, a survey of 97,000 customers put together by the Customer Experience Board found that minimizing customer effort was a major factor in customer loyalty.
One thing the booming augmented reality space offers us as developers is a complete reset switch, especially in terms of user interfaces. We can start over completely and approach the problem from a whole new perspective.
For this renewal to be successful, we have to step away from the computers and look at how we interact with analog objects — you know, those things we touch with our hands in the real world. I am sure a few of you still know what that means.
Of course, this is a grand oversimplification.
We are not talking about hitting a button on a remote control to turn on a device; we are talking about completing complex tasks in a way more akin to using tools than a mouse or trackpad. When we were trapped in a world of 2D panel screens, the windows idea made sense. As a result, it became the driving philosophy behind just about every major operating system that exists. But we are no longer trapped in that 2D panel world.
With this in mind, I have done quite a few experiments in different types of interface design with the HoloLens and other AR-based devices over the last year and a half. While hardly perfect, one of my favorites is the basis for this tutorial series.
Until eye tracking is in all of the head-worn AR devices, gaze, or the direction your head is pointing, is a big factor in the control schemes we create. Most people are using this as the basis for their designs, but it all feels like an extension of the window philosophy, and a very clunky and uncomfortable one at that. In many ways, it feels like developers are just taking window-based ideas and applying them to 3D. In terms of computers, maybe that is all they have known, but maybe it is also a bit of laziness.
My basic idea is to make tools as easy to reach as possible, with the user's gaze being the center of the action. If they look at an object, the usable functions for that object appear close to the gaze cursor. Of course, head locking is a no-no, so we want to avoid that.
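To make that idea concrete, here is a minimal sketch in Unity C#, the engine this series is built around. The names here (such as toolMenu) are hypothetical placeholders, not the actual objects we will build later: the point is simply that the gaze ray follows the head's forward direction, and the menu gets placed in world space near whatever the gaze hits, rather than being locked to the head.

```csharp
using UnityEngine;

// A rough sketch of the gaze-centered menu idea, not the final implementation:
// cast a ray from the head along its forward direction, and when it hits an
// object, world-lock a tool menu near the hit point so it does not follow the head.
public class GazeMenuSketch : MonoBehaviour
{
    public GameObject toolMenu;      // hypothetical object holding the tool UI
    public float maxGazeDistance = 10f;
    public float menuOffset = 0.1f;  // pull the menu slightly back toward the user

    void Update()
    {
        Transform head = Camera.main.transform;

        RaycastHit hit;
        if (Physics.Raycast(head.position, head.forward, out hit, maxGazeDistance))
        {
            // Place the menu just in front of the focused object, facing the user.
            // Because the position is set in world space, it stays put when the
            // user turns their head, which avoids head locking.
            toolMenu.SetActive(true);
            toolMenu.transform.position = hit.point - head.forward * menuOffset;
            toolMenu.transform.rotation = Quaternion.LookRotation(head.forward);
        }
        else
        {
            toolMenu.SetActive(false);
        }
    }
}
```

Treat this only as a preview of the approach; later parts of the series build out the real gaze handling and tool UI step by step.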
Here is how the series breaks down for now; there will be more to come.
- Part 1: A Simple Setup
- Part 2: The System Manager
- Part 3: Focus & Materials
- Part 4: Creating Objects from Code
- Part 5: Create the Tool UI Elements
- Part 6: Delegates & Events
- Part 7: Unlocking the Menu Movement
- Part 8: Raycasting and the GazeManager
- Part 9: Moving Objects
- Part 10: Scaling Objects
- Part 11: Rotating Objects
- Part 12: Highlighting UI Elements
- Part 13: Making Scale Mode Selectable
This series will be a big one. While it has been labeled HoloLens Dev 101, as part of a collection, it is hopefully a bit more fulfilling than a beginner tutorial. It will assume you have made it through the setup and install portions of our earlier series.
Now, let's do this.