Unity Labs to Bring Tools to Simplify Mixed Reality Development
At a press event this past week at the Game Developers Conference in San Francisco, California, Unity Labs, the experimental, forward-thinking arm of Unity, announced an upcoming toolset for developers in the augmented, mixed, and virtual reality space called the XR Foundation Toolkit (XRFT).
Timoni West, Principal Designer at Unity Labs, made the announcement at the early morning keynote.
> "It's a framework for XR developers that allows anyone, not just programmers, but artists and directors, random people who want to get into immersive design. We want to empower you to quickly get up to speed and start making experiences without needing to start from scratch. We want to give you the building blocks for interaction and locomotion and everything else you need."
This new set of tools is designed for post-reality developers and enthusiasts alike, to greatly reduce the complexity of getting started and to eliminate processes that would otherwise have to be repeated for every project. The feature set planned for the initial release of the XRFT is as follows:
- cross-platform controller input
- customizable physics
- AR/VR-specific shaders
- object snapping and building systems
- debugging and profiling tools
- support for all major AR and VR hardware systems
XRFT was not the only announcement that Unity Labs made at the GDC keynote. EditorVR, the company's room-scale, VR-based design system, was shown off last year and released as an alpha in December; it has since been downloaded over 6,000 times. In the months since its release, a number of developers have created their own extensions to the system.
To further foster these creative approaches, Unity is holding a contest with a cash prize (the amount is currently undisclosed); the winner will be showcased and promoted at Unity's Vision VR/AR Summit 2017 this May.
Looking at the EditorVR extensions showcased at the keynote, one thing is obvious: between deep learning-based gesture-mapping macro systems, VR-based mesh editing, and multiuser real-time scene editing, the level of competition is already high.
I recently got an HTC Vive, and as soon as I have time to hook it up, EditorVR will be the first thing I try. The idea of creating HoloLens software in VR just makes me tingle all over.
Do you develop with Unity? What kinds of tools could Unity build to simplify your workflow? Let us know in the comments below.