After dipping its toes into the AR cloud arena last year, Ubiquity6 is now jumping in with both feet this year.
Late last year, the startup launched Display.land, a social network for sharing and editing 3D digital scans of real-world locations. Now, the company is expanding Display.land with Display.land Studio.
The new web-based tool turns 3D digital objects and their analog counterparts into game spaces for persistent and multi-player immersive experiences that developers can deploy via mobile, desktop, and AR and VR headsets.
Interested developers can sign up for early access to the tool on Ubiquity6's website. In an interview with Next Reality, Ubiquity6 CEO Anjney Midha confirmed that beta access to Display.land Studio is planned for Spring 2020, with general access following in the summer.
The Display.land Studio tool offers a visual programming interface for building AR experiences on top of the 3D models captured via Display.land, with a coding option available for those well-versed in HTML and JavaScript. Developers can use built-in templates to design and lay out game levels, modify the textures of captured environments, and add photos, videos, and 3D objects to a scene. Scripting and customized components enable developers to tweak logic and gameplay. Once complete, developers can publish experiences to the real world via the provided positioning service.
Because each 3D-scanned clone is tracked to centimeter accuracy relative to its real-world parent, content placed in Studio appears in the exact same location for multiple participants in the corresponding AR experience. For example, with a virtual basketball goal anchored in a real park, mobile users can toss virtual basketballs into the goal while remote users on a desktop browser or a VR headset browser play along in the same game.
While the Studio is limited to early access participants, the bones of the tool are already visible in the wild via the editing mode that launched when Display.land graduated to general availability. Display.land users can open one of their 3D scans, click share to retrieve a URL link to that model, and then open that link in a desktop browser. From there, users can log in to access the edit mode, which currently offers the same options (namely cropping the object, placing virtual items, and adding notes to the scene) available in the mobile app.
A sample of one of these edited worlds is Sk8orDie, a 3D scan of a skate park in San Francisco. The digital twin of the concrete bowl now features stars, coins, and other video game-inspired objects.
The next phase of the Display.land experience involves support for localization of virtual content added in Display.land Studio, as well as an AR mode in the app for users to view that virtual content in their physical space. Early access participants for Studio, however, will have advance access to these features.
Niantic and 6D.ai have gone the SDK route in deploying their AR cloud platforms for integration in mobile apps, similar to how Apple and Google have provided the ARKit and ARCore toolkits for app developers.
Conversely, Ubiquity6 has followed a path similar to those blazed by Snapchat and Facebook. The 3D scanning of Display.land not only acts as a platform for user-generated content but also supplies the digital twin for anchoring persistent content and facilitating multiplayer experiences. According to a blog post from Ubiquity6, Display.land users have captured more than 92,000 scans in 142 locations.
Much like Lens Studio and Spark AR, Display.land Studio provides the programming tool for developers to support the users in the Display.land community.
It's too early to say whether Ubiquity6's unique approach gives it an advantage in the AR cloud race, but the viral propensity of AR content generated via Snapchat and Instagram, compared with content from apps running on ARKit or ARCore, appears to tilt the odds in Ubiquity6's favor.
Cover image via DisplaylandHQ/YouTube