The research team at Google has found yet another way for machine learning to simplify time-intensive tasks, and this one could eventually facilitate Star Wars-like holographic video.
In a recent blog post, Google researchers demonstrated how a neural network can separate the subject of a video in the foreground from the background scene in real time.
The research is already paying off, as YouTube will make this video segmentation ability available in stories, its Snapchat-like video format currently in limited beta with select creators. Users will be able to change their video backgrounds without a green screen or other equipment.
"Our immediate goal is to use the limited rollout in YouTube stories to test our technology on this first set of effects," wrote Valentin Bazarevsky and Andrei Tkachenka, software engineers at Google who led the project. "As we improve and expand our segmentation technology to more labels, we plan to integrate it into Google's broader Augmented Reality services."
Google's computer vision approach could make a similar experience available to a much larger audience. In fact, the team designed the convolutional neural network (CNN) that does the heavy lifting to be light enough to run on mobile phones at high quality. The method achieves more than 100 frames per second (FPS) on an iPhone 7 and more than 40 FPS on a Pixel 2.
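Once such a network produces a per-pixel foreground mask, swapping the background is straightforward alpha compositing. The sketch below illustrates that final step only; it is not Google's implementation, and the hand-made mask stands in for the CNN's output:

```python
import numpy as np

def replace_background(frame, mask, background):
    """Composite a frame over a new background using a per-pixel
    foreground mask (1.0 = subject, 0.0 = background).

    In Google's pipeline the mask would come from the segmentation
    CNN; here it is simply an input, so any mask source works.
    """
    alpha = mask[..., np.newaxis]  # broadcast mask over RGB channels
    return (alpha * frame + (1.0 - alpha) * background).astype(frame.dtype)

# Toy 2x2 RGB frame: left column is the "subject", right is the scene.
frame = np.array([[[255, 0, 0], [10, 10, 10]],
                  [[255, 0, 0], [10, 10, 10]]], dtype=np.uint8)
mask = np.array([[1.0, 0.0],
                 [1.0, 0.0]])
background = np.full_like(frame, 200)  # plain grey backdrop

out = replace_background(frame, mask, background)
```

In the toy output, the subject pixels (left column) are preserved while the scene pixels (right column) are replaced by the grey backdrop, which is exactly the green-screen-free effect the Stories feature offers.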
Apple and Google have both shown how software can stand in for depth sensors when delivering augmented reality experiences. Google is now taking that approach a step further with machine learning. With an eventual eye toward its broader AR services, holographic video chat may not be far off.