Google Researchers Develop Method for Capturing 'Relightable' Volumetric Video

As the demand for realistic volumetric video for AR experiences begins to grow (along with the available facilities and services for capturing it), researchers at Google have figured out how to improve upon the format.

The team has devised a system, called "The Relightables," which consists of a spherical cage holding 331 programmable LED lights and approximately 100 cameras designed to capture volumetric video.

According to the team, the Relightables system introduces three innovations. First, the system's cameras operate as depth sensors, capturing 12.4MP depth maps of the subject. Second, the team built a geometric and machine learning reconstruction pipeline that synthesizes video from the captured data.
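
To make the depth capture step concrete, here is a minimal sketch of how a single depth map can be back-projected into a 3D point cloud with a basic pinhole camera model. This is a generic illustration of the kind of geometry recovery a reconstruction pipeline starts from, not the team's actual code; the NumPy implementation and the intrinsic parameters fx, fy, cx, cy are assumptions made for the example.

```python
import numpy as np

def depth_to_points(depth, fx, fy, cx, cy):
    """Back-project a depth map (H x W, 0 = invalid) into a 3D point
    cloud using a simple pinhole camera model.

    Illustrative sketch only; fx, fy (focal lengths in pixels) and
    cx, cy (principal point) are assumed known per camera.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))  # per-pixel coordinates

    z = depth                      # depth along the optical axis
    x = (u - cx) * z / fx          # unproject horizontal pixel offset
    y = (v - cy) * z / fy          # unproject vertical pixel offset

    points = np.stack([x, y, z], axis=-1)   # (H, W, 3)
    return points[depth > 0]                # keep only valid depth samples
```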

Image by augmentedperception/YouTube

Finally, the system captures reflectance maps, time-synced to the depth maps, that allow the subject's lighting to be adjusted to match an AR or VR scene rather than preserving the lighting of the original studio. This information is obtained by alternating flashes of two different color gradient patterns, which the system uses to infer the subject's reflective properties during video reconstruction.
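
The idea behind the gradient patterns can be illustrated with a short, simplified sketch. It assumes two already-aligned, linear RGB frames captured under complementary color-gradient lighting, with the R/G/B channels encoding the X/Y/Z illumination gradients; this is a textbook-style approximation for illustration, not the team's reconstruction code.

```python
import numpy as np

def estimate_albedo_and_normals(img_grad, img_inv_grad, eps=1e-6):
    """Rough per-pixel reflectance estimate from two frames lit by
    complementary color-gradient patterns (illustrative sketch only).

    img_grad, img_inv_grad: (H, W, 3) linear RGB arrays of the same,
    already-aligned subject under the two gradient patterns.
    """
    # Summing the complementary frames approximates a fully lit image,
    # which serves as a crude diffuse albedo estimate.
    albedo = img_grad + img_inv_grad

    # The normalized difference encodes, per color channel, how strongly
    # each surface point faces the +X/+Y/+Z illumination directions.
    ratio = (img_grad - img_inv_grad) / (albedo + eps)   # roughly in [-1, 1]

    # Treat the per-channel ratios as unnormalized normal components.
    norms = np.linalg.norm(ratio, axis=-1, keepdims=True)
    normals = ratio / (norms + eps)

    return albedo, normals
```

In broad strokes, per-pixel estimates like these are what let a renderer relight the captured performer under new environment lighting instead of baking in the studio's illumination.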

Having recently published the research via the Association for Computing Machinery, the team is presenting its findings at SIGGRAPH Asia 2019.

"While significant progress has been made on volumetric capture systems, focusing on 3D geometric reconstruction with high-resolution textures, much less work has been done to recover photometric properties needed for relighting," wrote the team in its abstract for the paper.

"In contrast, a large body of work has addressed relightable acquisition for image-based approaches, which photograph the subject under a set of basis lighting conditions and recombine the images to show the subject as they would appear in a target lighting environment. However, to date, these approaches have not been adapted for use in the context of a high-resolution volumetric capture system. Our method combines this ability to realistically relight humans for arbitrary environments, with the benefits of free-viewpoint volumetric capture and new levels of geometric accuracy for dynamic performances."

Image by augmentedperception/YouTube

The volumetric video segment has seen steady growth over the past two years. In addition to Microsoft's Mixed Reality Capture Studios, Sony has established its own studio, while Verizon acquired Jaunt for its dive into volumetric capture technology.

The use cases for volumetric video in augmented reality experiences, both now and in the future, are plentiful. The New York Times has demonstrated how volumetric video can enhance augmented reality content for immersive storytelling, while 8th Wall has extended support for the format to its web-based AR platform.

Meanwhile, Magic Leap's acquisition of Mimesys and its real-time holographic video calling technology shows how volumetric video capture will change the way people communicate in the near future.

But Google's Creative Lab, in collaboration with Opera Queensland, has devised yet another use case: virtual opera. The prototype experience, which debuts at SIGGRAPH, features three performers captured via the Relightables system.

With Google's contribution, the additional realism of 3D content from Relightables volumetric video will make it even harder to distinguish AR from reality.

Cover image via augmentedperception/YouTube

1 Comment

I downloaded and reviewed the paper and found myself very impressed with the parts that I understood. This is undoubtedly a major accomplishment.

It's not clear where the productized version will get used: VR games are an obvious area, and major Hollywood-type motion picture studios are another. The cost, both computational and in actual funds, will be non-trivial. In contrast, there are a few systems, such as Tatavi, that, while significantly less advanced than what the Google team has built, seem to be a lot more portable and possibly less intensive and less expensive.
