Apple iPhone X Users Create Animoji Karaoke, but for Future AR, This Is Bigger Than Singing Pigs & Unicorns

Nov 6, 2017 10:12 PM

Any sufficiently cool new technology will be immediately repurposed to do something even cooler. Such is the case with Apple's iPhone X and its Animoji feature, which has led to something completely unanticipated: Animoji karaoke.

The best example of this new pastime being enjoyed by iPhone X owners comes from Australia. Sydney's Mia Harrison recorded and edited her own Animoji lip-sync version of Queen's "Bohemian Rhapsody," and it is a glory to behold.

The clip lasts only about a minute, but in that time we see the Animoji cat, chicken, pig, and fox edited together to create a hilarious take on the pop-rock classic.

But Harrison isn't alone. From a rabbit delivering Rakim's classic "Microphone Fiend," to a pig laying down the vocals to Daddy Yankee's "Gasolina," to an exquisitely choreographed rendition of En Vogue's "Hold On," led by a unicorn, Animoji karaoke is a hit.

Apple first demonstrated Animoji as a way to send cute voice messages to friends, but this fairly obvious use of the feature was kicked off by someone with an early advantage: a tech reporter with a pre-release review unit who decided to upload his own Animoji lip-syncing demo.

For those unfamiliar with this particular function on the iPhone X, it's facilitated by what Apple calls its TrueDepth camera system, which is located in the top "notch" portion of the smartphone.

The TrueDepth system consists of a flood illuminator, a proximity sensor, an ambient light sensor, a dot projector (which beams a 30,000-point mapping grid onto your face), an infrared camera (which captures distortions in that grid, and thus the three-dimensional shape of your face), a traditional 7-megapixel front camera, and a microphone.
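For developers, that depth data isn't locked away: iOS 11 exposes the TrueDepth camera through AVFoundation. The sketch below is purely illustrative (the DepthStreamer class and the queue label are invented for this example) and shows roughly how an app might stream per-frame depth maps from the front sensor array.

```swift
import AVFoundation

// Illustrative sketch only: streaming depth frames from the front
// TrueDepth camera with AVFoundation on iOS 11+.
final class DepthStreamer: NSObject, AVCaptureDepthDataOutputDelegate {
    let session = AVCaptureSession()
    let depthOutput = AVCaptureDepthDataOutput()

    func configure() throws {
        // The TrueDepth module is exposed as a single front-facing device type.
        guard let device = AVCaptureDevice.default(.builtInTrueDepthCamera,
                                                   for: .video,
                                                   position: .front) else { return }
        session.beginConfiguration()
        session.addInput(try AVCaptureDeviceInput(device: device))
        session.addOutput(depthOutput)
        depthOutput.setDelegate(self, callbackQueue: DispatchQueue(label: "depth"))
        session.commitConfiguration()
        session.startRunning()
    }

    // Called for every depth frame derived from the IR dot-grid capture.
    func depthDataOutput(_ output: AVCaptureDepthDataOutput,
                         didOutput depthData: AVDepthData,
                         timestamp: CMTime,
                         connection: AVCaptureConnection) {
        let map = depthData.depthDataMap  // per-pixel depth as a CVPixelBuffer
        _ = map  // hand this off to your own processing in a real app
    }
}
```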

The system works with Apple's new A11 Bionic chip to track more than 50 different facial muscle movements (each replicated by the selected Animoji character), enable Portrait mode selfies, and, of course, power Face ID.
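Third-party apps can tap the same facial tracking through ARKit, which boils the TrueDepth data down to roughly 50 "blend shape" coefficients, one per facial movement. The snippet below is a rough sketch, not Apple's own Animoji code (the FaceTracker class and the two coefficients it prints are just examples), of how an app might read those values.

```swift
import ARKit

// Illustrative sketch: reading ARKit's facial blend-shape coefficients,
// the same kind of signal an Animoji-style character replays on its rig.
final class FaceTracker: NSObject, ARSessionDelegate {
    let session = ARSession()

    func start() {
        // Face tracking requires a TrueDepth-equipped device.
        guard ARFaceTrackingConfiguration.isSupported else { return }
        session.delegate = self
        session.run(ARFaceTrackingConfiguration())
    }

    func session(_ session: ARSession, didUpdate anchors: [ARAnchor]) {
        guard let face = anchors.compactMap({ $0 as? ARFaceAnchor }).first else { return }
        // Each coefficient runs from 0 to 1, e.g. how far the jaw is open
        // or how much one side of the mouth is smiling.
        let jawOpen = face.blendShapes[.jawOpen]?.floatValue ?? 0
        let smileLeft = face.blendShapes[.mouthSmileLeft]?.floatValue ?? 0
        print("jawOpen: \(jawOpen), smileLeft: \(smileLeft)")
    }
}
```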

The feature is so fun to use that some Apple fans are saying that it almost justifies the $1,000 it costs to get your hands on the device.

But beyond Animoji karaoke, what Apple's TrueDepth system also hints at is that the public is ready to begin augmenting reality in real-time through their smartphones — as long as it's fun and flawless. Although Animoji is (for now) solely focused on tracking the user's face, it's not hard to imagine AR-focused Apple flipping its sensor array around to track the rest of the world in the not too distant future.

In fact, one startup design executive, Evi Meyer, CEO of 3D design app maker uMake, came up with a next-gen concept showing what the future of TrueDepth could look like: the sensor/camera array embedded in a pair of glasses.

The earliest underpinnings of Apple's sensor system were born at PrimeSense, the company Apple acquired a few years ago after it helped Microsoft bring the Kinect to market. Given that lineage, the idea that Apple would flip its TrueDepth sensor array around to begin tracking objects in the real world is not far-fetched at all.

But until that magical day when Apple (hopefully) finally rolls out a pair of AR glasses, Animoji seems like the perfect way to get mainstream consumers used to the idea of layering digital elements over the real world to create completely new experiences, singing pandas and all.

Cover image via Mia Harrison/Twitter
