Tech Demo Shows the Real Power of Quest Pro Face Tracking
Quest Pro’s face-tracking capabilities will be quickly put to use to make Meta’s avatars more expressive, but next-gen avatars stand to benefit much more from the new tech.
One of Quest Pro’s big new features is a face-tracking system that uses internal cameras to sense the movement of your eyes and some parts of your face. Combined with a calibration-free machine learning model, the headset takes what it sees and turns it into inputs that can drive the animation of any avatar.
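For developers curious about the shape of that pipeline, here's a rough, purely illustrative sketch in Python. None of the names below come from Meta's SDK; they are stand-ins meant to show the general flow from camera frames, to generic expression values, to avatar animation.

```python
# Purely illustrative sketch (not Meta's SDK): internal camera frames feed a
# calibration-free model, which outputs generic expression weights in the
# 0.0-1.0 range; those weights then drive whatever avatar rig is loaded.

from dataclasses import dataclass, field


@dataclass
class ExpressionFrame:
    # Avatar-agnostic signals, e.g. {"brow_furrow": 0.7, "cheek_puff_left": 0.2}
    weights: dict = field(default_factory=dict)


class StubFaceModel:
    """Stand-in for the on-headset machine learning model."""
    def infer(self, camera_frames) -> dict:
        # A real model would estimate these from the eye/face camera images.
        return {"jaw_open": 0.4, "brow_furrow": 0.7, "cheek_puff_left": 0.2}


class StubAvatarRig:
    """Stand-in for an avatar whose animation channels match the signal names."""
    def __init__(self):
        self.channels = {}

    def set_channel(self, name: str, value: float) -> None:
        self.channels[name] = value  # in practice this would drive a blendshape


def track_and_animate(camera_frames, model, rig) -> None:
    """One frame of the loop: images in, expression weights out, rig updated.
    Only the abstract weights leave this step -- not the raw images."""
    frame = ExpressionFrame(weights=model.infer(camera_frames))
    for signal, value in frame.weights.items():
        rig.set_channel(signal, value)


rig = StubAvatarRig()
track_and_animate(camera_frames=None, model=StubFaceModel(), rig=rig)
print(rig.channels)  # {'jaw_open': 0.4, 'brow_furrow': 0.7, 'cheek_puff_left': 0.2}
```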
In the near term, this will be put to use with Meta's existing avatars. And while it certainly makes them more expressive, they still look somewhat goofy.
This is likely the result of the current Meta avatar system not being built with this level of face-tracking in mind. The ‘rigging’—the underlying animation framework of the model—seems not quite fit for the task. Grafting Quest Pro’s face-tracking inputs onto the current system isn’t really doing justice to what it’s actually capable of.
Luckily, Meta has built a tech demo that shows what's possible when an avatar is designed with Quest Pro's face-tracking in mind (and when almost all of the headset's processing power is dedicated to rendering it).
Yes, it's still a bit shaky, but every movement you're seeing here is being driven by the user making the same motions, including things like puffing out the cheeks or moving the mouth from one side to the other. On the whole, it's a much more complete representation of a face, one that I'd argue manages to avoid the uncanny valley.
I got to try this demo for myself in my recent hands-on with Quest Pro, where I looked into a mirror and appeared as this character (which Meta calls Aura). I came away really impressed that, even with no special calibration, the face I saw in the mirror seemed to mimic whatever motions I could think to make with my face.
I was especially drawn to the detail in the skin. If I squinted and scrunched up my nose, I could see the skin around it bunch up realistically, and the same happened when I raised my brow. These subtle details, like the crease in the cheeks moving with the mouth, add a lot to the impression that this isn't just an object in front of me, but something with a living being behind it.
Whether the expressions actually look like me when I'm the one behind the mask is another question. Since this avatar's face doesn't match my own, it's tough to say. But the fact that the movements are at least plausibly realistic is an important first step toward virtual avatars that feel natural and believable.
Meta says it will release the Aura demo as an open-source project so developers can see how the face-tracking inputs have been attached to the avatar. The company also says developers will be able to use a single toolset for driving humanoid avatars and non-human avatars, like animals or monsters, without needing to tweak every avatar individually.
Meta says developers will be able to tap into a face-tracking API that provides values corresponding to FACS (the Facial Action Coding System), a well-recognized system for describing the movement of different muscles in the human face.
This system is effective not only for representing faces, but also as a privacy barrier for users. According to Meta, developers can't actually get access to the raw images of the user's face. Instead they get a “series of zero-to-one values that correspond with a set of generic facial movements, like when you scrunch your nose or furrow your eyebrows,” Meta says. “These signals make it easy for a developer to preserve the semantic meaning of the player’s original movement when mapping signals from the Face Tracking API to their own character rig, whether their character is humanoid or even something more fantastical.”
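To illustrate how such zero-to-one signals might be retargeted in practice, here's a hedged Python sketch. The signal names and rig mappings are invented for the example (they aren't the Face Tracking API's actual identifiers), but the structure shows how the same generic values could drive either a humanoid rig or a fantastical one while keeping their semantic meaning.

```python
# Hypothetical zero-to-one signals of the kind described above.
# The keys are illustrative; the real API's signal names may differ.
tracked_signals = {
    "nose_scrunch": 0.8,
    "brow_furrow": 0.6,
    "cheek_puff_left": 0.1,
}

# Each rig declares how a generic signal drives its own blendshapes.
# The same signal can fan out to very different features per character.
humanoid_rig_map = {
    "nose_scrunch": [("noseWrinkleL", 1.0), ("noseWrinkleR", 1.0)],
    "brow_furrow": [("browDownL", 1.0), ("browDownR", 1.0)],
    "cheek_puff_left": [("cheekPuffL", 1.0)],
}

monster_rig_map = {
    # A fantastical character can reinterpret the signal while keeping its
    # meaning, e.g. flaring nostril plates instead of wrinkling skin.
    "nose_scrunch": [("nostrilFlare", 1.2)],
    "brow_furrow": [("hornTilt", 0.5), ("browRidgeDrop", 1.0)],
    "cheek_puff_left": [("throatSacL", 0.8)],
}


def apply_signals(signals: dict, rig_map: dict, set_blendshape) -> None:
    """Retarget generic zero-to-one signals onto a specific rig's blendshapes."""
    for signal, value in signals.items():
        for blendshape, scale in rig_map.get(signal, []):
            set_blendshape(blendshape, min(1.0, value * scale))


# Example: print what each rig would receive for this frame.
apply_signals(tracked_signals, humanoid_rig_map,
              lambda name, w: print(f"humanoid {name} -> {w:.2f}"))
apply_signals(tracked_signals, monster_rig_map,
              lambda name, w: print(f"monster  {name} -> {w:.2f}"))
```

The point of the intermediate mapping table is that the avatar author, not the tracking system, decides what "scrunching your nose" means for their character, which is how one set of signals can serve both realistic humans and monsters.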
Meta claims even the company itself can’t see the images captured by the headset’s cameras, either internal or external. They are processed on the headset and then immediately deleted, according to the company, without ever being sent to the cloud or to developers.