
The making of Enemies: The evolution of digital humans continues with Ziva

December 2, 2022 in Industry | 13 min. read

From The Heretic’s Gawain to Louise in Enemies, our Demo team continues to create real-time cinematics that push the boundaries of Unity’s capabilities for high-fidelity productions, with a special focus on digital humans.

The pursuit of ever more realistic digital characters is endless. Since the launch of Enemies at GDC 2022, we have continued our research and development into better, more believable digital human creation, in collaboration with Unity’s Graphics Engineering team and commercially available service providers specializing in that area.

At SIGGRAPH 2022, we announced our next step: replacing the heavy 4D data playback of the protagonist’s performance with a lightweight Ziva puppet. This recent iteration sees the integration of Ziva animation technology with the latest in Unity’s graphics advancements, including the High Definition Render Pipeline (HDRP) – all with the aim of further developing an end-to-end pipeline for character asset creation, animation, and authoring.

Along with the launch of a new strand-based Hair Solution and updated Digital Human package, the Enemies real-time demo is now available to download. You can run it in real-time and experience it for yourself, just as it was shown at Unite 2022.

Gawain, Unity’s first digital human, as featured in the 2019 demo, The Heretic

What’s new in Enemies?

Animation with Ziva

While the cinematic may not appear too different from the original, its final rendered version shows how the integration of Ziva technology has brought a new dimension to our protagonist.

Ziva brings decades of experience and pioneering research from the VFX industry to enable greater animation quality for games, linear content production, and real-time projects. Its machine learning (ML)-based technology helps achieve extraordinary realism in facial animation, and also for body and muscle deformations.

To achieve the level of realism in Enemies, Ziva used machine learning and 4D data capture, which records an actor’s face in motion over time and goes beyond traditional static 3D scanning. The static, uneditable 4D-captured facial performance has now been transformed into a real-time puppet with a facial rig that can be animated and adjusted at any time – all while maintaining high fidelity.

Our team built on that 4D capture data and trained a machine-learned model that could be animated to create any performance. The end result is a 50 MB facial rig that has all the detail of the 4D captured performance, without having to carry its original 3.7 GB of weight.

This technology means that you can replicate the results with a fraction of the animation data, creating real-time results in a way that 4D does not typically allow.
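To make the data savings concrete, here is a deliberately simplified sketch, in Unity C#, of how a parametric puppet can replay a performance. This is an illustration of the general idea only, not Ziva’s actual solver (which is nonlinear and ML-based); the deformation bases and the ParametricFacePuppet class are hypothetical. The point is that the heavy data (the learned bases) is stored once, while each frame reduces to a short vector of weights – which is why a 50 MB rig can stand in for 3.7 GB of raw 4D playback.

```csharp
using UnityEngine;

// Conceptual sketch only: a minimal linear deformation model, NOT Ziva's
// actual ML-based solver. It illustrates why a trained rig is so much
// smaller than raw 4D data: instead of storing every vertex of every
// frame, you store a fixed set of learned deformation bases once and
// replay each frame as a short vector of weights.
public class ParametricFacePuppet : MonoBehaviour
{
    [SerializeField] MeshFilter meshFilter;

    Vector3[] neutral;   // rest-pose vertex positions, stored once
    Vector3[][] bases;   // K learned deformation bases, stored once
    Vector3[] scratch;

    void Awake()
    {
        neutral = meshFilter.mesh.vertices;
        scratch = new Vector3[neutral.Length];
        // In a real pipeline, `bases` would be loaded here from the
        // trained model asset.
    }

    // weights: one small float array per frame (K values, e.g. 200-300),
    // versus one full position per vertex per frame in raw 4D playback.
    public void Evaluate(float[] weights)
    {
        if (bases == null) return; // model not loaded in this sketch

        System.Array.Copy(neutral, scratch, neutral.Length);
        for (int k = 0; k < weights.Length; k++)
        {
            if (Mathf.Approximately(weights[k], 0f)) continue;
            Vector3[] basis = bases[k];
            for (int v = 0; v < scratch.Length; v++)
                scratch[v] += weights[k] * basis[v];
        }
        meshFilter.mesh.vertices = scratch;
    }
}
```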

In order to achieve this, Unity’s Demo team focused on:

Creating the puppet

  • To create this new version of Louise, we worked with the Ziva team, who handled the machine learning workflow using a preexisting 4D data library. Additional 4D data was collected from a new performance by the original Enemies actor – because the model builds on the existing library, we only needed to capture a few additional expressions. This is one of the unique advantages of our machine learning approach.
  • With this combined dataset, we trained a Ziva puppet to accurately reproduce the original performance. We could alter this performance in any way, ranging from tweaking minute details to changing the entire expression.
  • Because the machine learning generalizes from the 4D capture, any future performance can run on any 3D head – we demonstrated this by applying a single performance to multiple faces of varying proportions. This makes it easier to expand the range of performances to multiple actors and real-time digital humans in future productions.

The puppet’s control scheme

  • Once the machine learning was completed, we had 200–300 parameters that, used in combination and at different weights, could recreate everything we had seen in the 4D data with incredible accuracy. We didn’t have to worry about a hand-animated performance looking different in the hands of different animators: the persona and idiosyncrasies of the original actor come through no matter how we choose to animate the face.
  • Because Ziva is based on deformations rather than an underlying facial rig, we could manipulate even the smallest detail; the trained face uses a control scheme developed specifically to take advantage of the fidelity of the machine-learned parameters.
  • At this point, creating a rig is a relatively flexible process: we simply tap into those machine-learned parameters, which in turn deform the face. There are no joints in a Ziva puppet besides the basic logical face and neck joints. A conceptual sketch of this control layering follows below.
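The sketch below shows that layering. It is hypothetical – Ziva does not publish this API, and every name here is invented for illustration – but it captures the architecture: a handful of animator-facing controls fan out into the hundreds of trained parameters that do the actual deforming, which is what keeps hand animation on-model.

```csharp
using UnityEngine;

// Hypothetical sketch of the control layering: a few animator-facing
// controls fan out into the 200-300 machine-learned parameters that
// drive the deformation. None of these names come from Ziva's real API.
public class FaceControlRig : MonoBehaviour
{
    [SerializeField] ParametricFacePuppet puppet; // from the earlier sketch
    [SerializeField] int parameterCount = 256;    // K learned parameters

    // mapping[c][k]: how strongly animator control c pulls on learned
    // parameter k; authored once when the control scheme is built.
    float[][] mapping;
    float[] parameterWeights;

    void Awake() => parameterWeights = new float[parameterCount];

    // controlValues: e.g., smile, brow raise - a far smaller set than K.
    public void SetControls(float[] controlValues)
    {
        if (mapping == null) return; // control scheme not authored in this sketch

        System.Array.Clear(parameterWeights, 0, parameterWeights.Length);
        for (int c = 0; c < controlValues.Length; c++)
            for (int k = 0; k < parameterWeights.Length; k++)
                parameterWeights[k] += controlValues[c] * mapping[c][k];

        puppet.Evaluate(parameterWeights);
    }
}
```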

So what does this all mean?

There are many advantages to this new workflow. First and foremost, we now have the ability to dynamically interact with the performance of the digital human in Enemies.

This allows us to change the character’s performance after it has already been delivered. Digital Louise can now say the same lines as before, but with very different facial expressions. For example, she can be friendlier or angrier or convey any other emotion that the director envisions.

We are also able to manually author new performances with the puppet – facial expressions and reactions that the original actress never performed. If we wanted to develop the story into an interactive experience, it would be important to expand what the digital character can react to – for example, a player’s chess moves, met with nuances of approval or disapproval.

For the highest level of fidelity, the Ziva team can even create a new puppet with its own 4D dataset. Ziva also recently released a beta version of Face Trainer, a product built on a comprehensive library of 4D data and ML algorithms. It can be used to train any face mesh to perform the most complex expressions in real-time without any new 4D capture.

Additionally, it is possible to create new lines of dialogue at a fraction of the time and cost that the first line required. We can do this either by having the original actress perform additional lines with a head-mounted camera (HMC) and using that data to drive the puppet, or by having another performer deliver the new lines and retargeting their HMC data to the existing puppet.

The original performance from Enemies applied to a different puppet, as shown at SIGGRAPH 2022

At SIGGRAPH Real-Time Live! we demonstrated how to apply the original performance from Enemies to the puppet of another actress – ultimately replacing the protagonist of the story with a different person, without changing anything else.

This performance was then shown at Unite 2022 during the keynote (segment 01:03:00), where Enemies ran on an Xbox Series X, with DX12 and real-time ray tracing.

New Unity features

To further enhance the visual quality of Enemies, a number of HDRP systems were leveraged. These include Shader Graph motion vectors, Adaptive Probe Volumes (APV), and of course, hair shading.

Enemies also makes use of real-time ray tracing in HDRP and Unity’s native support for NVIDIA DLSS 2.0 (Deep Learning Super Sampling), which together enable it to run at 4K with image quality comparable to native resolution. All of these updated Unity features are now available in Unity 2022 LTS.
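Most of the DLSS configuration lives in the HDRP asset’s dynamic resolution settings, but there is also a per-camera toggle. A minimal sketch, assuming a supported NVIDIA GPU and DLSS already enabled in the active HDRP asset:

```csharp
using UnityEngine;
using UnityEngine.Rendering.HighDefinition;

// Minimal sketch: enabling NVIDIA DLSS for a specific HDRP camera.
// Assumes dynamic resolution and DLSS are turned on in the active HDRP
// asset (Project Settings), which is where most configuration lives.
public class EnableDlss : MonoBehaviour
{
    void Start()
    {
        var cameraData = GetComponent<HDAdditionalCameraData>();
        if (cameraData != null)
            cameraData.allowDeepLearningSuperSampling = true;
    }
}
```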

Strand-based Hair Solution

The brand new strand-based Hair Solution, developed during the creation of the Enemies demo, can simulate individual hairs in real-time. This technology is now available as an experimental package via GitHub (requires Unity 2020.2.0f1 or newer), along with a tutorial to get started.
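Because the package is distributed through GitHub rather than the package registry, it is added as a git dependency in the project’s Packages/manifest.json. A minimal entry might look like the following (package name and URL as published in the Unity-Technologies GitHub organization; check the repository’s instructions for the exact revision to pin):

```json
{
  "dependencies": {
    "com.unity.demoteam.hair": "https://github.com/Unity-Technologies/com.unity.demoteam.hair.git"
  }
}
```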

By integrating a complete pipeline for authoring, simulation, shading, and rendering hair in Unity, this solution is applicable to digital humans and creatures, in both realistic and stylized projects. The development work continues with a more performant solution for hair rendering enabled by the upcoming Software Rasterizer in HDRP. We are also diversifying the authoring options available by adopting and integrating the Wētā Wig tool for more complex grooms, as showcased in the Lion demo.

Digital human package

Expanding on the technological innovations from The Heretic, the updated Digital Human package provides a realistic shading model for the characters rendered in Unity.

Such updates include:

  • A better 4D pipeline
  • A more performant Skin Attachment system on the GPU for high-density meshes
  • More realistic eyes with caustics on the iris (available in HDRP as of Unity 2022.2)
  • A new skin shader, built with the available Editor technology
  • Tension tech for blood flow simulation and wrinkle maps, eliminating the need for a facial rig (a conceptual sketch follows this list)
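The tension technique is straightforward to sketch: compare edge lengths on the deforming mesh to their rest-pose lengths, and use the resulting stretch/compression signal to blend wrinkle normal maps and a blood-flow tint in the skin shader. The version below is a conceptual CPU illustration of that idea, not the package’s actual implementation, which runs on the GPU:

```csharp
using UnityEngine;

// Conceptual sketch of "tension tech": measure how much the deforming
// skin mesh stretches or compresses relative to its rest pose, then feed
// that scalar to a shader that blends wrinkle normal maps and a
// blood-flow tint. Illustrative CPU version only.
public class SkinTension : MonoBehaviour
{
    [SerializeField] MeshFilter meshFilter;

    Vector3[] restPositions;
    int[] triangles;
    float[] restEdgeLengths;

    void Awake()
    {
        var mesh = meshFilter.mesh;
        restPositions = mesh.vertices;
        triangles = mesh.triangles;
        restEdgeLengths = new float[triangles.Length];
        for (int i = 0; i < triangles.Length; i += 3)
            for (int e = 0; e < 3; e++)
                restEdgeLengths[i + e] = Vector3.Distance(
                    restPositions[triangles[i + e]],
                    restPositions[triangles[i + (e + 1) % 3]]);
    }

    // Returns per-vertex tension: > 0 stretched, < 0 compressed.
    public float[] Compute(Vector3[] deformed)
    {
        var tension = new float[deformed.Length];
        var counts = new int[deformed.Length];
        for (int i = 0; i < triangles.Length; i += 3)
            for (int e = 0; e < 3; e++)
            {
                int a = triangles[i + e], b = triangles[i + (e + 1) % 3];
                float strain = Vector3.Distance(deformed[a], deformed[b])
                               / restEdgeLengths[i + e] - 1f;
                tension[a] += strain; counts[a]++;
                tension[b] += strain; counts[b]++;
            }
        for (int v = 0; v < tension.Length; v++)
            if (counts[v] > 0) tension[v] /= counts[v];
        return tension; // typically written to a vertex color or buffer
    }
}
```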

And as always, there is more to come.

Discover how Ziva can help bring your next project to life. Register your interest to receive updates or get early access to future Ziva beta programs. If you’d like to learn more, you can contact us here.
