Ziva for feature animation: Stylized simulation and machine learning-ready workflows

December 16, 2022 in Industry | 18 min. read

Hey there! I’m Brian Anderson, a senior solutions engineer working on Unity Art Tools. I specialize in 3D character work while collaborating on the design and feature set of tools such as Ziva VFX.

In this blog, I’ll share some best practices and workflows for bipedal feature animation and stylized assets using Ziva technologies. More specifically, I’ll talk about Ziva VFX and the potential benefits of a simulation-based pipeline.

Let’s jump in.

Simulation at speed

As artists, we all want our 3D characters to be compelling to audiences. And a big part of that involves great deformation. Yet, until recently, representing the organic, dynamic properties of soft tissue in a believable, automated way hasn’t really been possible.

As someone who’s rigged characters for years, I can definitely say that I’ve never wanted to spend more time than necessary fixing body deformation problems that shouldn’t have been there to begin with. Simulation is an excellent alternative to the standard methods of shape correction: it gives us a solid base layer to build from before adding our art direction.

This is where Ziva VFX comes in: It addresses the core challenge of getting characters to move in more believable and compelling ways, without having to make manual adjustments for general body deformations.

Ziva VFX is used for deformation and simulation of characters and creatures in games, television, and film.

By using a simulation-based workflow, character artists can significantly reduce the need for time-consuming artisan shot sculpting or corrective shapes. Instead, Ziva VFX leverages the laws of physics – specifically the Finite Element Method (FEM) from science and engineering – to achieve high-performance dynamics.

Ziva VFX at a glance

Ziva VFX is a simulation tool that works as an Autodesk Maya plug-in. It’s used to create digital characters like humans, or fantasy creatures such as dragons, dinosaurs, and giant sharks, across games, film, and TV.

The technology allows you to replicate the effects of physics and simulate most soft tissue materials: muscle, fat, and skin. This way, you can create more lifelike, believable characters with greater control, speed, and precision.

The Ziva VFX user interface with Maya can be seen on the far right-hand side.

Why use Ziva VFX?

Unlike traditional visual effects (VFX) projects, feature animation doesn’t typically simulate character deformations. Instead, they’re laboriously hand-sculpted and art directed during the asset-building phase.

Visually speaking, the microforms and detailed muscle effects often seen in raw simulation results can be distracting in feature animation, since they don’t usually match the design language of the project.

What is the workflow like?

Let’s look at what it means to apply simulation techniques to a stylized character.

First, there will be different goals, such as keeping the forms simple, with clear silhouettes and clean lines. Second, you’ll reorder the character pipeline: take the simulation from the end and move it to the front, into the asset-building stage of preproduction. Next, use this work to build a simulatable character. Finally, harvest the data into a Pose Space Deformer (PSD), or use machine learning (ML) to take full advantage of the simulation system.

With the simulation system, you can automatically generate more corrective and combination shapes than could ever be made by hand. This raises the base level of deformation quality well beyond the usual starting point of a skin cluster.
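To make the harvest step concrete, here’s a minimal sketch (Maya Python) of walking the rig through a small pose library and capturing the simulated mesh for each pose. Every control and mesh name here is a placeholder, and the settle loop simply steps the timeline so the solver evaluates.

```python
from maya import cmds

# Hypothetical pose library: rig control attribute -> value.
POSES = {
    "knee_bend_130": {"L_leg_ctrl.rotateX": 130.0},
    "elbow_bend_90": {"L_arm_ctrl.rotateZ": 90.0},
}

def harvest_poses(sim_mesh, settle_frames=24):
    """Pose the rig, let the simulation settle, then duplicate the
    simulated mesh as a shape target for a PSD or an ML training set."""
    targets = []
    for pose_name, attrs in POSES.items():
        for attr, value in attrs.items():
            cmds.setAttr(attr, value)
        start = int(cmds.currentTime(query=True))
        for frame in range(start + 1, start + settle_frames + 1):
            cmds.currentTime(frame)  # step the timeline so the solver evaluates
        targets.append(cmds.duplicate(sim_mesh, name="target_" + pose_name)[0])
    return targets
```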

So how do we benefit?

Instead of spending valuable time fixing broken deformations, character artists can focus on the art direction, and in doing so, hopefully end up with a better character onscreen.

Here’s how to make this happen.

Creating a stylized simulation with Ziva VFX

Let’s use the original character design below – in the style of a feature animation, inspired by my time at Blue Sky Studios – to break this all down.

A simple Maya wrap deformer attaches character geometry to tissue objects so they can deform together.

Begin with the Control Rig, then build and bind the bone geometry to the rig, model tissues around those bones, and simulate the tissues. You can add a simple Maya wrap deformer to attach character geometry to tissue objects so that they can deform together.
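In script form, the wrap setup is just a selection-order operation. Here’s a minimal sketch, assuming a single tissue influence mesh per wrap (you could loop this per body section); the node names are placeholders.

```python
from maya import cmds

def wrap_to_tissue(render_mesh, tissue_mesh):
    """Attach the render mesh to the simulated tissue geometry with a
    Maya wrap deformer so the two deform together."""
    # Maya's wrap setup works off the selection:
    # the object to be deformed first, the influence object last.
    cmds.select(render_mesh)
    cmds.select(tissue_mesh, add=True)
    cmds.CreateWrap()  # equivalent to Deform > Wrap in the menu

# Example (placeholder names):
# wrap_to_tissue("body_render_GEO", "torso_tissue_GEO")
```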

Once they’re attached, you can harvest that data. The best way to do this is through machine learning, which lets you take full advantage of the deformation system and capture a multitude of shape and combination corrections.

You can also extract shapes to a Pose Space Deformer and add them directly to a rig system. This works well because of the consistency of simulation: all the shapes come from a single system, with one set of rules. So when you extract corrective shapes from that system and recombine them in a PSD, they work nicely together.
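As a simple stand-in for a full PSD setup, the sketch below adds one harvested shape as a corrective blendshape target and drives it with the rig pose that produced it, using driven keys. All names and values are placeholders.

```python
from maya import cmds

def connect_corrective(render_mesh, target_mesh, driver_attr, driver_value):
    """Add a harvested simulation shape as a corrective blendshape,
    driven by the pose that produced it."""
    bs = cmds.blendShape(target_mesh, render_mesh, name="sim_correctives")[0]
    # The blendShape weight alias typically matches the target mesh name.
    weight_attr = "{}.{}".format(bs, target_mesh)
    # Driven keys: target off at rest, fully on at the harvested pose.
    cmds.setDrivenKeyframe(weight_attr, currentDriver=driver_attr,
                           driverValue=0.0, value=0.0)
    cmds.setDrivenKeyframe(weight_attr, currentDriver=driver_attr,
                           driverValue=driver_value, value=1.0)

# Example (placeholder names):
# connect_corrective("body_render_GEO", "target_knee_bend_130",
#                    "L_leg_ctrl.rotateX", 130.0)
```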

The character build

Next up, let’s take a look at the character build. To create it, I started with a simple box-modeling, low-poly workflow. Once I’d figured out the design, I added details to flesh it out further. What follows is prep for rigging work.

The surface topology has been redesigned to production standards for a deformation-friendly model, and the fingers and toes are spread to aid the simulation. As a result, the character’s default pose is in a pre-collision state. This also helps later, when a wrap deformer attaches the mesh to the sim tissues, by giving the bind plenty of room.

The surface topology was redesigned to production standards for a clean, deformation-friendly, fully-rigged model.

The rig was made using mGear, an open-source rigging system. Because it keeps a separate hierarchy for joints, the rig can be used in game engines and with Ziva RT (Ziva’s machine learning software, still in development).

I created it with a Control Set for feature animation. The skinning was completed with ngSkinTools and converted back to a Maya skinCluster afterward. There are a few helper joints, mainly for twist in the arms and legs.

The rig has a separate hierarchy for joints so that it’s game-engine friendly.

Preparing for simulation

Begin by building the simulation inputs. The bone objects are geometry inputs – they’re going to drive the simulation. Here, I’ve modeled the shapes from some simple primitives, using a stop-motion-inspired approach for the bone design instead of an anatomically correct skeleton.

As you can see in the images, I’ve adjusted the bones, leaving gaps at many of the connection points. This is done to allow for a greater range of motion during the simulation, and to help avoid some collision problems we could potentially face.

Leaving space at major intersections allows the simulation more room to work.
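Registering the bone meshes with Ziva is a short script. The sketch below uses the `ziva` MEL command that ships with the plug-in (`-s` for a solver, `-b` for bones); the mesh names are placeholders, so check the flags against your version’s documentation.

```python
from maya import cmds, mel

# Bone meshes modeled from simple primitives (placeholder names).
BONES = ["pelvis_bone_GEO", "femur_L_bone_GEO", "femur_R_bone_GEO"]

mel.eval("ziva -s")   # create a Ziva solver once per scene
cmds.select(BONES)
mel.eval("ziva -b")   # register the selected meshes as Ziva bones
```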

I also added floating bone caps to the elbows and knees for better control of the silhouette. This is crucial for shaping bent limbs during the simulation.

The tissues are what actually get simulated. Each tissue is driven by the established bone geometry, but its deformation is controlled entirely by the simulation.

The tissue models are segmented by body portions and adjusted to fit under the outer skin and around the bones. Separating the tissues this way won’t impact the simulation result, but it will make it easier to organize the scene. If you want to copy, paste, save, and load parts, you can do that conveniently by section.

Tissues are segmented in a modular design, by body zones, for better management.

You can turn off the evaluation of some tissues in order to speed up the simulation. By segmenting the body, you’ll see associated simulation components more easily, just by selecting a section of the character.
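Here’s a hedged sketch of both ideas: registering each section’s meshes as tissues, and toggling one tissue’s evaluation while iterating. It assumes the zTissue node shows up in the mesh’s history and exposes an enable attribute; confirm both against your Ziva VFX version.

```python
from maya import cmds, mel

# Tissue meshes, segmented by body zone (placeholder names).
SECTIONS = {
    "torso": ["torso_tissue_GEO"],
    "arm_L": ["upperArm_L_tissue_GEO", "foreArm_L_tissue_GEO"],
}

for meshes in SECTIONS.values():
    cmds.select(meshes)
    mel.eval("ziva -t")  # register the selection as Ziva tissues

def set_tissue_enabled(mesh, state):
    """Toggle solver evaluation for one tissue to speed up iteration."""
    tissues = cmds.ls(cmds.listHistory(mesh) or [], type="zTissue")
    if tissues:
        # Assumed attribute name; check the zTissue node in your version.
        cmds.setAttr(tissues[0] + ".enable", 1 if state else 0)
```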

Looking inside, it’s apparent that the tissue objects are modeled to be hollowed out around the bones. It’s important that you do this carefully and avoid any collisions with the bones.

Tissue objects are modeled to be hollowed out around the bones: It’s important to do this cleanly and avoid any collisions with the bones.

Now that inputs are modeled, you can bind the bone geometry to the joints of your rig. This is relatively quick and straightforward, but it’s also an iterative process, meaning that you’ll want to test the simulation and make adjustments as needed.

Now you’re ready to test the bone design by exercising different areas of the rig, examining the simulation results, and editing the bone models as needed. You’ll likely repeat this cycle multiple times throughout testing, which brings us to body simulation.
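The bone bind itself can be a rigid-style skin bind, one joint per bone mesh. A minimal sketch, with placeholder names:

```python
from maya import cmds

# Each bone mesh is driven by a single joint (placeholder names).
BONE_TO_JOINT = {
    "femur_L_bone_GEO": "thigh_L_JNT",
    "shin_L_bone_GEO": "knee_L_JNT",
}

for mesh, joint in BONE_TO_JOINT.items():
    # maximumInfluences=1 keeps the bind rigid, so bones follow joints exactly.
    cmds.skinCluster(joint, mesh, maximumInfluences=1, toSelectedBones=True)
```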

Body simulation time

When looking at the simulation components, you can see the tetrahedral meshes that deform the tissues and some of the attachment points that are holding the simulation together. One thing to note is that this example uses the QuasiStatic Integrator from Ziva VFX. 

Simulation components

The goal is to solve for the deformation of tissues, but not for secondary physical effects like the jiggle that comes from fast motion. The QuasiStatic Integrator eliminates those effects, which can still be achieved later with a shot-by-shot simulation pipeline (if required). That pose-driven behavior is exactly what you want when the simulation feeds blendshape extraction or machine learning.
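Switching the solver to quasi-static is a single attribute change on the zSolver node. In the sketch below, the attribute name and enum value are assumptions; confirm them against your version’s zSolver documentation.

```python
from maya import cmds

# Use quasi-static integration: solve for equilibrium shapes and
# ignore velocity-driven effects like jiggle.
solver = cmds.ls(type="zSolver")[0]
cmds.setAttr(solver + ".integrator", 1)  # assumed: 1 = QuasiStatic
```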

Below, you can see the leg rotating 130 degrees forward in isolation, with some heavy deformation taking place on the pelvis tissue. The deformation is holding up well, and there’s now a sense of structure that comes through as the character deforms.

There are also proper collisions and clean geometry throughout the deformation. The gaps in the bone armature are helping to extend the rotation range by leaving open space for the simulation to work.

130-degree forward leg rotation in isolation, with heavy deformation on character’s pelvis tissue

As you can see, the lower-body deformations are pushing all the way up into the chest as the volume spreads naturally over the torso. In the midsection, you can see some of the sliding attachments at work, where the tissues slide up along the spine and compress into the chest – a good example of how these attachments should behave.
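Sliding attachments are created like regular attachments and then switched to sliding mode on the zAttachment node. A sketch, with placeholder names; the mode value is an assumption to check against the zAttachment docs.

```python
from maya import cmds, mel

def make_sliding_attachment(tissue_mesh, target_mesh):
    """Attach one simulation mesh to another, then let it slide
    along the target instead of sticking to it."""
    cmds.select([tissue_mesh, target_mesh])
    result = mel.eval("ziva -a")  # create a Ziva attachment
    nodes = cmds.ls(result or [], type="zAttachment")
    if nodes:
        cmds.setAttr(nodes[0] + ".attachmentMode", 2)  # assumed: 2 = sliding
```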

Setting it in motion

Once in motion, you can see the stylistic goals coming through: simple forms, clean lines and silhouettes, and that nice volume preservation.

In motion, you can see some of the goals coming through to match the style.

You might also notice some other things: the sense of physicality, the broad deformation falloff, and a super clean deformed surface, especially around the highest-deforming areas, such as the hip.

About the hand simulation

The hands of the character are short and thick, which makes for a difficult use case. Rather than following the rules of an anatomically correct skeleton, I’ve chosen to remove some of the bones and minimize the crowding to give the simulation more room to work. This gives the hand that soft, squishy behavior.

The hands of the character are short and thick, which makes for a difficult use case.

The hand’s bone design is split into two main sections: the palm and the fingers. The palm bone follows the shape of the outer skin, leaving the base of each finger to act as an anchor point for the simulation. The fingertip, meanwhile, is just a floating bone with nothing in the middle.

When posing the hand, the position and orientation of the fingers are locked between bone areas, and the transition is handled by the simulation.

You can now see how the bones influence the tissues, and how they look in motion. On the finger, there’s a smooth result over the transition area from the fingertip toward the palm. The tetrahedral mesh, where I’ve painted some custom maps to increase the resolution in higher-deforming areas, is also noticeable in this image.

The bones influence the tissues when moving the character’s hand.
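Global tet density can be scripted; the local refinement mentioned above is painted interactively on the tet mesh’s weight map. In this sketch, tetSize is the attribute I’d expect to control density on the zTet node; treat the name as an assumption and verify it in your Ziva VFX version.

```python
from maya import cmds

# Smaller tets = higher simulation resolution (and slower solves).
for tet in cmds.ls(type="zTet"):
    cmds.setAttr(tet + ".tetSize", 0.5)  # assumed attribute name
```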

This pose demonstrates volume buildup through compression: you can see the entire hand change shape as the fist tightens. From the side, it has a strong profile – and from the front, a cartoony, rounded fist.

A pose that shows volume buildup by compression

Pose evaluation

Hand poses affect the rest of the character more than you might realize. In the image below, it’s apparent that the finger poses are pushing down the pads of the palm into the wrist.

This impact changes the shape of the wrist, which in turn pushes all the way up the arm into the elbow. The entire lower arm now feels connected, and you’re getting some of those clear, organic shapes.

Poses on the hand affect the rest of the character’s deformation.

Simulation stats

Looking back at the entire simulation build, even with the addition of the hands and feet, the component count stays quite manageable, and the simulation time is less than 20 seconds per frame for the full body.

If you limit the evaluation to a focused area, like the arm (as shown above), the simulation time can get as low as 2–3 seconds per frame. This means that it’s faster to iterate on the simulation design and test locally.

Using the QuasiStatic Integrator also speeds up the simulation significantly compared to a direct iterative integrator. For this character, the simulation runs about 30% faster than with the default integrator and its dynamics.

In conclusion

To summarize, our goal is to move the simulation from the end of the character pipeline to the front, and use it to generate character deformations. The simulation becomes a library of how the character moves.

You can harvest pose data either by machine learning the simulation directly, or by extracting shapes to a PSD corrective system. Character artists can then add a sparse layer of art direction shapes using traditional methods, instead of having to correct for the entire body’s deformations manually. Once that shape data is in the rig, everyone usually benefits from the result.
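For the machine learning route, the raw material is simply pairs of rig pose and simulated mesh shape. Here’s a minimal, format-agnostic sketch of dumping that data per frame; the names and the JSON format are illustrative only.

```python
import json
from maya import cmds

def export_training_data(sim_mesh, rig_attrs, start, end, path):
    """Walk the timeline, recording the rig pose and the simulated
    world-space vertex positions for each frame."""
    samples = []
    for frame in range(start, end + 1):
        cmds.currentTime(frame)
        pose = [cmds.getAttr(attr) for attr in rig_attrs]
        verts = cmds.xform(sim_mesh + ".vtx[*]", query=True,
                           worldSpace=True, translation=True)
        samples.append({"frame": frame, "pose": pose, "vertices": verts})
    with open(path, "w") as f:
        json.dump(samples, f)

# Example (placeholder names):
# export_training_data("body_tissue_GEO", ["L_leg_ctrl.rotateX"],
#                      1, 200, "/tmp/sim_samples.json")
```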

Materials, grooming, and cloth have clean surfaces to work from, so rigging and character artists get to spend much more time on art direction instead of fixing broken deformations. Plus, the animation department gets an asset with high-fidelity deformation and corrections that work well together as a cohesive system.

Using machine learning on these types of systems lets you capture all kinds of cool deformation effects: the kind that cross over all of the interacting zones of the body, and that might otherwise be difficult to incorporate into a character.

Watch my full SIGGRAPH 2022 presentation, which goes into more detail on the techniques used in simulation for stylized characters. Or, if you’d like more information about Ziva VFX, contact an expert.
