All facets of your Unity project pass through the lens of a camera. That’s why the camera’s position, motion, and timing matter as much as the elements related to render pipelines – think light, shadow, focus, color, and so on.
Cinemachine is Unity’s virtual camera operator used to capture precise actions at the right time and place. Made for both games and animations, 2D and 3D alike, Cinemachine ensures that the right actions and emotions are carefully captured by the lens.
[UPDATED November 2021 to reflect the latest developments in Cinemachine 2.6]
As part of the 2020 LTS, Cinemachine 2.6 is now a verified package (more on what that means below). We also have several new tools for cinematic creators like you to make the most of this latest version. Read on to learn more.
In case you missed it, check out our Cinemachine talk, part of Unite Now – Unity’s free series of online learning sessions, demos, creator stories, and more.
While no feature can claim to cover every single use case, verification indicates that we’ve considered several key use cases to better prioritize bug fixes and provide ongoing support. Our Cinemachine team takes this commitment to users seriously, so please don’t hesitate to flag any issues or ask for assistance on our forums.
We’re pleased (and a tiny bit embarrassed) to open with this astonishingly simple but useful workflow improvement: Whenever you create a new virtual camera, we drop it in wherever your Scene view is.
Placing the camera exactly where you are in the Scene makes blocking and framing shots much easier and more intuitive, speeding up your overall creative workflow.
One important way to achieve realism in digital camera work is to let actions in the Scene jostle the camera. In the real world, we tend to worry about the opposite problem, using tools like a Steadicam to dampen this effect. But when the connection between the environment and the viewer’s perspective is missing, the virtual world can seem stiff and unresponsive.
Here are a couple of ways to connect camera motion to the physical action in the Scene. The first involves the Impulse system, which consists of a Noise Setting (a pattern for generating the shake), an Impulse Source (the location that creates a disturbance), and an Impulse Listener (the thing affected by that disturbance). Note that the listener doesn’t have to be a camera, so you can use this system to disturb other objects as well (as in the example below). Cinemachine 2.6 adds a layer of realism with propagation speed, which lets you modulate the delay between when a disturbance occurs and when it affects the listener.
For example, you can start by keeping the propagation speed low to see the effect more clearly, and then ramp it up to produce something closer to a realistic explosion.
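As a minimal sketch of raising an impulse from script: the `CinemachineImpulseSource` component and its `GenerateImpulseAt` call are part of Cinemachine, while the class name and collision trigger here are illustrative.

```csharp
using UnityEngine;
using Cinemachine;

// Illustrative example: fire an impulse when this object collides with something.
// Requires a CinemachineImpulseSource on the same GameObject, and a
// CinemachineImpulseListener on whatever should shake (often the camera).
[RequireComponent(typeof(CinemachineImpulseSource))]
public class LandingShake : MonoBehaviour
{
    CinemachineImpulseSource m_ImpulseSource;

    void Awake()
    {
        m_ImpulseSource = GetComponent<CinemachineImpulseSource>();
    }

    void OnCollisionEnter(Collision collision)
    {
        // The impulse radiates outward from the contact point at the
        // propagation speed set on the Impulse Source, so distant
        // listeners feel it later, like a shockwave.
        m_ImpulseSource.GenerateImpulseAt(
            collision.GetContact(0).point,
            collision.relativeVelocity);
    }
}
```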
If you downloaded the samples, you’ll find the following sample scene in your project: Assets > Samples > Cinemachine > 2.6.11 > Cinemachine Example Scenes > Scenes > Impulse > ImpulseWave. (Note that the version number might have changed, depending on when you read this).
One major issue with games like third-person shooters is the slight but significant disconnect between the character’s perspective and the player’s camera. If you’ve ever lurked around a corner or waited behind an obstacle for a clear moment to attack, you’ve likely experienced this type of disconnect while playing. But in games where the camera is well managed, you might not find it so problematic.
In Cinemachine 2.6, we’ve added the following sample scene: Assets > Samples > Cinemachine > 2.6.11 > Cinemachine Example Scenes > Scenes > AimingRig > AimingRig, which depicts a solid setup for achieving AAA-quality visibility for this common use case.
The video above shows how to construct such a rig with help from the 3rd Person Follow behavior and a Cinemachine 3rd Person Aim component. The component references a sprite as the reticle that denotes where the character is looking, and uses a raycast to position that reticle. In the video, you can see the difference between the camera’s perspective (the larger circle) and where the character is looking (the smaller one). It’s a minor offset, but it can be critical when a player is trying to judge aim.
What’s more, the Cinemachine 3rd Person Aim component provides filters for what the raycast should ignore, as well as a distance setting to limit how far the ray is cast.
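To make the idea concrete, here’s a hypothetical sketch of the reticle-placement technique the component uses – not the component’s actual code – where a ray is cast from the camera through the screen center and a world-space reticle is parked at the hit point:

```csharp
using UnityEngine;

// Hypothetical sketch of the reticle-placement idea behind
// Cinemachine 3rd Person Aim: raycast through the screen center
// and place a world-space reticle at whatever the ray hits.
public class AimReticle : MonoBehaviour
{
    public Camera mainCamera;          // the live gameplay camera
    public Transform reticle;          // object marking the aim point
    public float maxDistance = 100f;   // how far to cast the ray
    public LayerMask hitMask = ~0;     // layers the ray may hit

    void LateUpdate()
    {
        var ray = mainCamera.ViewportPointToRay(new Vector3(0.5f, 0.5f, 0f));
        // Park the reticle where the ray lands, or at max range if nothing is hit.
        reticle.position = Physics.Raycast(ray, out RaycastHit hit, maxDistance, hitMask)
            ? hit.point
            : ray.GetPoint(maxDistance);
    }
}
```

The gap between this reticle and the camera’s own center line is exactly the offset the video highlights with the two circles.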
Another common use case involves driving the position of the camera while following a GameObject – like a tank, a car, or an avatar – and at the same time, chasing another object that represents a point of interest, such as the tank’s aim. As with the aiming rig, we’ve set up a reticle, but this one does some fancy indirect chasing of the mouse within a set of limits. There’s also a virtual camera following the reticle. Check out the video below.
The example is available at Assets > Samples > Cinemachine > 2.6.11 > Cinemachine Example Scenes > Scenes > DualTarget > DualTarget. As with the other samples, note that the version number might have changed.
This example illustrates how you can use indirection with Cinemachine to create complex behavior. There are a few moving parts at play here:
The camera following the Player object follows wherever the Player goes, with appropriate offsets. Similarly, the LookAt behavior ensures that the camera looks at the reticle. Note that the Dead Zone and Soft Zone parameters are set identically, almost wide open, so the reticle can move freely within the screen space without causing any motion. The camera reacts only when the reticle gets close to the edges.
Point At Aim Target, the script on the gun, does what it says on the tin: It makes the gun point at the target.
So the only somewhat complicated piece is the Move Aim Target script. As with the third-person aiming setup, we have references to a sprite (for the reticle), information about how far to raycast, and which layers and tags to filter from the cast. We’ve also added details for tuning the reticle behavior, including input dimensions to drive it.
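The “indirect chasing” at the heart of that script can be sketched like this – a hypothetical illustration of the idea, not the sample’s actual code – where the aim target eases toward the point under the mouse instead of snapping to it:

```csharp
using UnityEngine;

// Hypothetical sketch of the Move Aim Target idea: raycast under the
// mouse, then ease this aim-target object toward the hit point so the
// reticle chases the mouse indirectly rather than snapping to it.
public class ChaseMouseAimTarget : MonoBehaviour
{
    public Camera mainCamera;
    public float maxDistance = 50f;   // raycast range
    public LayerMask hitMask = ~0;    // layers the cast may hit
    public float damping = 5f;        // higher = snappier chase

    void Update()
    {
        var ray = mainCamera.ScreenPointToRay(Input.mousePosition);
        Vector3 desired = Physics.Raycast(ray, out RaycastHit hit, maxDistance, hitMask)
            ? hit.point
            : ray.GetPoint(maxDistance);

        // Exponential damping gives the smooth, indirect "chasing" feel.
        transform.position = Vector3.Lerp(
            transform.position, desired, 1f - Mathf.Exp(-damping * Time.deltaTime));
    }
}
```

Because the virtual camera follows this target rather than the raw mouse position, the whole rig inherits the same smoothed motion.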
Camera magnets sound like a new feature, but really they're just a refined example of using weights in a CinemachineTargetGroup. This use case, suggested by a user on our forums, is fairly common: your camera wants to follow a target (your avatar, perhaps) until it gets close to a point of interest. As you approach that point, you want the camera to gravitate towards the point of interest. Let’s watch the video and then we’ll explain how it’s done.
This time, our virtual camera is not focused on a single object, but rather on a Cinemachine Target Group. Target Groups are an incredibly useful and versatile abstraction in Cinemachine. Instead of pointing a camera at one thing, a group allows you to point it at multiple things. This comes in handy when keeping a spread-out party of adventurers all in the shot.
But each target in the group also has a weight, so you can bias the camera toward objects with weights greater than others. In this example, we demonstrate how playing with that weight allows us to dynamically “magnetize” objects as we approach them.
Two simple classes make this magic happen: Camera Magnet Target Controller (attached to the Camera Magnet GameObject) and Camera Magnet Property (attached to each individual magnet). Each Camera Magnet Property sets a proximity and strength that control its weight in the Target Group. The Camera Magnet Target Controller then loops through all of the magnets each frame and updates their weights accordingly.
Added into the mix is the player avatar itself, which has a constant weight of one. If no magnets are attracting the camera, the avatar “owns” the camera’s attention. But as the player approaches a magnet, its pull grows stronger, drawing the camera’s attention toward these key positions.
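The weight calculation can be sketched roughly like this – a simplified, hypothetical version of the sample’s logic, assuming the player sits at index 0 of the `CinemachineTargetGroup` and the magnets follow:

```csharp
using UnityEngine;
using Cinemachine;

// Hypothetical sketch of the camera-magnet idea: scale each magnet's
// weight in the Target Group by how close the player is, so the camera
// gravitates toward nearby points of interest.
public class CameraMagnetWeights : MonoBehaviour
{
    public CinemachineTargetGroup targetGroup; // player at index 0, magnets after
    public Transform player;
    public float proximity = 5f;  // distance at which a magnet starts to pull
    public float strength = 3f;   // weight when standing right on a magnet

    void Update()
    {
        // Skip index 0: the player keeps a constant weight of 1.
        for (int i = 1; i < targetGroup.m_Targets.Length; i++)
        {
            float distance = Vector3.Distance(
                player.position, targetGroup.m_Targets[i].target.position);

            // Weight ramps linearly from 0 at 'proximity' down to
            // 'strength' at zero distance.
            targetGroup.m_Targets[i].weight =
                strength * Mathf.Clamp01(1f - distance / proximity);
        }
    }
}
```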
Smoother and more efficient than before, Lookahead provides an enhanced view of where the target is going. The new implementation has an almost human feel to it, so be sure to take advantage of this feature.
While it was always possible to use Cinemachine with custom or third-party input systems to control the camera, doing so was sometimes a little awkward. You had to override the global delegate CinemachineCore.GetInputAxis(string name) to point to your custom input provider function. You can still do that, but it’s no longer the most efficient approach.
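For reference, the legacy hook looks like this; the delegate is part of Cinemachine, while the class name and axis mapping here are illustrative:

```csharp
using UnityEngine;
using Cinemachine;

// The legacy approach: redirect Cinemachine's global, string-based
// input lookup to your own provider. Still supported, but per-camera
// IInputProvider behaviors are now the preferred route.
public class LegacyInputHook : MonoBehaviour
{
    void Awake()
    {
        CinemachineCore.GetInputAxis = axisName =>
        {
            // Illustrative mapping; substitute your own input source here.
            if (axisName == "Mouse X") return Input.GetAxis("Mouse X");
            if (axisName == "Mouse Y") return Input.GetAxis("Mouse Y");
            return 0f;
        };
    }
}
```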
Now there’s a new interface Cinemachine.AxisState.IInputProvider. If a virtual camera has a behavior that implements this interface, then that behavior will be queried for input instead of the standard CinemachineCore.GetInputAxis(string name) delegate.
What are the advantages of this? Because the interface is implemented per virtual camera rather than through a global string-based lookup, each camera can carry its own input data – which makes it straightforward to map cameras to specific players.
Cinemachine 2.6 ships with a sample implementation of this for Unity’s new input system. You can either use it directly or as a template for your own custom input providers.
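A minimal sketch of such a provider might look like this, assuming the `AxisState.IInputProvider` interface (axis 0 = X, 1 = Y, 2 = Z); the axis mapping is illustrative and would normally come from your own input system:

```csharp
using UnityEngine;
using Cinemachine;

// Minimal sketch of a custom input provider. Attach it to the virtual
// camera; Cinemachine then queries it for input instead of the global
// CinemachineCore.GetInputAxis delegate.
public class CustomInputProvider : MonoBehaviour, AxisState.IInputProvider
{
    public float GetAxisValue(int axis)
    {
        // Illustrative mapping to the legacy input manager; swap in any
        // source (new Input System, network input, AI, replay data).
        switch (axis)
        {
            case 0: return Input.GetAxis("Mouse X");
            case 1: return Input.GetAxis("Mouse Y");
            default: return 0f;
        }
    }
}
```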
If you have the new Input System package installed, you can add the new CinemachineInputProvider component to your virtual camera. With it, you can map your camera’s input to an Input Action Reference and support multiplayer requirements.
Let’s dive into some other new tools that will help you capture the perfect image and tell your story.
The new Cinematic Studio Sample groups together a set of features for an incredible in-Editor experience. These features include:
You can find the Cinematic Studio Sample on the Asset Store.
We also released two companion apps on the App Store to help artists leverage mobile AR data in their cinematic and animation pipelines. The companion apps connect to the Unity Editor through the new Live Capture package.
You can find more information on getting started with Live Capture and the companion apps on our forum here.
Additionally, Unity Face Capture lets you preview and record real-time facial performances in the Unity Editor. This simplifies the process of adding realistic face animation to your characters, saving you hours of time and effort. We’re talking about the direct capture of facial expressions and head movements to control a character in real-time.
More specifically, Unity Face Capture offers you the ability to:
Then there’s Unity Virtual Camera, an intuitive tool for virtual cinematography that enables you to preview and record natural handheld camera movements and lens controls in an app, while harnessing the power of the Unity Editor.
Use Unity Virtual Camera to:
We want to know what works for you, and how we can do even better. If you have any questions, feedback, or feature requests that can help guide the future of Cinemachine, please keep us posted on the forums.