
How to optimize game performance with Camera usage: Part 2

October 14, 2021 in Games | 7 min. read

Welcome back to Part 2 of How to optimize your game performance with Camera usage. If you haven’t read Part 1 yet, check it out here. Now we will pick up where we left off and dig into the profiler results!

Performance in the Built-in Render Pipeline

In the Built-in Render Pipeline, the Camera.Render profiler marker measures the time spent processing each Camera on the main thread.
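If you want to track this marker from a script rather than in the Profiler window, a minimal sketch using the ProfilerRecorder API (available since Unity 2020.1) could look like the following. The class name is purely illustrative:

```csharp
using Unity.Profiling;
using UnityEngine;

// Sketch: sample the Camera.Render marker at runtime with ProfilerRecorder.
public class CameraRenderTimer : MonoBehaviour
{
    ProfilerRecorder _cameraRenderRecorder;

    void OnEnable()
    {
        // Records the time spent in the Camera.Render marker each frame.
        _cameraRenderRecorder = ProfilerRecorder.StartNew(
            ProfilerCategory.Render, "Camera.Render");
    }

    void OnDisable() => _cameraRenderRecorder.Dispose();

    void Update()
    {
        if (_cameraRenderRecorder.Valid)
        {
            // LastValue is in nanoseconds; convert to milliseconds for display.
            Debug.Log($"Camera.Render: {_cameraRenderRecorder.LastValue / 1e6f} ms");
        }
    }
}
```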

Profiler timeline view of the main thread with two Cameras

Each Camera has its own Camera.Render marker; for the graphs below, we summed up these markers for each test case.

Time spent processing Cameras on the main thread (lower is better)

In the high load scenario, we selected a different load factor for each device to increase the total frame time without going too high. Very high loads do not provide reliable results because the cost of additional Cameras can get somewhat overshadowed by the performance variability between frames.

Time spent processing Cameras on the main thread (lower is better)

The trend is clear: Time spent in Camera.Render is directly affected by the number of Cameras. This is true even when adding an extra Camera that doesn’t render anything in the fourth scene.

Performance in the Universal Render Pipeline

When moving to the Universal Render Pipeline (URP), the first thing that stands out in the profiler timeline view is that many bars are blue (Scripts) instead of green (Rendering). Green bars represent time spent in the C++ side of the Unity engine’s rendering code which is where all the rendering code lives in the Built-in Render Pipeline. URP is a scriptable render pipeline which means that a lot of the rendering code has been moved to C# to give users much more control to customize the rendering process.

The Inl_UniversalRenderPipeline.RenderSingleCamera profiler marker measures the time spent processing each Camera on the main thread. Conveniently, each of those markers also contains the name of the Camera as a suffix.

Profiler timeline view of the main thread with two Cameras

However, summing up those Camera markers as in the Built-in Render Pipeline tests does not give an accurate picture of the total performance of URP. As can be seen in the figure above, we should also count the significant time spent in the Inl_Context.Submit markers. This is time spent creating draw command buffers, work that the Built-in Render Pipeline includes in its Camera.Render markers. To make things easier, we chose the RenderPipelineManager.DoRenderLoop_Internal marker, which encompasses all of this.

Time spent processing Cameras on the main thread (lower is better)

For consistency reasons, we used the same high load factors as in the Built-in Render Pipeline scenarios.

Time spent processing Cameras on the main thread (lower is better)

Again, the trend is clear: Time spent in rendering code is directly affected by the number of Cameras. As in the Built-in Render Pipeline tests, this holds true even when adding an extra Camera that doesn’t render anything in the fourth scene.

At this point, if you closely compared the performance characteristics of the Built-in Render Pipeline and URP, you might have noticed some strange results. You would be right. In the high load tests, for example, URP is much more efficient than the Built-in Render Pipeline on the Galaxy S7 Edge, but not on the iPhone models we tested. To keep this post to a manageable length and keep the focus on the primary subject, we will investigate this in a separate blog post.

Camera usage patterns to avoid

Let’s examine some multiple Camera usage scenarios we’ve seen in the wild and discuss their alternatives.

Having a giant canvas in the middle of the scene in the Scene view can be distracting. Some users fix this problem with a separate UI Camera positioned further away which renders canvases set to “Screen Space - Camera”. Since Unity 2019, you can instead toggle child GameObject visibility in the Hierarchy window to hide distracting canvases. The Layers drop-down menu in Unity’s Toolbar can also be used to achieve this.

Hiding UI canvases using the Layers drop-down menu

Some users rely on Cameras to order their canvases. This is not the right tool for the job. Instead, use the Canvas’ Sort Order or Plane Distance properties. Also be aware that nested canvases have an “Override Sorting” option which must be taken into account.
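A minimal sketch of both ordering options might look like this. The canvas field names are hypothetical; Plane Distance only applies to canvases set to “Screen Space - Camera”:

```csharp
using UnityEngine;

// Sketch: ordering two canvases without adding extra Cameras.
public class CanvasOrdering : MonoBehaviour
{
    [SerializeField] Canvas hudCanvas;   // hypothetical HUD canvas
    [SerializeField] Canvas popupCanvas; // hypothetical popup canvas

    void Start()
    {
        // Option 1: a higher sortingOrder draws on top.
        hudCanvas.sortingOrder = 0;
        popupCanvas.sortingOrder = 10;

        // Option 2: for Screen Space - Camera canvases, a smaller
        // planeDistance places the canvas closer to the Camera.
        hudCanvas.planeDistance = 100f;
        popupCanvas.planeDistance = 50f;
    }
}
```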

Another case we’ve seen is using separate Cameras that render different parts of the game UI with culling masks for the sake of toggling the visibility of UI screens. The correct way to do this is to instead toggle either the activation of GameObjects or the enable flag of Canvas components.
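Both approaches can be sketched as follows; the serialized references are hypothetical. Disabling just the Canvas component skips its rendering while keeping the GameObjects alive, which avoids OnEnable/OnDisable churn when toggling frequently:

```csharp
using UnityEngine;

// Sketch: toggling a UI screen's visibility without a per-screen Camera.
public class UIScreenToggle : MonoBehaviour
{
    [SerializeField] GameObject screenRoot; // root GameObject of one UI screen
    [SerializeField] Canvas screenCanvas;   // the screen's Canvas component

    public void Show(bool visible)
    {
        // Option 1: deactivate the whole hierarchy.
        screenRoot.SetActive(visible);
    }

    public void ShowKeepAlive(bool visible)
    {
        // Option 2: disable only the Canvas; nothing is rendered,
        // but the GameObjects and their state stay active.
        screenCanvas.enabled = visible;
    }
}
```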

One last example that doesn’t involve UI is using multiple Cameras to switch between viewpoints. The worst situation arises when all those Cameras are enabled and the Camera rendering order (i.e., the Depth property) is used to control which one is visible. In that case, all Cameras are rendered in full, one over the other, which is very costly. Disabling unused Cameras removes this cost. However, we argue that it’s best to use a single Camera and always position it at the currently active viewpoint. This makes it impossible to have multiple active Cameras by mistake and simplifies the Camera management process.
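The single-Camera approach can be sketched like this, with viewpoints stored as empty anchor Transforms (the field names are illustrative):

```csharp
using UnityEngine;

// Sketch: one Camera moved between viewpoint anchors instead of
// enabling and disabling several Cameras.
public class ViewpointSwitcher : MonoBehaviour
{
    [SerializeField] Camera mainCamera;      // the single scene Camera
    [SerializeField] Transform[] viewpoints; // empty anchor Transforms

    public void SwitchTo(int index)
    {
        // Reposition the one Camera; no second Camera ever renders.
        Transform target = viewpoints[index];
        mainCamera.transform.SetPositionAndRotation(
            target.position, target.rotation);
    }
}
```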

When to use multiple Cameras

While you should avoid the unnecessary use of multiple Cameras, there are times when this is the best or even only solution. In general, multiple Cameras are likely the right choice if you need more than one of any of the following:

  • Camera outputs. This includes the rendering surface (Display, RenderTexture) and the viewport rectangle.
  • Resolutions. Only a single resolution can be used for the output of a Camera. Intermediary results inside the rendering pipeline used by a Camera can however be rendered at arbitrary resolutions and used to produce the Camera’s output. An example of this is HDRP’s Low-Resolution Transparent pass.
  • Field of view, position, and orientation. These parameters directly define the culling frustum. An exception to this is XR, where Unity uses a couple of tricks to render the two eyes, which are effectively two Cameras positioned very close to each other.

Here are some examples of valid use-cases for multiple Cameras.

A common practice to improve GPU performance on newer (mobile) devices with very high display resolutions is to render the scene in a lower resolution and upscale it to the final resolution. In this scenario, many games want to render at least some parts of their UI at the native resolution over the upscaled scene to get sharper UI sprites and images. This type of rendering configuration requires a separate Camera because two different resolutions are used.
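A rough sketch of the scene half of this setup follows. The blit-to-screen step is simplified here; a real setup would upscale the texture in a fullscreen pass or display it via a RawImage, and a second UI Camera (not shown) would render its canvases at native resolution on top:

```csharp
using UnityEngine;

// Sketch: render the 3D scene at half resolution into a RenderTexture.
public class LowResSceneSetup : MonoBehaviour
{
    [SerializeField] Camera sceneCamera; // hypothetical 3D scene Camera
    RenderTexture _lowResTarget;

    void OnEnable()
    {
        // Half the native resolution, with a 24-bit depth buffer.
        _lowResTarget = new RenderTexture(Screen.width / 2, Screen.height / 2, 24);
        sceneCamera.targetTexture = _lowResTarget;
    }

    void OnDisable()
    {
        sceneCamera.targetTexture = null;
        _lowResTarget.Release();
    }
}
```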

Multiple sub-displays with different Camera positions or resolutions also require multiple Cameras. An example of this is a split-screen game where each player can move their viewpoint independently.
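For the split-screen case, per-player viewport rectangles can be assigned directly on each Camera; Rect uses normalized (0..1) screen coordinates:

```csharp
using UnityEngine;

// Sketch: two-player horizontal split screen.
public class SplitScreenSetup : MonoBehaviour
{
    [SerializeField] Camera player1Camera;
    [SerializeField] Camera player2Camera;

    void Start()
    {
        // Left half of the screen for player 1, right half for player 2.
        player1Camera.rect = new Rect(0f, 0f, 0.5f, 1f);
        player2Camera.rect = new Rect(0.5f, 0f, 0.5f, 1f);
    }
}
```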

A dynamic billboard showing part of the scene from a second viewpoint requires its own RenderTexture as the billboard’s texture. A separate Camera is necessary to generate the content of this RenderTexture.
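The billboard setup can be sketched as follows; the renderer reference and the 512x512 texture size are illustrative choices:

```csharp
using UnityEngine;

// Sketch: a second Camera feeding a RenderTexture shown on a billboard quad.
public class BillboardFeed : MonoBehaviour
{
    [SerializeField] Camera billboardCamera;     // the second viewpoint
    [SerializeField] Renderer billboardRenderer; // quad displaying the feed
    RenderTexture _feed;

    void OnEnable()
    {
        _feed = new RenderTexture(512, 512, 24);
        billboardCamera.targetTexture = _feed; // Camera renders into the RT
        billboardRenderer.material.mainTexture = _feed;
    }

    void OnDisable()
    {
        billboardCamera.targetTexture = null;
        _feed.Release();
    }
}
```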

Conclusion

In this post series, we measured the cost of additional Cameras in Unity’s Built-in Render Pipeline and Universal Render Pipeline. The results clearly show that unnecessary Cameras in a scene have a cost that can easily be avoided for a nice performance gain.

On a closing note, you might wonder why even a Camera that doesn’t render anything can have such a large performance impact. The first main reason is that it simply takes a good amount of work for Unity to figure out that the Camera does not, in fact, render anything. The second reason is, to put it bluntly, that Unity is not optimized for sub-optimal Camera setups. Optimizing the engine for these would make well-configured games slower and would probably also use more memory, which is undesirable.

Want to learn more about the Accelerate Solutions Games team and how they can help you elevate your game? Visit our homepage, or reach out to a Unity sales representative to find out how we can help accelerate your next project. 

If you like our Accelerate Success content series, don’t miss this recording of our latest Accelerate Success webinar, The Unity UI makeover, delivered by two of our team leads, Andrea and Sebastian. In this demo, Andrea and Sebastian take a poorly designed UI and share tips and best practices on how to improve it so your game runs faster and more efficiently.
