For decades, flat-painted skyboxes and HDRIs have been the go-to solutions for sky and cloud rendering. Now, thanks to the brand-new Volumetric Clouds in HDRP, you can populate your worlds with dynamic clouds. For the first iteration of this system, we focused on performance and ease of use, so you can create beautiful visuals in a couple of clicks for a low GPU cost.
Until the end of the last generation of consoles, volumetric clouds were a very rare occurrence in video games. Attempts to produce clouds that looked volumetric, i.e., clouds that occupy a physical space in the world, were usually restricted to flight simulators and a handful of open-world games.
The main reason for this absence is the cost of rendering such clouds, as computing the light scattering through them is an expensive operation when relying on ray marching, a form of ray tracing. A second reason is that a majority of 3D games take place directly on a surface (e.g., a terrain or sea), so they don’t need an extensive dynamic sky system that allows the camera to fly above the clouds.
As a result, many 3D applications have happily relied on static textures, such as cube maps, impostors, or simple transparent materials. These are usually rendered infinitely far away from the camera, and thus they do not feel entirely connected to the world below them, yet they can provide sufficient quality for static scenarios at a very low cost.
Now, thanks to the advances in real-time 3D rendering and increased hardware performance, a new era of dynamic skies with Volumetric Clouds has begun. This opens the door for new and improved visual experiences, as well as a wider range of gameplay scenarios, out of the box in Unity’s High Definition Render Pipeline (HDRP).
Our Volumetric Clouds are influenced by the Volume Framework’s global wind, and they can cast believable shadows onto the landscape. They interact well with Volumetric Fog and Physically Based Sky to create impressive volumetric sun shafts and jaw-dropping sunsets.
Finally, the sky reflection and ambient probe, as well as local reflection probes and planar reflections, can render these clouds. This process results in a unified cloud system that interacts seamlessly with most other HDRP systems for a very moderate cost of 2ms on a PlayStation 4 (non-Pro), and well under 0.5ms on a recent high-end GPU.
Creating the Volumetric Clouds only takes a couple of clicks. First, ensure that the active HDRP Asset and the HDRP Global Settings have “Volumetric Clouds” enabled.
Then, it is just a matter of using the Volume Framework to control the Volumetric Clouds. Like most HDRP effects, simply select your existing (global) Volume or create a new one, and assign a Volumetric Clouds component to its profile. A multitude of parameters are on offer to control the type of clouds, wind, lighting, shadows, and overall quality and cost of the clouds.
In no time, you can create many types of cloudscapes thanks to the four built-in presets in the Simple mode: Sparse, Cloudy, Overcast, and Stormy. These four weather types have been tuned to maximize the level of detail under, inside, and slightly above the clouds, up to an altitude of 10,000 m (33,000 ft).
One important setup decision relates to the Local mode of the clouds, because it determines whether you can fly into and above them, and it indirectly influences performance.
For this reason, if your project does not require the camera to fly into and above the clouds, keep the Local mode disabled. You will still get the same lighting and shadowing quality while being able to keep a lower camera far plane distance.
Alternatively, if you need to fly into and above the clouds, you can of course enable the Local mode. As with the Physically Based Sky, the Volumetric Cloud system simulates the curvature of the Earth, so that the cloud dome wraps around the planet and provides a believable connection with the land or sea on the horizon line. If you curve the cloud dome further, you can force its intersection with the world at a closer distance, which means you can lower the camera far plane distance drastically.
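To see why curving the dome matters, consider where a horizontal view ray intersects a spherical cloud shell. The following sketch (illustrative radii and altitudes; HDRP's actual parameterization may differ) shows how shrinking the effective planet radius pulls that intersection much closer to the camera:

```python
import math

def distance_to_cloud_shell(planet_radius_m, cloud_altitude_m, camera_altitude_m=0.0):
    """Distance along a horizontal view ray from the camera to a spherical
    cloud shell of radius planet_radius + cloud_altitude.

    For a camera at radius r looking along the horizon (tangent direction),
    the ray meets a shell of radius R at distance t = sqrt(R^2 - r^2).
    """
    r = planet_radius_m + camera_altitude_m
    R = planet_radius_m + cloud_altitude_m
    return math.sqrt(R * R - r * r)

# Earth-sized planet: a low cloud layer at 1,500 m is only reached ~138 km away.
earth = distance_to_cloud_shell(6_371_000, 1_500)

# Shrinking the effective planet radius curves the dome more sharply,
# bringing the intersection to ~14 km and allowing a far lower far plane.
curved = distance_to_cloud_shell(63_710, 1_500)
print(round(earth / 1000), round(curved / 1000))  # 138 14
```

The same geometry explains why a more sharply curved dome lets you reduce the camera far plane without the clouds visibly clipping at the horizon.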
The wind received particular attention, to reproduce the typical buoyancy seen in the real world. There are four wind controls. For instance, you can keep the clouds static yet introduce some movement into them via the Erosion Wind, or move the entire cloudscape via the Shape Wind. When all four are combined, you can create mesmerizing vistas.
In addition to the four presets available in the Simple mode, you can use a Custom preset, which lets you control most cloud properties with a handful of curves and sliders. This mode is incredibly powerful to create a wide variety of cloudscapes with multiple layers of clouds in a matter of seconds. The possibilities are endless.
To create sun shafts, simply toggle the Volumetric Fog property and set its Maximum Height and Volumetric Fog Distance to several thousand meters. This way, the directional light can influence the atmosphere and create beautiful sunbeams underneath the clouds.
Finally, the Manual mode lets expert users create cloud coverage textures and lookup tables (LUTs) in order to have absolute control over the position of each vertical slice of clouds defined by the LUT. For information on how each texture influences cloud rendering, see the documentation.
The cloud Volume is driven by a 2D coverage map and by altitude profiles set by a LUT that specifies properties such as density, erosion, and ambient occlusion. Two 3D noise textures are then used to erode the cloud volume and carve out distinct cloud shapes.
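As a rough illustration, the combination of a coverage value, an altitude profile, and two levels of noise erosion can be sketched as a scalar density function. The weights and thresholds below are entirely hypothetical; this is a toy model of the idea, not HDRP's shader logic:

```python
import math

def cloud_density(coverage, altitude_profile, shape_noise, erosion_noise):
    """Toy density model with all inputs in [0, 1].

    A 2D coverage value is gated by a vertical profile (from the LUT),
    then eroded by two levels of 3D noise.
    """
    # Base density: coverage shaped by the altitude profile.
    base = coverage * altitude_profile
    # Low-frequency shape noise carves out the large cloud forms...
    base = max(base - (1.0 - shape_noise) * 0.4, 0.0)
    # ...and high-frequency erosion noise eats at the edges for fine detail.
    base = max(base - (1.0 - erosion_noise) * 0.2, 0.0)
    return base

# Full coverage with no erosion stays dense; heavy erosion thins it out.
print(round(cloud_density(1.0, 1.0, 1.0, 1.0), 2))  # 1.0
print(round(cloud_density(1.0, 1.0, 0.2, 0.3), 2))  # 0.54
```

The key idea is that noise only ever subtracts from the coverage-driven base, which is why the noise textures are described as "eroding" the volume rather than adding to it.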
For its rendering, the Volumetric Cloud system in HDRP uses ray marching, a technique that steps rays through the volume from the camera towards objects and light sources. In our case, primary rays are cast towards the cloud Volume to sample the surface of the clouds; secondary rays are then cast towards the sun to shade the pixels on that surface.
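A minimal, heavily simplified sketch of this primary/secondary scheme, assuming a scalar density field and simple Beer-Lambert absorption (the step counts and extinction constant are illustrative, not HDRP's):

```python
import math

def ray_march(density_at, origin, direction, sun_dir,
              steps=64, step_size=100.0, extinction=0.01):
    """Toy cloud ray marcher returning (radiance, transmittance).

    Marches a primary ray through a density field; at each sample with
    non-zero density, a short secondary march towards the sun estimates
    how much sunlight reaches that point.
    """
    radiance, transmittance = 0.0, 1.0
    pos = list(origin)
    for _ in range(steps):
        d = density_at(pos)
        if d > 0.0:
            # Secondary ray: attenuate sunlight along sun_dir.
            light, lp = 1.0, list(pos)
            for _ in range(8):
                lp = [p + s * step_size for p, s in zip(lp, sun_dir)]
                light *= math.exp(-density_at(lp) * extinction * step_size)
            # Beer-Lambert absorption along the primary ray.
            sample_t = math.exp(-d * extinction * step_size)
            radiance += light * d * transmittance * (1.0 - sample_t)
            transmittance *= sample_t
            if transmittance < 1e-3:  # early out once the cloud is opaque
                break
        pos = [p + s * step_size for p, s in zip(pos, direction)]
    return radiance, transmittance

# A slab of uniform cloud between altitudes 1,000 m and 2,000 m, viewed
# straight up: the ray enters the slab and light is progressively absorbed.
slab = lambda p: 1.0 if 1000.0 <= p[1] <= 2000.0 else 0.0
rad, trans = ray_march(slab, [0.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 1.0, 0.0])
print(rad > 0.0, trans < 1.0)
```

The expensive part is exactly this nesting: every primary sample inside the cloud spawns its own secondary march, which is why such effects were long out of reach at real-time budgets.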
To reduce the cost further while minimizing visual degradation, this process runs at quarter resolution with temporal reprojection and accumulation. This means that the cloud system reuses samples from previous frames to build the full result. For this reason, when the camera moves at high speed and parts of the clouds without any history must be rendered, some ghosting or reconstruction artifacts might appear. Nonetheless, we provide a highly effective anti-ghosting solution that allows for very fast camera movements and very high wind speeds. In the animation below, ghosting is prevented while the clouds move at three times the speed of sound.
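The accumulation step can be sketched as an exponential blend with history rejection. The blend factor and the boolean validity test below are hypothetical simplifications of what a production reprojection pass does:

```python
def temporal_accumulate(history, current, history_valid, blend=0.05):
    """Toy temporal accumulation (illustrative, not HDRP's shader).

    Each frame, the new low-resolution result is blended into the history
    buffer. When reprojection fails (disocclusion, very fast motion), the
    history is rejected and the current sample is used directly, trading
    a briefly noisier pixel for the absence of ghosting.
    """
    if not history_valid:
        return current  # no usable history: restart accumulation
    return history * (1.0 - blend) + current * blend

# The accumulated value converges towards the new signal over many frames...
value = 0.0
for _ in range(100):
    value = temporal_accumulate(value, 1.0, history_valid=True)
print(value > 0.9)  # True

# ...but snaps immediately when the history is invalidated.
print(temporal_accumulate(0.0, 1.0, history_valid=False))  # 1.0
```

This is also why artifacts concentrate on freshly disoccluded regions: those pixels have no history to accumulate and must be rebuilt from scratch.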
Particular attention was also given to ensuring that thin geometry, such as tree leaves, does not produce any obvious artifacts, since neighboring cloud pixels are typically difficult to resolve.
We also spent a lot of time ensuring that sunrises and sunsets look believable. These times of day are a particular challenge due to the wide exposure difference between the landscape and the sky, the curvature of the Earth and its resulting shadow, as well as the complexity of the light scattering in low-sun scenarios, and the large distances rays need to travel.
Like any system, Volumetric Clouds perform at their best within certain limits. At the moment, they are highly optimized for situations when the camera is below, inside, and slightly above the cloud volume. For this reason, space and very high-altitude scenarios above ~12 km (40,000 ft) are not particularly well-suited to the current system, as the erosion tiling may become visible, as shown in the image below.
In addition, we currently support one cloud Volume only. This means that all the cloud shapes residing in said volume share the same noise, wind, and lighting properties. In the future, we will look into supporting multiple cloud Volumes to offer even more granular control on each layer of clouds you wish to create and animate. We will also offer a unified way to control the position of the Earth’s center for space scenarios.
Eventually, support for punctual lights and objects casting shadows onto the clouds will be added. This will be particularly useful for space and flight simulators when a ship and its strobe lights affect the surrounding clouds. In the meantime, Local Volumetric Fog can be used in certain static scenarios.
On the visual quality front, we will look into supporting more than two levels of noise, to provide improved quality for top-tier platforms, interactive frame rates, and offline applications. Finally, transitions between cloud presets will become easier, in order to streamline the creation of dynamic time-of-day cycles.
We cannot wait for you to use this new cloud system in your HDRP projects with Unity 2021.2. Over the past several months, we’ve worked hard to ensure that Volumetric Clouds not only look beautiful but also perform at a very low GPU cost. Toying with very high-quality ray marched clouds with a random noise generator on high-end GPUs is one thing, but creating realistic cloud shapes for high frame rate applications on mainstream hardware is a much more complex endeavor.
We hope these Volumetric Clouds will not only raise the visual quality of your projects and the dynamism of your skies, but also open the door for new types of gameplay experiences that take advantage of the changing nature of the weather. Lighting and weather in interactive applications are often seen as static components; however, treating them this way is often a missed opportunity to explore different scenarios for the audience while reusing the same environment.
For more information about the Volumetric Clouds and other rendering technologies in Unity 2021.2, such as Lens Flares and Light Anchors, you may want to watch this summer’s SIGGRAPH session. Be aware that many improvements have gone into the cloud system since this talk was published; however, it still provides a great overview of the system’s capabilities.
Pierre Yves Donzallaz (technical art manager) is an experienced technical artist with over a decade of AAA experience in the field of real-time rendering. He specializes in lighting, level beautification, tools design, and UX improvements. He holds a BSc in Computer Science from University of Fribourg.
He currently manages fellow technical artists whose mission is to improve artists’ efficiency, educate users globally, and develop new features alongside engineers and designers.
Anis Benyoub (senior graphics programmer) is currently working on extending rendering pipelines for games and real-time applications. Anis is passionate about Monte Carlo integration, physically based rendering, and real-time performance (and loves to share his knowledge with the community).
Before Unity, he worked at Pretty Simple Games as a graphics engineer, at Autodesk as a 3D R&D engineer on 3DS Max, and then as a core software engineer on the Stingray game engine. He holds an MSc in Computer Science from École Polytechnique de Montréal, with a focus on Computer Graphics, and an M.Eng degree in Computer Science from INSA Lyon.