Register at the NVIDIA website to attend the live webinar. It starts at 10:00 AM Pacific Time on Wednesday, January 27, 2021, and will last about an hour. If you are unable to attend, you will be able to watch the session on demand at a later time.
All live webinar attendees will be entered into a drawing for a chance to win one (1) NVIDIA® Quadro RTX™ 5000! However, you must register and attend the entire live webinar to be eligible.
For lighting scenarios, Ray Tracing consists of shooting rays from the camera or from surfaces towards other surfaces and lights, notably those outside the camera view, to generate the lighting. Film production and high-end visualization use Ray Tracing extensively. However, until recently, the amount of computing power required to render such images at a reasonable framerate made the technique unusable for real-time applications. As a consequence, games have relied on an alternative method for decades: rasterization. Simply put, rasterization shades the pixels on screen by figuring out which lights affect them; it doesn't involve Ray Tracing at all and suffers from several limitations due to its screen-space nature.
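To make the "shooting rays" idea concrete, here is a minimal Python sketch of the basic building block of any ray tracer: testing whether a ray hits a piece of geometry (a sphere, in this toy example). This is purely illustrative and not HDRP's implementation.

```python
import math

def ray_sphere_hit(origin, direction, center, radius):
    """Return the distance along the ray to the nearest hit, or None.

    `direction` is assumed to be normalized. Lighting effects such as
    shadows, reflections, and AO are all built by casting rays like
    this one from the camera or from surfaces into the scene.
    """
    # Vector from the ray origin to the sphere center
    oc = [o - c for o, c in zip(origin, center)]
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c  # quadratic discriminant (a == 1)
    if disc < 0.0:
        return None  # the ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 0.0 else None  # ignore hits behind the origin

# A ray shot down the z-axis hits a unit sphere centered 5 units away
print(ray_sphere_hit((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))  # 4.0
```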
Thankfully, with the democratization of hardware-accelerated Ray Tracing in the latest mainstream GPUs, Ray Tracing may soon become the new norm for generating lighting, especially on higher-end platforms. The High Definition Render Pipeline (HDRP) offers a hybrid Ray Tracing pipeline that mixes traditional rasterization and Ray Tracing techniques. It provides Ray Traced versions of common lighting effects, such as ambient occlusion (AO), reflections, global illumination (GI), subsurface scattering, and shadows.
For example, have a look at this impressive showcase of the 2019 BMW 8 Series Coupe, the fruit of a collaboration between Unity, NVIDIA, and BMW. It demonstrates how real-time Ray Tracing can deliver photorealistic results at a fraction of the time and cost of offline rendering solutions.
Ray tracing in HDRP is currently in Preview, which means it isn’t necessarily ready for production. However, you are more than welcome to experiment with it and give us feedback on the forums.
In its current version, the HDRP template only uses rasterization techniques to render the lighting, via baked lightmaps, Light Probe Groups, Reflection Probes, shadow maps, and so on. Thus, the first step of the webinar will consist of quickly converting the template to take advantage of Ray Tracing.
Later, I will present in more detail four of the main Ray Tracing effects available in HDRP: Ray Traced Ambient Occlusion, Reflections, Global Illumination, and Shadows. Finally, I will end the session with HDRP's Path Tracing, a more brute-force approach to Ray Tracing that provides even greater visual fidelity at the cost of vastly increased rendering time.
Screen-space ambient occlusion (SSAO) has been a staple of real-time rendering for games for more than a decade. It simulates the environment's diffuse occlusion to improve visual contact between objects in the world and to darken the lighting in concave areas. However, when pushed too far, this effect can produce halos around geometry and even a cartoony look. On top of that, one of the main drawbacks of this screen-space technique is its inability to generate occlusion from objects that reside outside the frame, as it only relies on the depth information available in the z-buffer. On the plus side, the effect is still great at handling micro-occlusion of small areas visible to the camera, for a relatively low performance cost.
Thankfully, with Ray Tracing, rays can be shot at surfaces beyond the camera frustum, so they are able to reach objects located outside the frame. This way, you can get great macro-occlusion from large objects located all around the camera. Although AO is technically only a rough approximation of environment lighting, it complements other lighting techniques such as lightmaps or light probes, whose limited resolution or density prevents them from capturing micro-occlusion.
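Conceptually, ray-traced AO estimates how "open" a point is by shooting rays over the hemisphere above it and counting how many are blocked. The sketch below illustrates the idea only; the `scene_blocked` callback is a hypothetical stand-in for a real ray/scene intersection query, which in HDRP runs on the GPU.

```python
import math, random

def ray_traced_ao(point, normal, scene_blocked, samples=64, max_dist=2.0):
    """Estimate ambient occlusion at `point` by shooting random rays over
    the hemisphere around `normal` and counting how many are blocked.

    `scene_blocked(origin, direction, max_dist)` stands in for a real
    ray/scene occlusion query. Returns a value in [0, 1]:
    0 = fully open to the environment, 1 = fully occluded.
    """
    blocked = 0
    for _ in range(samples):
        # Pick a uniformly random direction, flip it into the hemisphere
        d = [random.gauss(0, 1) for _ in range(3)]
        length = math.sqrt(sum(x * x for x in d))
        d = [x / length for x in d]
        if sum(a * b for a, b in zip(d, normal)) < 0:
            d = [-x for x in d]
        if scene_blocked(point, d, max_dist):
            blocked += 1
    return blocked / samples

# Flat ground with nothing above it: every hemisphere ray escapes
print(ray_traced_ao((0, 0, 0), (0, 1, 0), lambda p, d, m: False))  # 0.0
```

Because the rays are not limited to what the z-buffer can see, an occluder behind the camera would still register in `scene_blocked`, which is exactly what SSAO cannot capture.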
In a similar fashion to SSAO, screen-space reflections (SSR) can only reflect objects located in the frame: again, surfaces that aren't immediately visible to the camera cannot be reflected. For instance, when looking down at the floor, SSR is unable to provide any useful information. SSR is therefore very approximate, and the technique has many detractors, including yours truly, as good placement of static Reflection Probes can often provide more appealing and less distracting results for most static scenarios. However, one area where SSR shines, literally, is planar reflections on surfaces parallel to the view direction, such as floors, walls, and ceilings. An optimal use case for SSR is a camera whose pitch is locked, as in a racing game.
With Ray Tracing, however, we can access information that resides outside the screen and therefore offer a more exact reflection of the world, at least within a certain radius around the camera, defined by the Light Cluster and the length of the rays.
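The core of a ray-traced reflection is simply mirroring the view direction about the surface normal and casting that reflected ray back into the scene. A minimal sketch of the standard mirror-reflection formula (not HDRP-specific code):

```python
def reflect(direction, normal):
    """Mirror `direction` about a unit-length `normal`:
    r = d - 2 (d . n) n. A ray tracer then shoots this reflected ray
    into the scene to find what the surface mirrors, whether that
    geometry is on-screen or not."""
    d_dot_n = sum(d * n for d, n in zip(direction, normal))
    return tuple(d - 2.0 * d_dot_n * n for d, n in zip(direction, normal))

# A ray heading down at 45 degrees bounces off a horizontal floor
# (normal pointing up, +y): it leaves at 45 degrees going up
print(reflect((1, -1, 0), (0, 1, 0)))  # (1.0, 1.0, 0.0)
```

SSR tries to approximate the result of this reflected ray by marching through the depth buffer, which is why it fails as soon as the reflected geometry is off-screen.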
One of the most impressive features of Ray Tracing is the ability to generate real-time global illumination, that is, the simulation of indirect lighting, or simply put, light bouncing around the environment.
Typically, game engines handle indirect lighting with pre-computed (baked) techniques, such as light probes or lightmaps, which can greatly slow down iteration times for artists and designers working on the lighting.
Thankfully, HDRP offers two techniques for RTGI: Performance and Quality. The former is geared towards high-framerate scenarios in direct light, whereas the latter can provide very accurate results in more complex interiors thanks to multiple bounces and samples, albeit at a very high computational cost.
Out of the box, when using the High shadow filtering quality (PCSS), HDRP provides great-looking shadow maps that simulate the natural softness of shadows while keeping them sharp near the shadow casters, as in real life. However, when using the cheaper Medium filtering quality, results can be underwhelming, as the entire shadow map is filtered uniformly, regardless of the distance between casters and receivers.
Results can be improved dramatically with Ray Traced shadows, which shoot rays from surfaces towards the lights to figure out the amount of occlusion between them. This provides an extremely realistic approximation of the shadowing for a moderate performance cost. In addition, HDRP supports transparent shadows!
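The idea can be sketched in a few lines: shoot rays from the shaded point towards random positions on the light, and the fraction of unblocked rays gives the lighting factor. Soft penumbras that naturally widen with caster distance fall out of this for free. The `blocked` callback below is a hypothetical stand-in for the real ray/scene occlusion query.

```python
import random

def shadow_factor(point, light_center, light_radius, blocked, samples=32):
    """Estimate how lit `point` is by shooting rays towards random
    positions on an area light and counting how many get through.

    `blocked(origin, target)` stands in for a ray/scene occlusion query.
    Returns 1.0 for fully lit, 0.0 for fully in shadow; values in
    between produce physically plausible soft penumbras.
    """
    visible = 0
    for _ in range(samples):
        # Jitter the target across the light's area
        target = tuple(c + random.uniform(-light_radius, light_radius)
                       for c in light_center)
        if not blocked(point, target):
            visible += 1
    return visible / samples

# No occluders anywhere: the point is fully lit
print(shadow_factor((0, 0, 0), (0, 10, 0), 0.5, lambda p, t: False))  # 1.0
```

Contrast this with a shadow map, which stores a single depth per texel and must approximate softness through filtering after the fact.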
Finally, path tracing lets artists generate great image quality significantly faster than traditional offline renderers. Rays are shot from the camera, and whenever they hit a surface, further rays are shot towards other surfaces and lights (i.e. the Light Cluster structure). The journey of a ray between the camera and the lights is called a path, hence the name path tracing.
The advantage of path tracing over the other Ray Tracing methods mentioned above is that it provides a unified process to generate all the lighting: shadows, reflections, refraction, and global illumination. The main downsides of this technique are rendering time and noise. For the latter, however, we can accumulate samples over several seconds to achieve less noisy results.
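The accumulation trick works because each path-traced frame is an unbiased but noisy estimate of the true lighting, so averaging frames makes the noise shrink. A minimal sketch of progressive accumulation, with `random.random()` standing in for a noisy per-frame lighting estimate whose true value is 0.5:

```python
import random

def accumulate(noisy_sample, frames):
    """Progressive accumulation as used by path tracers: average one new
    noisy sample per frame so the image converges over time."""
    total = 0.0
    for _ in range(frames):
        total += noisy_sample()
    return total / frames

random.seed(0)
print(accumulate(random.random, 1))      # one frame: noisy
print(accumulate(random.random, 10000))  # many frames: converges towards 0.5
```

In practice, the accumulation resets whenever the camera or scene moves, which is why path-traced viewports look grainy at first and refine while the view is held still.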
Hopefully, after watching this upcoming webinar, you will have a much better understanding of the key Ray Tracing features available in Unity and be able to greatly increase the visual quality of your visualizations and even your real-time games.
Pierre Yves Donzallaz (Technical Art Manager, R&D, Graphics) is an experienced lighting artist with over a decade of AAA experience in the field of real-time rendering. He has a strong technical and artistic background and specializes in lighting, level beautification, UX, tools design, and workflow improvements.
He is currently a member of Unity’s R&D Graphics team, where he leads fellow technical artists whose mission is to improve artists’ efficiency, educate users globally, and develop new tools, workflows, and graphical features alongside engineers and designers.
Anis Benyoub (Senior Graphics Programmer, R&D, Graphics) is currently working on extending rendering pipelines for games and real-time applications to support real-time Ray Tracing. Anis is passionate about Monte Carlo integration, physically-based rendering, and real-time performance (and loves to share his knowledge with the community).
Before Unity, he worked at Pretty Simple Games as a graphics engineer, then at Autodesk as a 3D R&D engineer on 3ds Max and as a core software engineer on the Stingray game engine. He holds an M.Sc. in Computer Science from École Polytechnique de Montréal with a focus on Computer Graphics, and an M.Eng. in Computer Science from INSA Lyon.