The following blog post was written by Jasin Bushnaief of Umbra Software to explain the updates to occlusion culling in Unity Pro 4.3.
Unity 4.3 includes a plethora of improvements. One of the completely re-implemented subsystems is occlusion culling. Not only has the interface been simplified and the culling runtime itself revamped, but a number of new features have also been added.
In this series of three posts, I’m going to go over how the new occlusion culling system works in Unity 4.3. This first post goes through the basics of how occlusion culling is done and what the basic usage is like with the user interface. The second post focuses on best practices to get the most out of occlusion culling. The third and final post focuses on some common problem scenarios and how to resolve them.
But let’s start with some basics. Occlusion culling refers to eliminating all objects that are hidden behind other objects. This means that resources will not be wasted on hidden stuff, resulting in faster and better-looking games. In Unity, occlusion culling is performed by a middleware component called Umbra, developed by Umbra Software. The UI, from which Umbra is controlled in Unity, can be found in Window -> Occlusion Culling, under the Bake tab.
Umbra’s occlusion culling process can be roughly divided into two distinct stages. In the editor, Umbra processes the game scene so that visibility queries can be performed in the game runtime, in the player. So first, Umbra needs to take the game scene as its input and bake it into a lightweight data structure. During the bake, Umbra first voxelizes the scene, then groups the voxels into cells and connects these cells with portals. This data, in addition to a few other important bits, is referred to as occlusion data in Unity.
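As a rough illustration of the two bake steps above, here is a toy sketch that voxelizes a scene of axis-aligned occluder boxes, then flood-fills the remaining empty voxels into connected cells. All names and data structures are hypothetical; real Umbra works on arbitrary triangle geometry and additionally extracts the portals between neighboring cells.

```python
# Toy model of the bake: voxelize occluders, then group empty voxels
# into connected "cells". Illustrative only, not Umbra's algorithm.
from collections import deque

def voxelize(occluder_boxes, grid_size):
    """Mark every voxel overlapped by an occluder box as solid."""
    solid = set()
    for (x0, y0, z0, x1, y1, z1) in occluder_boxes:
        for x in range(max(0, x0), min(grid_size, x1)):
            for y in range(max(0, y0), min(grid_size, y1)):
                for z in range(max(0, z0), min(grid_size, z1)):
                    solid.add((x, y, z))
    return solid

def build_cells(solid, grid_size):
    """Flood-fill connected empty voxels into numbered cells."""
    cell_of = {}
    next_cell = 0
    for start in ((x, y, z) for x in range(grid_size)
                  for y in range(grid_size) for z in range(grid_size)):
        if start in solid or start in cell_of:
            continue
        queue = deque([start])
        cell_of[start] = next_cell
        while queue:
            x, y, z = queue.popleft()
            for n in ((x + 1, y, z), (x - 1, y, z), (x, y + 1, z),
                      (x, y - 1, z), (x, y, z + 1), (x, y, z - 1)):
                if all(0 <= c < grid_size for c in n) \
                        and n not in solid and n not in cell_of:
                    cell_of[n] = next_cell
                    queue.append(n)
        next_cell += 1
    return cell_of, next_cell
```

For example, a solid wall spanning one horizontal slice of the grid splits the empty space into two cells, one on each side of the wall.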
In the runtime, Umbra then performs software portal rasterization into a depth buffer, against which object visibility can be tested. In practice, Unity gives Umbra a camera position, and Umbra gives back a list of visible objects. The visibility queries are always conservative: an object that is actually visible is never reported as hidden (no false negatives). On the other hand, some objects may be deemed visible by Umbra even though they are actually occluded (false positives).
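Conceptually, the conservative test works like this toy sketch (a simplification, not Umbra's actual implementation): an object is reported hidden only if, at every pixel of its screen-space bounds, the rasterized occluder depth is closer than the object's nearest point.

```python
# Toy model of a conservative depth-buffer visibility query.
# Illustrative only; Umbra's real query is far more elaborate.
def is_visible(object_rect, object_min_depth, depth_buffer):
    """object_rect: (x0, y0, x1, y1) pixel bounds of the object's
    screen-space AABB; depth_buffer[y][x] holds occluder depth."""
    x0, y0, x1, y1 = object_rect
    for y in range(y0, y1):
        for x in range(x0, x1):
            # If any covered pixel's occluder depth is at or beyond
            # the object's nearest point, the object might be visible.
            if depth_buffer[y][x] >= object_min_depth:
                return True
    return False  # every covered pixel is occluded by nearer geometry
```

Note the asymmetry that makes the test conservative: a single uncertain pixel is enough to report the object visible, while reporting it hidden requires full occlusion.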
It’s important to realize that, while this system appears similar to what was shipped with previous Unity versions, the entire system has been basically rewritten. A lot has changed for the better, both internally and externally!
How to Use Umbra
There are obviously a few considerations in getting the best out of occlusion culling. Ideally, you’d want the least conservative result as fast as possible. There are, however, tradeoffs involved. The more accurate (i.e. the least conservative) results you want, the higher-resolution data you need to generate. However, higher-resolution data is slower to traverse in the runtime, yielding slower occlusion culling. If occlusion culling requires more frame time than it saves by culling, it obviously doesn’t make a whole lot of sense. On the other hand, very quick culling isn’t of much help if only a few objects are culled. So it’s a balancing act.
The way Umbra lets you control this balance is by having you define a couple of bake parameters. The parameters determine what type of input the bake process should expect and what type of data is generated. In the runtime, using Umbra is as simple as it gets. If you’ve baked occlusion data and the camera has occlusion culling enabled in the Inspector, Unity will use Umbra automatically.
The input is controlled using the smallest hole parameter. When voxelizing the occluder geometry, smallest hole maps almost directly to the voxel size. This means that if your geometry contains intentional holes, gaps or cracks that you wish to see through, using a smallest hole smaller than these is a good idea. On the other hand, a lot of the time the geometry contains lots of unintentional cracks that you do not wish to see through. A reasonable voxel resolution will patch these up. It may help to think about smallest hole as the “input resolution” of the bake.
Note that setting smallest hole to a ridiculously small value means that baking will be unacceptably slow and/or take up a monumental amount of memory in the editor. In some rare cases, it may even cause the bake to fail due to insufficient memory. Then again, while using a larger value will be faster and more memory-friendly, it may cause Umbra to not see through things like grates or fences. So bigger isn’t always better either. In general, a smallest hole as large as possible without visible errors is desirable. In practice, we’ve found that values between 5 cm and 50 cm work fairly well for most games where the scale is “human-like”. The default value in Unity is 25 cm, and it’s a good starting point.
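To see why small values get expensive so quickly, consider a back-of-the-envelope estimate (purely illustrative; Umbra's actual constants are not public): the voxel count grows with the cube of the resolution, so halving smallest hole multiplies the number of voxels by eight.

```python
# Rough estimate of how smallest hole drives the bake's voxel count
# (and hence bake time and editor memory). The cubic scaling is the
# point; the real numbers inside Umbra differ.
def voxel_count(scene_extent_m, smallest_hole_m):
    per_axis = scene_extent_m / smallest_hole_m
    return per_axis ** 3

# Halving smallest hole (e.g. 25 cm -> 12.5 cm) costs 8x the voxels:
ratio = voxel_count(100.0, 0.125) / voxel_count(100.0, 0.25)
```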
While smallest hole mostly deals with what type of input geometry you have, smallest occluder determines what kind of output data is produced. In essence, you can think of smallest occluder as the output resolution of the data. The larger the value, the faster it is to perform occlusion culling in the runtime, but at the cost of a more conservative result (more false positives). The smaller the value, the more accurate the results, but at the cost of more CPU time. Obviously, higher-resolution data will mean a larger occlusion data size as well.
So as the name implies, a small value means that very fine features are captured in the occlusion data. Under the hood, this directly maps to the size of the cells Umbra creates. Lots of small cells mean lots of small portals between them, and rasterizing a large number of small portals is naturally more expensive than rasterizing a few large ones.
The effects of changing smallest occluder can be seen in the picture below. Note how the depth buffer, which is essentially what Umbra sees, loses detail as smallest occluder increases.
In most games, keeping smallest occluder slightly larger than the player, so around a few meters, is a good default. So anywhere between 2 and 6 meters may make sense if your game’s scale isn’t microscopic or galactic. The default value in Unity is 5 meters.
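The runtime side of this tradeoff can be modeled the same way as the bake side (again purely illustrative; the linear portals-per-cell factor below is a simplification, not Umbra's actual cost model): cell count grows cubically as smallest occluder shrinks, and per-frame rasterization cost tracks the portal count.

```python
# Toy model of the smallest occluder tradeoff: smaller cells mean
# more cells, more portals, and thus more runtime rasterization work.
def cell_count(scene_extent_m, smallest_occluder_m):
    per_axis = scene_extent_m / smallest_occluder_m
    return per_axis ** 3

def approx_portal_count(scene_extent_m, smallest_occluder_m):
    # In a dense grid, each cell contributes roughly three interior
    # faces, so portal count scales linearly with cell count.
    return 3 * cell_count(scene_extent_m, smallest_occluder_m)
```

For instance, halving smallest occluder from the 5 m default to 2.5 m multiplies this portal estimate by eight, which is why very small values quickly eat into the frame time that culling is supposed to save.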
Perhaps the most difficult parameter to grasp is called backface threshold. While in many cases you don’t really need to change it, there are some situations in which it may come in handy to understand how it affects the generated data.
First, it’s important to note that the parameter exists only for a single purpose: occlusion data size optimization. This means that if your occlusion data size is OK, you should probably just disregard backface threshold altogether. Second, the value is interpreted as a percentage, so a value of 90 means 90% and so on.
OK so what does backface threshold actually do then? Well, imagine a typical scene that consists mostly of solid objects. Furthermore, there may be a terrain mesh whose normal points upwards. Given such a scene, where do you want your camera to be? Well, certainly not underneath the terrain, that’s for sure. Also, you probably don’t want your camera to be inside solid objects either. (Your collision detection normally takes care of that.) These invalid locations are also ones from which you tend to “see” mostly back-facing triangles (although they may of course get backface-culled). So in many cases it’s safe to assume that any location in the scene, from which the camera sees a lot of back-facing triangles, is an “invalid” one, meaning that the in-game camera will never end up in those locations.
The backface threshold parameter helps you take advantage of this fact. By defining a limit of how much back-facing geometry can be seen from any valid camera location, Umbra is able to strip away all locations from the data that exceed this threshold. How this works in practice is that Umbra performs random sampling in all cells (described earlier) by shooting out rays, then sees how many of those rays hit back-facing triangles. If the threshold is exceeded, the cell can be dropped from the data. It’s important to note that only occluders contribute to the backface test; the facing of occludees doesn’t bear any relevance to it. A value of 100 disables the backface test altogether.
So, if you define the backface threshold as 70, for instance, to Umbra this means that all locations in the scene, from which over 70% of the visible occluder geometry doesn’t face the camera, can be stripped away from the occlusion data, because the camera will never end up there in reality. There’s naturally no need to be able to perform occlusion culling correctly from underneath the terrain, for instance, as the camera won’t be there anyway. In some cases, this may yield pretty significant savings in data size.
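The sampling test described above can be sketched as follows (a hypothetical toy model; the ray casting itself and Umbra's actual heuristics are omitted). A ray hit is back-facing when the ray direction and the triangle's normal point the same way, i.e. the triangle is seen from behind.

```python
# Toy model of the backface threshold test: strip a cell when too
# large a share of sampled occluder hits are back-facing.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cell_is_stripped(sample_hits, threshold_percent):
    """sample_hits: list of (ray_dir, hit_normal) pairs for sample
    rays that hit occluder geometry inside a cell."""
    if not sample_hits:
        return False
    backface = sum(1 for ray_dir, normal in sample_hits
                   if dot(ray_dir, normal) > 0)
    percent = 100.0 * backface / len(sample_hits)
    # A threshold of 100 disables the test altogether.
    return threshold_percent < 100 and percent > threshold_percent
```

From a cell underneath a terrain whose normals point up, upward rays hit mostly back-faces, so such a cell is stripped at, say, threshold 70 but kept when the test is disabled at 100.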
It’s important to stress that stripping away these locations from the occlusion data means that occlusion culling is undefined in these locations. “Undefined”, in this context, means that the results may be correct, incorrect (pretty much random) or return an error. In the case of an error, all objects are simply frustum culled.
Of course, in some cases there just happens to be some amount of back-facing geometry in valid camera locations too. There may be a one-sided mesh that has been, possibly erroneously, tagged as an occluder. If it’s a large one, it may cause the backface test to trigger in nearby areas, resulting in culling artifacts (i.e. errors). This is why the default value of backface threshold in Unity is 100, meaning the feature is disabled by default.
Feel free to experiment with the parameter. Try reducing the value to 90, which should drop a lot of data underneath terrains, for example. See whether it has any noticeable effect on the occlusion data size. You can go even lower if you want; just remember to do so at your own risk. If you start running into rendering artifacts, increase the value back to 100 and see if that fixes the problems.
To be continued…
In the next post, I’ll go into some best practices and recommendations for how to get optimal results out of occlusion culling. Please visit www.umbrasoftware.com for more information about Umbra.