Get a behind-the-scenes look at a project made with Unity from Varjo, who used photogrammetry and dynamic lighting to create a realistic and lifelike environment in virtual reality (VR).
The applications of photogrammetry – the process of using multiple photos of real-world objects or spaces to author digital assets – run the gamut. Photogrammetry has gained traction not only in the gaming world but also in the industrial market.
For instance, point clouds generated by photogrammetry have become integral to architecture, engineering, and construction (AEC) workflows. And across automotive, transportation, and manufacturing, capturing a physical prototype via photogrammetry and comparing it to its digital CAD model ensures vision matches reality.
To better simulate real-world environments and showcase the potential of photogrammetry for professional use, the Varjo team recently completed a photogrammetric scan of the largest cemetery in Japan and showed it as a digital twin in VR. We invited them to share in their own words how they tackled this ambitious project.
With Varjo VR-1, shown above, exploring the finest details of buildings, construction sites, and other spaces is possible for the first time in human-eye-resolution VR. This 20/20 resolution expands the industrial use cases of VR photogrammetry.
To illustrate the potential of dynamic, human-eye resolution VR photogrammetry, we at Varjo created a dynamic demo of one of Japan’s holiest places, the Okunoin Cemetery at Mount Koya. In this article, we explain how it was done.
This section was written by Jani Ylinen, 3D Photogrammetry Specialist at Varjo.
Photogrammetry starts with choosing the proper capture location or target object. Not all places or objects are suitable for photogrammetry capture. We chose to do a capture from an old cemetery in Mount Koya in Japan because we wanted to do something culturally significant in addition to having lots of details to explore in the demo. Since this was an outdoor capture, the conditions were very challenging to control. But here at Varjo we like challenges.
The key challenges in this capture were:
When taking the photos for a photogrammetry scene, a general rule is that each picture should overlap its neighbors by at least 30%. The main goal is to photograph the target from as many angles as possible while keeping the images overlapping.
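The overlap rule also gives a rough way to budget a shoot. As a sketch (the frame width and path length below are hypothetical, not from the actual Koyasan capture), the number of photos needed to cover a strip grows as the overlap requirement grows:

```python
import math

def photos_needed(strip_length_m, frame_width_m, overlap=0.30):
    """Estimate how many photos cover a strip when each frame
    overlaps its neighbor by `overlap` (0.30 = 30%)."""
    if strip_length_m <= frame_width_m:
        return 1
    step = frame_width_m * (1.0 - overlap)  # new ground covered per photo
    return 1 + math.ceil((strip_length_m - frame_width_m) / step)

# Hypothetical numbers: a 50 m path, each photo covering ~2 m of ground.
print(photos_needed(50, 2.0))               # 30% overlap
print(photos_needed(50, 2.0, overlap=0.60)) # more overlap, more photos
```

Multiply by several passes at different angles and heights, and a photo count in the thousands, as in this capture, follows quickly.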
The area captured in Koyasan was scanned much as one would scan a room. For this scene, about 2,500 photographs were taken.
This section was written by Juhani Karlsson, Senior 3D Artist at Varjo and a former Visual Effects Artist at Unity.
Photogrammetry delivers realistic immersion, but its typically static lighting narrows the realistic use cases. We wanted to use dynamic lighting to simulate a real-world environment. Unity provides a great platform for constructing and rendering highly detailed scenes, which made it easy to automate the workflow.
While shooting the site, file transfers were made continuously so we could save time on the 3D reconstruction. First, we used software called RealityCapture to build a 3D scene from the photographs.
The 3D scene was exported from RealityCapture as a single 10-million-polygon mesh with a set of 98 8K textures.
In Houdini, the mesh was run through a Voronoi Fracture node, which splits the mesh into smaller, more manageable pieces. Different levels of detail (LOD) were then generated with shared UVs. This was done to avoid texture popping between LOD levels.
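Inside Houdini this is a single Voronoi Fracture SOP, but the core idea is simple: scatter seed points over the model and assign every element to its nearest seed, so each cell becomes one piece. A toy sketch of that assignment (with made-up points and seeds, not the production setup):

```python
import random

def voronoi_pieces(points, seeds):
    """Group points by nearest seed -- the core idea behind a Voronoi
    fracture: every element lands in the cell of its closest seed."""
    pieces = {i: [] for i in range(len(seeds))}
    for p in points:
        nearest = min(range(len(seeds)),
                      key=lambda i: sum((a - b) ** 2
                                        for a, b in zip(p, seeds[i])))
        pieces[nearest].append(p)
    return pieces

random.seed(0)
pts = [(random.random(), random.random(), random.random())
       for _ in range(1000)]
seeds = [(0.25, 0.5, 0.5), (0.75, 0.5, 0.5)]  # two cells, split at x = 0.5
pieces = voronoi_pieces(pts, seeds)
print([len(v) for v in pieces.values()])
```

In production the same partition is applied to mesh primitives rather than loose points, and each resulting piece gets its own LODs and UV island.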
That way, the textures were small enough for Unity to chew, and we could get Umbra occlusion culling working. Generating UVs was also lighter when the pieces were smaller.
A shader was created to bake out the different textures. Unity's De-Lighting tool requires at least albedo, ambient occlusion, normal, bent-normal, and position maps. Most of these buffers are straightforward to bake out of the box, but bent normals are less obvious. Luckily, a bent normal is just the average direction of the occlusion rays that miss geometry, and VEX has a simple occlusion() function that can output bent normals.
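The bent-normal idea is easy to see outside Houdini too. Here is a minimal sketch (the sample counts, the +Z surface normal, and the "wall on the +X side" occluder are all hypothetical): cast rays over the hemisphere, keep only the ones that miss geometry, and normalize their average.

```python
import math
import random

def bent_normal(sample_dirs, is_occluded):
    """Average the hemisphere ray directions that miss geometry,
    then normalize: that average is the bent normal."""
    free = [d for d in sample_dirs if not is_occluded(d)]
    if not free:
        return (0.0, 0.0, 0.0)
    sx = sum(d[0] for d in free)
    sy = sum(d[1] for d in free)
    sz = sum(d[2] for d in free)
    n = math.sqrt(sx * sx + sy * sy + sz * sz)
    return (sx / n, sy / n, sz / n)

# Hypothetical setup: surface normal +Z, a wall occluding every ray with x > 0.
random.seed(1)
dirs = []
while len(dirs) < 500:
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r2 = x * x + y * y + z * z
    if 0 < r2 <= 1 and z > 0:  # rejection-sample the upper hemisphere
        r = math.sqrt(r2)
        dirs.append((x / r, y / r, z / r))

bn = bent_normal(dirs, lambda d: d[0] > 0)  # wall on the +X side
print(bn)  # leans away from the occluder, toward -X
```

The result still points "up" but tilts away from the wall, which is exactly the direction an ambient-lighting lookup should favor.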
We created a Python script to automatically run the textures through the batch script provided by the Unity De-Lighting tool.
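Our actual script isn't reproduced here, but a minimal sketch of that kind of batch driver could look like the following. The `delight.bat` path and the `*_albedo.exr` naming convention are assumptions for illustration, not the real tool layout:

```python
import subprocess
import tempfile
from pathlib import Path

def run_delighting(texture_dir, cmd, dry_run=True):
    """Queue every baked albedo texture for the De-Lighting batch
    script (hypothetical name), returning the command lines built."""
    jobs = []
    for tex in sorted(Path(texture_dir).glob("*_albedo.exr")):
        args = [str(cmd), str(tex)]
        jobs.append(args)
        if not dry_run:
            subprocess.run(args, check=True)  # stop the batch on failure
    return jobs

# Dry run over a throwaway folder with stand-in texture names.
with tempfile.TemporaryDirectory() as tmp:
    for name in ("piece_001_albedo.exr", "piece_002_albedo.exr"):
        (Path(tmp) / name).touch()
    jobs = run_delighting(tmp, "DelightingTool/delight.bat")
print(len(jobs), "texture sets queued")
```

Driving the tool this way means the whole texture set can be reprocessed unattended whenever the bake changes.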
If the scan has a lot of color variation, the De-Lighting tool has trouble estimating the environment probe. We therefore settled on a hybrid approach, blending the automatic De-Lighting output with traditional image-based lighting shadow removal.
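Conceptually, that blend is a per-pixel linear interpolation between the two de-lit results, weighted by how much the automatic result can be trusted in that region. A toy sketch with made-up pixel values (the real blend ran on full texture maps):

```python
def blend_delight(auto_px, manual_px, mask):
    """Per-pixel mix of the automatic De-Lighting result and a
    traditional shadow-removal pass; mask=1.0 trusts the automatic one."""
    return [tuple(m * a + (1.0 - m) * b for a, b in zip(pa, pb))
            for pa, pb, m in zip(auto_px, manual_px, mask)]

# Tiny 3-pixel example with hypothetical RGB values.
auto_result   = [(0.8, 0.8, 0.8), (0.2, 0.2, 0.2), (0.5, 0.4, 0.3)]
manual_result = [(0.6, 0.6, 0.6), (0.4, 0.4, 0.4), (0.5, 0.5, 0.5)]
mask          = [1.0, 0.0, 0.5]  # trust auto, trust manual, 50/50
out = blend_delight(auto_result, manual_result, mask)
print(out)
```

In practice the mask would come from wherever the probe estimate is unreliable, e.g. regions of strong albedo variation.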
A Unity asset post-processing script was made to import the processed models; it handled material creation and texture assignment. A total of 128 4K textures were processed, baked, and de-lighted.
Once the scene was imported, it was just a matter of dragging the VarjoUser prefab into the scene. Instantly, the scene was viewable with VR-1, and we could start tweaking it to match our needs.
The Unity asset Enviro was used for the day-night cycle, and real-time global illumination was baked into the scene. The generated mesh UVs were reused for global illumination to avoid long precompute times, and the settings were chosen so that the lightmapper would do minimal work on the UVs. This can be done by enabling UV optimization on the meshes and adjusting the related settings.