
From production to participation: The story of Drop in the Ocean

October 21, 2022 in Games | 17 min. read

Created in partnership with Conservation International and developed with Unity by Vision3, Drop in the Ocean is a 10-minute social VR adventure – narrated by Philippe and Ashlan Cousteau – where audiences hitch a ride on a jellyfish and encounter the mysteries of the deep. Most importantly, participants experience the plastic pollution crisis from the viewpoint of sea life.

Despite the huge technological challenges of developing this project as a social VR experience, it was very important to our team that audience members were depicted as full-body avatars. When diving into Drop in the Ocean, up to four participants team up to swipe away at the plastic debris that surrounds them, gaining new insights into the threats facing our oceans and their role in becoming part of the solution. It is this message that endeared Drop in the Ocean to Unity’s Environment and Sustainability grant program, which awarded a grant to our project in 2021.

In this article, I’ll dive into the development process for Drop in the Ocean and how our team at Vision3 used Unity, as well as share what the Environment and Sustainability grant is helping us accomplish.

Behind the experience

Something magical happens when audiences know they are interacting with real people within an immersive experience. This magic is why social VR – and the technology that makes it possible – has the fantastic ability to take people to impossible places, allowing creators to craft unique stories.

Being able to share moments like this with friends and physically react to their actions in real time is a very special element of immersive entertainment. At the start of projects like Drop in the Ocean, the Vision3 team is constantly exploring ways to make the experience more rewarding for the audience.

Drop in the Ocean first launched in 2019 as a PC-VR experience, requiring a dedicated, high-end workstation for each participant over a tethered cable connection. As the project evolved, a bespoke RGB camera-based motion-capture system was developed to allow audiences in the experience to see each other as full-body avatars. This camera system and high-spec local network were controlled by a separate Linux-based PC.

The motion-capture system allowed unprecedented freedom of movement through markerless tracking, but it was limited by the state of the art at the time. It also relied on fixed lighting conditions that were often incompatible with the venues we were targeting for distribution.

In short, our team had built an extremely innovative experience that allowed audiences to interact with each other as fully realized avatars – but at the expense of huge equipment requirements and limitations on where and how the experience could be staged.

Shark still from Drop in the Ocean

Three years later, we set ourselves a very ambitious challenge: How could we retain the visual fidelity, scientifically accurate marine species models, and full-body avatars while creating a networked, location-based experience on mobile VR devices that fits in a single travel case?

So, let’s dive in…

Dynamic resolution and optimizing shaders

While porting the project to standalone devices promised lower costs and larger audiences, we needed to preserve the visual quality of the PC-VR version as much as possible.

The original version of Drop in the Ocean featured almost everything that presents challenges for mobile development: multiple translucent layers on jellyfish models, extremely high poly counts on large megafauna species, massive object counts in several scenes, a fixed timeline build, and heavy physics on object interactions. VFX Developer Conrad Hughes worked as part of the porting team with a mind to keeping as much visual fidelity as possible while still getting the project running in frame. The first step was to switch Drop in the Ocean from the High Definition Render Pipeline (HDRP) to the Universal Render Pipeline (URP). While URP sacrifices some of the quality we had in the PC version, it runs massively faster and is essentially what made running on standalone devices possible.

The goal was to maintain as much close-up fidelity as possible, while still preserving the sense of being under the ocean surface. To this end, we utilized global performance boosters like fixed foveated rendering (FFR) to reduce rendering resolution progressively away from the center of the device screen. Some devices have an extremely high-resolution pair of screens – we used Multiview to instance the second eye, as is standard on VR projects – which means that while the display has an incredible capacity for great images, the hardware can struggle to push all those pixels. FFR helps a lot by keeping fidelity and quality on what you're looking at while rendering the periphery more cheaply.
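
As an illustration, here's a minimal sketch of requesting fixed foveated rendering at runtime, assuming the Oculus XR Plugin's Unity.XR.Oculus.Utils API; the exact call and supported levels vary by device and SDK version, and this isn't necessarily how our project configures it:

```csharp
using UnityEngine;
using Unity.XR.Oculus; // Oculus XR Plugin (assumed; API names vary by SDK version)

public class FoveationSetup : MonoBehaviour
{
    void Start()
    {
        // Request a fixed-foveation level so the periphery renders at reduced
        // resolution while the center of the screen stays sharp.
        // Levels typically range from 0 (off) to 4 (highest savings).
        bool applied = Utils.SetFoveationLevel(3);
        if (!applied)
        {
            Debug.LogWarning("Fixed foveated rendering is not supported on this device.");
        }
    }
}
```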

Unity Editor screen capture from the making of Drop in the Ocean.

We also used dynamic resolution scaling throughout the project. When there is a lot on screen, the render scale drops, and when there are only a few hero objects on screen, it goes up. This way, when the viewer focuses on one thing, they get that thing at 100% quality. But when they focus on a large group of objects (e.g., hundreds of jellyfish), they get a real sense of presence and the feeling of being in a big crowd.
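
For reference, here's a minimal sketch of nudging the render scale up or down at runtime through URP's pipeline asset. The thresholds and the simple "crowded scene" trigger are illustrative rather than our production logic:

```csharp
using UnityEngine;
using UnityEngine.Rendering;
using UnityEngine.Rendering.Universal;

// Drops URP's render scale in busy scenes and raises it again for hero moments.
public class DynamicRenderScale : MonoBehaviour
{
    [Range(0.5f, 1.0f)] public float crowdedScale = 0.7f; // e.g., hundreds of jellyfish on screen
    [Range(0.5f, 1.0f)] public float heroScale = 1.0f;    // a few close-up hero objects

    public void SetCrowded(bool crowded)
    {
        // Fetch the active URP asset and adjust its render scale.
        var urp = GraphicsSettings.currentRenderPipeline as UniversalRenderPipelineAsset;
        if (urp != null)
        {
            urp.renderScale = crowded ? crowdedScale : heroScale;
        }
    }
}
```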

In order to draw things like the player hands, player avatars, and a few other VFX objects, we needed access to VFX Graph. This was, in part, because the assets were originally authored using VFX Graph and relied on a few bespoke scripts; rebuilding all the VFX would have been a huge time cost. To get VFX Graph working on OpenGL ES 3.1, we had to disable cubemap arrays in the package (a feature unsupported by GLES 3.1). Essentially, we backported the VFX Graph package from Unity 2022 to 2021 LTS and adjusted a few things to get it working. Once it did, it proved very flexible for our needs. A few of the more recently implemented features removed the need for several scripts and custom-made bits and pieces, which was helpful.

Beyond that, our developers disabled shadows and amended lighting settings to keep dynamic lighting (without too many lights affecting one surface at a time) in order to keep things cheap. We also staggered spawn times to prevent GPU hitching and went through the list of standard practices for getting games running in frame on the hardware – reduce texture size, reduce shader complexity and expensive ALU math operations, reduce texture taps, pack textures where possible, use vertex animation in place of skinned meshes, and, where possible, keep things simple. A lot of techniques from the early console era were employed to get everything running on standalone devices.
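
As an example of the spawn staggering mentioned above, here's a small sketch that spreads instantiation over several frames so a burst of spawns doesn't cause a single-frame hitch. The prefab, counts, and placement are placeholders:

```csharp
using System.Collections;
using UnityEngine;

public class StaggeredSpawner : MonoBehaviour
{
    public GameObject prefab;   // e.g., a jellyfish prefab
    public int totalCount = 200;
    public int perFrame = 10;   // how many instances to create each frame

    IEnumerator Start()
    {
        for (int spawned = 0; spawned < totalCount; spawned += perFrame)
        {
            int batch = Mathf.Min(perFrame, totalCount - spawned);
            for (int i = 0; i < batch; i++)
            {
                Instantiate(prefab,
                            transform.position + Random.insideUnitSphere * 5f,
                            Random.rotation,
                            transform);
            }
            yield return null; // spread the instantiation cost across frames
        }
    }
}
```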

Overall, we tried to use expensive things sparingly. When we did use them, we made sure they were center stage so that no performance overhead was wasted.

Building a new approach to the network

Moving to standalone devices changed how the network was configured and dramatically altered the moderation controls. Before, the dedicated PCs were constantly connected over ethernet; now there was the challenge of monitoring connected devices over a Wi-Fi 6 network (as well as remotely observing device battery life and ensuring that the experience launched simultaneously and successfully on all devices).

In deployment, Drop in the Ocean needed to be as robust as other visitor attractions – able to seamlessly usher in a constant stream of group audiences without interruption or recalibration. Lead Developer Gokhan Sivrikaya began by building a single bespoke solution that supports multiple types of devices and apps serving different purposes.

This gave us the opportunity to add a moderator app that runs on a tablet, so that admins can walk around the area while seeing the status of every device. It also allowed us to easily add other Wi-Fi devices, such as a wall-mounted timer or another standalone device that gives haptic feedback during in-game interactions.
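
To illustrate the kind of status a headset might report to a moderator tablet, here's a hedged sketch built on Unity's SystemInfo battery APIs. The DeviceStatus payload and the SendToModerator() transport are hypothetical stand-ins for the actual networking layer, which isn't detailed here:

```csharp
using System.Collections;
using UnityEngine;

public class DeviceStatusReporter : MonoBehaviour
{
    [System.Serializable]
    public struct DeviceStatus
    {
        public string deviceName;
        public float batteryLevel;      // 0..1, or -1 if unavailable
        public BatteryStatus charging;
        public bool experienceRunning;
    }

    public float reportInterval = 5f;

    IEnumerator Start()
    {
        var wait = new WaitForSeconds(reportInterval);
        while (true)
        {
            // Gather device health for the moderator view.
            var status = new DeviceStatus
            {
                deviceName = SystemInfo.deviceName,
                batteryLevel = SystemInfo.batteryLevel,
                charging = SystemInfo.batteryStatus,
                experienceRunning = Application.isPlaying
            };
            SendToModerator(status);
            yield return wait;
        }
    }

    void SendToModerator(DeviceStatus status)
    {
        // Placeholder transport: serialize and send over the venue's Wi-Fi 6
        // network, e.g., via a lightweight message or a netcode RPC.
        Debug.Log(JsonUtility.ToJson(status));
    }
}
```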

Using Unity helped us share the same codebase across different operating systems and devices with minimal effort.
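
A small sketch of what that can look like in practice – one script serving the headset build, the moderator tablet, and editor testing via platform defines (the specific settings are illustrative, not our actual configuration):

```csharp
using UnityEngine;

public class PlatformSetup : MonoBehaviour
{
    void Awake()
    {
#if UNITY_ANDROID && !UNITY_EDITOR
        // Standalone (Android-based) headset build.
        Debug.Log("Running headset configuration");
#elif UNITY_IOS
        // Moderator tablet build (assumed iOS; the article doesn't specify the tablet OS).
        Screen.sleepTimeout = SleepTimeout.NeverSleep;
#else
        // Editor / desktop testing path.
        Debug.Log("Running editor/desktop configuration");
#endif
    }
}
```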

View of particle people inside the Drop in the Ocean experience

Using spatial anchors to determine player positions and interaction with inverse kinematics

With the motion-capture system removed, a significant challenge remained: developing full-body avatars that sync with other players' movements and positions. Our team used inverse kinematics to simulate the bodies of players in the game, and the in-game positions of players needed to be calibrated against the real-world positions of the people.
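
For context, here's a minimal sketch of hand-level IK using Unity's built-in Animator IK pass (it requires the IK Pass option enabled on the animator layer). The tracked-transform references are illustrative, and this isn't necessarily the full-body solver used in the project:

```csharp
using UnityEngine;

[RequireComponent(typeof(Animator))]
public class AvatarHandIK : MonoBehaviour
{
    public Transform leftController;
    public Transform rightController;
    Animator animator;

    void Awake() => animator = GetComponent<Animator>();

    void OnAnimatorIK(int layerIndex)
    {
        // Pull the avatar's hands toward the tracked controller poses.
        animator.SetIKPositionWeight(AvatarIKGoal.LeftHand, 1f);
        animator.SetIKRotationWeight(AvatarIKGoal.LeftHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.LeftHand, leftController.position);
        animator.SetIKRotation(AvatarIKGoal.LeftHand, leftController.rotation);

        animator.SetIKPositionWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKRotationWeight(AvatarIKGoal.RightHand, 1f);
        animator.SetIKPosition(AvatarIKGoal.RightHand, rightController.position);
        animator.SetIKRotation(AvatarIKGoal.RightHand, rightController.rotation);
    }
}
```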

When a device starts, it establishes its own heading (forward vector), and the player's initial position can be anywhere within that coordinate system. In normal multiplayer games, where other players are somewhere far away – maybe at their homes – spawn points can be used to place players at designated positions in the virtual world. For this experience, the virtual world and the real world (and the positions of the people in both) must match, so that players can interact freely without bumping into each other in the real world.

To achieve this, we used spatial anchors. Every device places anchors at the same real-world coordinates, and each device sends its player's position relative to the anchors to the server. The server shares those relative positions back to all devices, and each device reconstructs the other players from its own anchors in its virtual coordinate system. Explained like this, it sounds complicated, but Unity's core libraries really helped us solve this problem.
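
Here's a minimal sketch of that anchor-relative scheme using plain Transform math. Networking and anchor creation are omitted, and the names are illustrative:

```csharp
using UnityEngine;

// Each device expresses its player's head pose relative to a shared spatial
// anchor, sends that to the server, and reconstructs remote players against
// its own copy of the same anchor.
public class AnchorRelativeSync : MonoBehaviour
{
    public Transform sharedAnchor;  // placed at the same real-world spot on every device
    public Transform localHead;     // the local player's tracked head

    // What this device would send to the server.
    public Vector3 LocalHeadRelativeToAnchor()
    {
        return sharedAnchor.InverseTransformPoint(localHead.position);
    }

    // How this device places a remote player received from the server.
    public Vector3 RemoteHeadInLocalWorld(Vector3 relativePosition)
    {
        return sharedAnchor.TransformPoint(relativePosition);
    }
}
```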

After the grant

Picture of Drop in the Ocean's display at the Tribeca Film Festival

With the experience complete, the next challenge for us is to expand access to the project so that even more diverse audiences have the chance to be part of social VR experiences. This requires us to constantly push the technology and find solutions to make the activations nimbler and more practical, while at the same time looking at future content we develop and building on the solid foundation we have in place to find entirely new ways to bring audiences together.

Because of the Unity grant that our team received, we were able to take Drop in the Ocean on the road to be experienced by more than just our local community. From its original conception, we always targeted two ideal audiences for the project: the wider general audience that will benefit from understanding more about our oceans in a tangible way, and key decision makers within governments and business.

To continue reaching the public more broadly, we are looking at ways to use our grant funds to stage Drop in the Ocean at cultural institutions and locations with high footfall – working in partnership with leading aquariums, science museums, and galleries throughout North America and internationally. This location-based iteration of the project will allow groups of 4–6 guests per screening. As they take off their devices, guests will be able to continue this newfound bond beyond Drop in the Ocean and have the opportunity to explore ways to join the fight to protect our oceans.

In tandem, through 2023, we will be exploring staging Drop in the Ocean at the most important environmental and climate action conferences and events, as well as supporting Conservation International’s outreach activity.

We hope that Drop in the Ocean can play a vital role in supporting global efforts toward protecting at least 30% of our oceans by 2030, through Marine Protected Areas.

Still from Drop in the Ocean

We feel proud to launch Drop in the Ocean as an agile, location-based activation. The project is now at a point where we can look at a whole range of destination locations to host it. But we are already exploring how to bring the experience to audiences who might struggle to visit one of these destinations, in particular classrooms.

With the experience successfully ported to standalone devices, we’re next looking toward an online iteration that retains everything we love about the connectivity of Drop in the Ocean but can allow audiences outside of North America and Europe to access it.

An amazing possible outcome for Drop in the Ocean would be if we could create a way for school groups from opposite sides of the world to participate in the experience together online – becoming a platform for students from different cultures to share their dreams for a healthier ocean.

We’re mindful that many of our audiences could be coming to VR for the first time, so, where appropriate, we’re also looking to remove components that might limit interaction. Something as simple as holding phytoplankton in your hands in VR gives an audience member a sense of tangible agency, and, even better, these acts are witnessed by others as part of the Drop in the Ocean experience. This sense of social presence is designed to build a feeling of inclusivity, so that when the headset comes off, users feel they’re part of an ocean protection movement.

Future developments in immersive experiences can further this concept, and we continue to look for ways that we can make audiences feel like they created a legacy within our stories.

Hear from Adam again during the Unity for Humanity Summit 2022 presentation “Oceans and XR: Engaging, educating, and empowering marine conservation” at 4:00 pm EDT on Wednesday, November 2. Register today so you don’t miss it.
