
A look ahead at spatial computing with Owlchemy Labs

April 9, 2024 in Engine & platform | 10 min. read
Gameplay from the game Job Simulator, by Owlchemy Labs

We’re entering the new era of spatial computing, where robust extended reality (XR) tools and flexible workflows can enable developers to add interactions, scale graphics, prototype, and test in-Editor. In the 2024 Unity Gaming Report, we predict greater demand for XR games, and many of the contributing studios agree with this forecast.

Unity’s senior advocate Antonia Forster sat down with Andrew Eiche, Owlchemy Labs’s CEOwl, to get his perspective on the future of spatial computing and practical tips on developing for Apple Vision Pro.

Antonia Forster: Hi Andrew, thanks for joining me. Let’s start by looking ahead. What do you see as the future for VR and spatial computing? 

Andrew Eiche: One of the biggest things that we’re moving into is using XR devices as a general spatial computing environment for domain-specific tasks, with Apple Vision Pro and the changes to Meta’s operating system leading the way. We’re trying to solve the paradigm of how to do a generic workload in VR versus an extremely specific one. 

What does it look like when we actually want to work in XR? We’re trying to take existing tasks and transfer them to an identical paradigm in a spatial environment. Hopefully developers will settle in quickly and we’ll come to understand the breadth and depth of this medium.

This is very important for us to do so we can discover the utility and intuitiveness of the technology. In this industry, platforms can be plotted against those two markers, and the ones with the most potential for adoption land in the high-utility, high-intuitiveness quadrant. With VR, we want to make sure it’s moving in that direction – highly useful and highly intuitive, the quadrant where phones, PCs, and smart TVs already sit.

Looking into the future, thinking about spatial computing helps make VR more useful, but we still need to work on its attainability. How do we do this? We change the primary input vector to align with the platform players focus on the most: mobile. From there, we need to focus on removing friction by implementing hand tracking, making headsets lighter, and improving the optics.

Gameplay from Job Simulator, by Owlchemy Labs

You spoke about the evolution of technology. What other tech trends do you think might impact XR in the next few years? 

Gaussian splatting is incredible, and I think the next step for it is going to be figuring out better capture and animation. We solved the wrong problem with three-dimensional capture: we assumed that if we just covered the space in cameras or used light fields, it would be great, but there’s something that just works better, like a transparent Gaussian. I think we’re going to see a huge push into that and into figuring out how to optimize it.

I also think AI is going to have an influence. One of the really interesting use cases I’m waiting for is when we don’t have to render the whole frame, just part of it. What if we render 30% and then we kick it off to a Tensor Processing Unit (TPU), and it just fills it in, based on all the data it has before and after? Suddenly, the graphics chip that’s sitting in our headset is now PC-capable. It’s literally how the reflections work with NVIDIA RTX™, so we are already walking down that road.

There’s also AI filling in the gaps in weight painting, or generative AI potentially replacing a best-fit algorithm for tweening. A best-fit algorithm has pieces to build with, and if the optimal fit is halfway between the pieces, using generative AI to move the slider halfway is interesting, useful, and keeps artist control. This would be great for animators who want to focus on their key poses rather than spend time tweening. AI could handle that, and then the animator can go in and clean it up.
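For readers who want to picture the tweening he’s describing, here is a minimal sketch (our illustration, not Owlchemy’s tooling) of the conventional interpolation an animator’s slider drives between two key poses; the idea above is that a generative model could propose that in-between instead of a plain lerp, with the animator still controlling the blend and the cleanup.

```csharp
using UnityEngine;

// Illustrative only: classic in-betweening of two key poses.
// In the workflow Andrew describes, a generative model would suggest the
// in-between pose instead of this plain interpolation, and the animator
// would keep the slider (and the final cleanup) under their control.
public class PoseTween : MonoBehaviour
{
    [System.Serializable]
    public struct KeyPose
    {
        public Vector3 position;
        public Quaternion rotation;
    }

    // Key poses are authored by the animator (set in the Inspector).
    public KeyPose startPose;
    public KeyPose endPose;

    [Range(0f, 1f)]
    public float slider = 0.5f; // "move the slider halfway" = 0.5

    void Update()
    {
        // Best-fit tween: exactly halfway between the key poses at slider = 0.5.
        transform.position = Vector3.Lerp(startPose.position, endPose.position, slider);
        transform.rotation = Quaternion.Slerp(startPose.rotation, endPose.rotation, slider);
    }
}
```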

Thank you for the insight! Based on how the XR space is trending, what is your advice for developers entering this era of spatial computing?

From an interaction design standpoint, you need to break down the way you interact with something and not try to fit a square peg into a round hole. It’s tempting to jump in and go deep right away, but if you’re new to this, I recommend approaching spatial computing slowly. Take your time stacking the right building blocks.

For example, when porting Job Simulator, we started by thinking about the right times to use operating system-level interactions. When we put up a SwiftUI window for the Apple Vision Pro version, we debated when to use pinch. We really followed how Apple uses it because they’re extremely specific about when to use it and what to use it for. 

When you’re not interacting with a window, you’re interacting with a 3D object. At that point, you need to stop thinking about it like an app on a 2D monitor and start thinking about it like physical product design for a real-world object. Design objects in an intuitive way, following the principles of real-world product design. Continuously test your user experience, and realize that the only testing that counts happens with real users, in a real space, on a device. You need to do the work.

It’s key to have it in your hands and to have others interact with it. I recommend allocating plenty of time for modification: the experience can feel different on different platforms, specs differ, and it’s important to be flexible.

Lastly, what’s special about VR is how you explore it. Our version of exploration includes sitting at a desk with closed drawers and getting to rummage through them. It’s incredibly interesting to pick up each object and see how it works and interacts. One of the key reasons players prefer this kind of interaction is that we’re putting things in their hands and letting them really mess with the world around them and find out what that world is like. We’re not making them interact with something that’s far away or that they feel disconnected from.

Gameplay from Job Simulator, by Owlchemy Labs

Getting more granular on tips for spatial computing, what advice would you give to developers looking to port or develop games for Apple Vision Pro? How has your experience been using Unity’s visionOS support while porting Job Simulator?

We’ve been working closely with Unity and Apple, aligning on the best way to bring our hopes and vision to life. We got Job Simulator running really quickly on Apple Vision Pro, and the process felt similar to building for iOS. One of the things that took some time to work out was making it a fully immersive game. Unity had to call a function that would communicate our desired output to the Apple operating system. Before we did that, we kept getting a flat window, and if you closed it, the game was over.

We were building a fully immersive game, and because visionOS is a general-purpose computing operating system, exiting a game was new to us. Building for PC, we never had the second step of quitting the application, since the player can just hit the X. When we put it on Quest, it was binary: the game either ran or it didn’t. Suddenly, on Apple Vision Pro, we were on a device where the game can go into the background, and we had to do the work to figure out how to actually leave the application.
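To make that lifecycle shift concrete, here’s a minimal sketch using Unity’s standard application callbacks (our illustration under those assumptions, not Owlchemy’s actual code): a fully immersive app needs to react when the operating system backgrounds it, and to offer its own way out, since there’s no window X to click.

```csharp
using UnityEngine;

// Illustrative only: handling the "game can go into the background" case that
// a general-purpose OS introduces, plus an explicit in-game exit.
public class AppLifecycleHandler : MonoBehaviour
{
    // Unity calls this when the OS backgrounds the app (paused = true)
    // and again when the player returns (paused = false).
    void OnApplicationPause(bool paused)
    {
        if (paused)
        {
            // Hypothetical hook: persist progress before the OS suspends us.
            // SaveProgress();
            Time.timeScale = 0f; // freeze the simulation while backgrounded
        }
        else
        {
            Time.timeScale = 1f; // resume when the player comes back
        }
    }

    // Wire this to an in-world "exit" affordance, since a fully immersive
    // app has no window close button.
    public void QuitGame()
    {
        Application.Quit(); // no effect in the Editor; quits on device
    }
}
```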

My advice is to be really collaborative and open. You never know when someone will have the fix to a bottleneck you’re experiencing. It’s not only good for you, but the community as a whole. We’re extremely active on the Discussions forums, and in opening tickets and speaking with Unity. It helps us find solutions that also benefit the rest of the community. Submitting bug reports there has provided us with the opportunity to work with other devs who are in similar situations. It definitely speeds up our learning curve and is instrumental in helping us move development forward.

I’d love to end this interview with one last nugget of inspiration and insight for our community. What’s the most valuable thing you learned about visionOS development that you will take to your next Apple Vision Pro project?

We’ve existed in two ecosystems for years – Windows PC and Android. In moving to development for visionOS, which shares many similarities with other Apple operating systems, we learned where we had made assumptions and leaned a little too hard on the operating system in potentially incorrect ways. We figured out where we could have done better.

Another key insight to keep in mind is the value of FaceTime and sharing your screen to show other people what you’re experiencing – say, for debugging. That screen is your application running, your code running, and others can see your view perfectly. This is something that’s notoriously hard to do on other headsets, and Apple Vision Pro does it effortlessly. That would be my fast tip.

Want to know more? Read our new 2024 Unity Gaming Report, and check out this video playlist, where expert creators discuss this year’s biggest game development trends. 

If you’re ready to dive into Unity’s support for Apple Vision Pro, you can get tips from other devs developing for Apple Vision Pro and share your feedback on the AR/VR/XR discussion forum.
