
Labs Spotlight: Unity Mars

October 2, 2019 in Technology | 10 min. read

We are at a fascinating inflection point for computers. The rise of ubiquitous sensors and fast processors means that we can finally move towards the spatial computing vision that has been described and tested in various forms since the 1960s. At last, we have a variety of small, flexible computers that can take in information about the world – and do something interesting with it.

Unity has long been used to make digital worlds for games and simulations, but we could only experience them through a window, using peripherals to have our avatars run around vast worlds while we sit on our couches. Virtual reality (VR) took us closer by allowing people to step into the window. Mixed reality (MR) lets digital objects step through to the other side of that window, out into the real world.

A whole class of creators is learning how useful Unity can be for creating mixed reality experiences. We can use all of the same systems we built for digital worlds – animation, physics, and navigation, for example – to test and build apps that run in and respond to the real world. But as the Unity Labs Authoring Tools Group dug into the use cases for AR/MR, we realized that not only do apps need to work in the real world, but we also need to get more information about the real world back into the Editor. We need easy ways to tell Unity what's real and what's not, and to let us design, develop, and test our real-world applications more easily.

Enter Unity Mars: an authoring environment for creating intelligent mixed and augmented reality experiences and games that can run anywhere in the world. It has two key parts: a Unity extension and companion apps for phones and AR head-mounted displays (HMDs). We announced Unity Mars at Unite Berlin in 2018. As we near beta release, we wanted to share an overview of the toolset and the new features we've been building since the initial announcement.

Key Unity Mars features and the problems they solve

The simulation view

The simulation view is one of the most significant new features of Unity Mars. One curious property of MR/AR apps is that there are two world spaces that need to be defined: the Unity world space and the real-world space. The simulation view provides a place to lay out objects and test events in a simulated real-world space. It's a new dedicated window in the Editor that lets you input real or simulated world data, like recorded video, live video, 3D models, and scans, and start laying out your app directly against this data. This window includes tools and UI to see, prototype, test, and visualize robust AR apps as they will run in the real world.

The simulation view is straightforward to explain, but developing it required us to create a complex system to address what we've dubbed “the simulation gap.” The simulation gap is the difference between the perfect information computers have about digital objects and the reality of current devices and sensors, which can only detect partial, imperfect data. We solve this in a variety of ways, from simulated discovery of surface data to our robust query system. We'll delve deeper into these in an upcoming blog post.

The simulation device view

The simulation device view is the flip side of the simulation view. As well as simulating the world in the Unity Editor, you can also simulate a device moving around that world. This perspective lets you quickly experience your app the way most of your users will: on a mobile AR device. Not only does this help you see whether your AR app works well across different spaces without requiring you to physically test in each one, but it also significantly reduces iteration time as you build your AR apps. You can control the camera like a device using your keyboard and mouse, or with a device running the companion app, streaming real data into the Editor.

New ways to describe real-world concepts

Unity Mars has a series of new constructs that let us describe, reason about, and visualize real-world objects in our workflows. Conditions define objects, and multiple objects define scenarios. We start with Conditions, which describe individual characteristics we're looking for: an object's size, its GPS location, its shape, and so on. Then we define a proxy, described as a set of Conditions. For example, to describe a table, we could use Conditions for Surface Size (e.g., “This surface is at least 1x1 meters”), Alignment (“This surface is horizontal”), and Elevation (“This object is at least a meter off the ground”).

It's important that these Conditions be fuzzy and tolerant enough to handle the variety of spaces that users will be in, which is why many spatial Conditions are defined by a minimum and maximum range (for example, “This surface is between 1x1 meters and 3x3 meters”). These spatial Conditions draw scene gizmos, which let you visualize and tweak the range in the Editor.
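To make the table example concrete, here's a minimal sketch of how a proxy might be modeled as a set of Conditions with fuzzy ranges. The types and names below (SurfaceData, SizeCondition, Proxy, and so on) are illustrative stand-ins we've invented for this post, not the actual Unity Mars API:

```csharp
using System.Collections.Generic;
using UnityEngine;

// Hypothetical sketch: these types illustrate the Condition/proxy concept
// described above; they are not the actual Unity Mars API.
public struct SurfaceData
{
    public Vector2 size;      // width x depth of a detected surface, in meters
    public Vector3 normal;    // surface orientation
    public float elevation;   // height above the lowest scanned point, in meters
}

public interface ICondition
{
    // True if this piece of real-world data satisfies the condition.
    bool Matches(SurfaceData data);
}

// Fuzzy size condition defined by a minimum and maximum range,
// e.g. "between 1x1 meters and 3x3 meters".
public class SizeCondition : ICondition
{
    public Vector2 min = new Vector2(1f, 1f);
    public Vector2 max = new Vector2(3f, 3f);

    public bool Matches(SurfaceData data)
    {
        return data.size.x >= min.x && data.size.y >= min.y
            && data.size.x <= max.x && data.size.y <= max.y;
    }
}

// "This surface is horizontal": its normal points roughly up.
public class AlignmentCondition : ICondition
{
    public bool Matches(SurfaceData data)
    {
        return Vector3.Dot(data.normal.normalized, Vector3.up) > 0.95f;
    }
}

// "This object is at least a meter off the ground."
public class ElevationCondition : ICondition
{
    public float minElevation = 1f;

    public bool Matches(SurfaceData data)
    {
        return data.elevation >= minElevation;
    }
}

// A proxy ("table") is simply the set of Conditions a real surface must satisfy.
public class Proxy
{
    public string name;
    public List<ICondition> conditions = new List<ICondition>();

    public bool Matches(SurfaceData data)
    {
        return conditions.TrueForAll(c => c.Matches(data));
    }
}
```

In this sketch, a table proxy would simply carry a SizeCondition, an AlignmentCondition, and an ElevationCondition, mirroring the description above.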

While a lot of these examples are about the size, height, geolocation, and other spatial properties of objects, Conditions don’t have to be spatial. For example, you can define a Condition for time of day or the weather (“This content should only happen at noon on sunny days”).

To create more complex and specialized behavior, we can string these proxies together into groups that describe larger scenarios. Say you want to create an AR video streaming app that puts playback controls on your coffee table, the video library on your bookshelf, and the virtual screen on the biggest wall in the room. You start by defining each of those proxies (table, bookshelf, wall), then group them into a single description of a room containing multiple real objects.

Of course, at any stage in these descriptions of reality, you have to consider that the objects you're describing may not exist in the user's environment. For example, if a user doesn't have the bookshelf from the previous example, you don't want your app to simply fail; instead, it should gracefully adjust to a simpler set of requirements. For this, we provide Fallback events, where you can define a second-best scenario, and then a worst-case scenario (for example, the user only found a single surface). This layering of “Ideal → Acceptable → Minimal” lets you balance deeply contextual behavior in the best case, where the user has carefully mapped an environment that generally resembles what you designed the app for, with functional behavior in the worst case, where the user is in a very unexpected environment and/or hasn't scanned much.
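Continuing the hypothetical types from the sketch above (they are illustrations, not the shipping Unity Mars API), the streaming-app scenario and its fallback layers might look roughly like this:

```csharp
using System.Collections.Generic;

// Hypothetical sketch, reusing the illustrative Proxy and SurfaceData types
// from the previous example. A group describes a larger scenario built from
// several proxies, with progressively simpler fallbacks.
public class ProxyGroup
{
    public string name;
    public List<Proxy> required = new List<Proxy>();
}

public class ScenarioDescription
{
    // "Ideal → Acceptable → Minimal" layering.
    public ProxyGroup ideal;       // coffee table + bookshelf + large wall
    public ProxyGroup acceptable;  // table + wall, but no bookshelf was found
    public ProxyGroup minimal;     // a single horizontal surface

    // Pick the richest scenario the user's scanned environment can satisfy.
    public ProxyGroup Resolve(List<SurfaceData> scannedSurfaces)
    {
        foreach (var group in new[] { ideal, acceptable, minimal })
        {
            if (group != null && AllProxiesMatched(group, scannedSurfaces))
                return group;
        }
        return minimal;
    }

    static bool AllProxiesMatched(ProxyGroup group, List<SurfaceData> surfaces)
    {
        // Naive check: every proxy in the group finds at least one surface
        // that satisfies all of its Conditions.
        return group.required.TrueForAll(
            proxy => surfaces.Exists(s => proxy.Matches(s)));
    }
}
```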

In summary, Conditions describe individual properties; a set of Conditions describes a proxy; and a set of proxies describes the whole environment you expect, or pieces within it. With these elements, you can describe an “Ideal → Acceptable → Minimal” layering of states for your app.

You can also use Trait Conditions, or named properties, to define specific characteristics that map to your underlying tech stack. Depending on the device or software you're using, you can name anything from semantic room objects to 3D markers to positioning anchors. We've kept this as flexible as possible so that it works well with any upcoming world-data technology, as well as with AR Foundation's supported property types. Today, we use Traits like “floor,” “wall,” or “ceiling,” but looking ahead, these Traits open up the vast possibility of recognized objects (“cat,” “dog”) and properties (“wood,” “grass”). Each month brings exciting new developments in this field, and we need to make sure we're able to support all of them.
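In code, a Trait Condition can be thought of as little more than a named tag matched against whatever semantic labels the underlying platform (or a reasoning step) attaches to detected objects. Again, the sketch below is an illustration we've made up for this post, not the real Unity Mars types:

```csharp
using System.Collections.Generic;

// Hypothetical sketch: a Trait is just a named property attached to detected
// world data by the underlying platform or by a reasoning step.
public class TraitCondition
{
    public string traitName; // e.g. "floor", "wall", "cat", "wood"

    public bool Matches(ISet<string> traitsOnObject)
    {
        return traitsOnObject.Contains(traitName);
    }
}
```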

Once you’ve defined what you’re looking for and where your objects should go, you might want to get more granular about object placement. We’ve created a system of Landmarks, which let you be more precise about where objects should be placed and oriented on a matched Real World Context.
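A landmark can be pictured as a rule that resolves a matched surface to a specific point and orientation, such as its center or the midpoint of an edge. The sketch below is a hypothetical illustration of that idea, not the actual Landmark system:

```csharp
using UnityEngine;

// Hypothetical sketch: a landmark resolves a matched surface to a specific
// world-space position where content should be attached and oriented.
public enum LandmarkKind { Center, ForwardEdge, Corner }

public static class Landmarks
{
    // Given the pose and size of a matched horizontal surface, return the
    // world-space position of the requested landmark.
    public static Vector3 Resolve(LandmarkKind kind, Pose surfacePose, Vector2 surfaceSize)
    {
        switch (kind)
        {
            case LandmarkKind.ForwardEdge:
                // Midpoint of the surface's forward edge.
                return surfacePose.position
                     + surfacePose.rotation * new Vector3(0f, 0f, surfaceSize.y * 0.5f);
            case LandmarkKind.Corner:
                return surfacePose.position
                     + surfacePose.rotation * new Vector3(surfaceSize.x * 0.5f, 0f, surfaceSize.y * 0.5f);
            case LandmarkKind.Center:
            default:
                return surfacePose.position;
        }
    }
}
```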

Advanced data manipulation

Reasoning APIs are an advanced feature: scripts users write that can interface with the entirety of Unity Mars’ world understanding at once, rather than one piece of data at a time. This allows you to make advanced inferences and combine, create, and mutate data. A classic example of this is being able to infer the floor as being the lowest, largest plane you can find after scanning the space. Some devices, like the HoloLens, give you a floor by default, but other sensors and tech stacks do not. The Reasoning API lets you mix and match input to come up with even better real-world understanding and more interesting events. Importantly, this code stays out of your application logic.
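To illustrate the floor-inference example, here's a rough sketch of what such a reasoning step could look like, reusing the illustrative SurfaceData struct from earlier. The interface and names are assumptions we're making for this post, not the actual Reasoning API:

```csharp
using System.Collections.Generic;

// Hypothetical sketch of a reasoning step that looks at all detected surfaces
// at once and infers which one is the floor, for platforms that don't report
// a floor directly.
public interface IReasoningStep
{
    // Runs over the whole set of world data, not one item at a time.
    void Process(List<SurfaceData> surfaces, Dictionary<int, HashSet<string>> traitsBySurface);
}

public class FloorInference : IReasoningStep
{
    public void Process(List<SurfaceData> surfaces, Dictionary<int, HashSet<string>> traitsBySurface)
    {
        int floorIndex = -1;
        float bestScore = float.MinValue;

        for (int i = 0; i < surfaces.Count; i++)
        {
            var s = surfaces[i];
            // Simple heuristic for "the lowest, largest plane":
            // reward area, penalize elevation.
            float score = s.size.x * s.size.y - s.elevation;
            if (score > bestScore)
            {
                bestScore = score;
                floorIndex = i;
            }
        }

        if (floorIndex >= 0)
        {
            // Tag the winner so ordinary Trait Conditions ("floor") can match it,
            // keeping this inference out of the application logic.
            if (!traitsBySurface.TryGetValue(floorIndex, out var traits))
                traitsBySurface[floorIndex] = traits = new HashSet<string>();
            traits.Add("floor");
        }
    }
}
```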

The companion apps

As much work as we've done putting the real world into the Unity Editor, we'd be remiss if we didn't take advantage of the portable devices that work well with real-world data in space. That's why we've created Unity Mars companion apps for AR devices. The first iteration is for mobile phones: you can connect the app to your project in the Unity Cloud, then lay out assets as easily as placing a 3D sticker. You can create conditions, record video and world data, and export it all straight back into the Editor. It's another step in closing the loop between the real world and the digital one.

Next steps

Alpha access

Unity Mars is currently in closed alpha, but we're looking for dedicated teams to partner with who are trying to push the boundaries of spatial applications. We want people to battle-test Unity Mars and help us prioritize our roadmap by telling us which tools and features would help them the most. We've put down the foundations, but we want to make sure we're building the right thing so that you can create amazing experiences.

Acknowledgments

The Unity Mars project has been inspired and informed by our hardware and software partners at Microsoft, Magic Leap, Google, Facebook, and many other companies working at the frontiers of what's possible: from location-based virtual experiences to automotive visualizations, space simulators, architectural previz, innovative mobile AR games, and more. To all of the companies we've talked to and partnered with, many thanks, and a special thanks to Mapbox for co-building the first geospatial integration.

We're also building on prior work from our own Mixed Reality Research Group and collaborating closely with the XR Platforms team. Their AR Foundation and the XR Interaction Toolkit have provided a solid tech base on which to build Unity Mars.

If you're interested in learning about Unity Mars and staying up to date as we move towards a wider release, please sign up and check out our new Unity Mars webpage.
