Looking to the future of mixed reality (Part II)

September 21, 2017 in Technology | 8 min. read

Thoughts about design. Part I of this series summarized the current and future issues facing mixed reality and its adoption in the mainstream. In this second part, we explore the design challenges that this new and promising medium presents to developers.

Transition phase. We can all agree that in the near future physical screens will begin to disappear and be replaced with displays that blend into the environment (some descriptions here, here and here).  Whether it happens in 5 years as we predict, or in 10 years as Mark Zuckerberg thinks, the next vector driving the digital world will be augmented reality glasses.  With that being said, we recognize that there will be a transition period both technologically and socially, and for social reasons we need to design this next phase so that users and non-users can coexist.

The social awkwardness of wearing MR glasses will most likely force the initial uses to be in the home, the workplace, or locations where all users are required to wear the glasses (such as theme parks, museums, stadiums, planes, etc.). The form factor will be a key tool to mitigate the awkwardness and drive the speed of adoption, since the glasses will need to be as stylish and thin as possible for users to accept them into their daily routine. In the short term, we hypothesize that the first models will complement smartphones, mostly to decentralize the computational burden, extend battery life, and minimize the impact of change on the user. As an example, a smartphone can act as a trackpad for glasses in a workplace situation, and then revert to a smartphone when no longer needed or when the batteries in the glasses die.

We need to think ‘Human centric’. What does the user really need? This seems like an obvious question to ask; however, the current state of MR development includes a large number of demos whose sole purpose seems to be presenting floating objects of little actual value to the user. Let’s be honest: no one wants to live in the dystopian future that Keiichi Matsuda warned us about so brilliantly in his short film “Hyperreality”. And while MR apps need to become more relevant for the user, mixed reality doesn’t need a ‘killer app’. For comparison, think of the internet: there is no single killer app; rather, the internet itself is the killer app. And just like the internet, it will be a multitude of everyday uses that render MR devices indispensable.

Nowadays, making a useful mixed reality application is clearly a challenge because of the form factor and limitations of current technology. But in the not-so-distant future, users' hands will be free, and the glasses will know what the user is looking at, what object they are touching, how it is being handled, and what the user wants to do with it. So our group is looking into the design considerations that will be relevant beyond this point, when all of this becomes established.

Observing and understanding the user will allow us to develop proactive scenarios that trigger on the user's intention in order to facilitate their life, preferably without changing their habits. Physical tech, like eye tracking, will play a big role in understanding the user’s intent, but design will be even more important in minimizing complexity and avoiding disconnecting people from the real world. Smartphones are already disconnecting people from the real world and distracting them to the point of becoming dangerous; imagine what mixed reality might do if not properly designed. It will be crucial for safety and usability that everyone can experience their own mixed reality while remaining connected to the tangible world around them.

We also need to think “Object centric”. Although there are multiple ways of imagining MR, we should focus primarily on the awakening of matter, meaning that objects in our surroundings augment themselves to communicate with us.

We are not Tom Cruise! Staying upright and making big gestures is the opposite of our everyday interaction habits. The mouse-keyboard combo has yet to be replaced because it is the most efficient way to execute a maximum number of actions with minimum movement in a seated position. The question becomes how do we interact with digital matter if input devices such as keyboard, mouse, and gamepad disappear? The "World as a device" approach not only creates an entry point to contextual services, but also transforms each object into an interactive controller. Physical objects will inherit the attributes, interactions and behaviors of a virtual replica, a 3D asset that we call a smart object.

For example, a simple business card could both give direct access to an application like LinkedIn and serve as a user interface, sparing the user from taking out their smartphone or using a computer.
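To make the smart-object idea concrete, here is a minimal, hypothetical sketch (plain Python rather than Unity code) of binding a recognized physical object to contextual actions. The gesture names and behaviors are invented for illustration; a real system would attach these behaviors to the 3D asset that mirrors the physical object.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict

@dataclass
class SmartObject:
    """A virtual replica of a physical object exposing contextual actions."""
    name: str
    actions: Dict[str, Callable[[], str]] = field(default_factory=dict)

    def on(self, gesture: str, action: Callable[[], str]) -> None:
        """Register a behavior for a recognized gesture on this object."""
        self.actions[gesture] = action

    def handle(self, gesture: str) -> str:
        # Fall through gracefully when the object has no behavior for a gesture.
        action = self.actions.get(gesture)
        return action() if action else f"{self.name}: no action for '{gesture}'"

# Hypothetical business-card behaviors
card = SmartObject("business_card")
card.on("tap", lambda: "open contact profile")
card.on("flip", lambda: "show shared documents")

print(card.handle("tap"))   # open contact profile
print(card.handle("swipe")) # business_card: no action for 'swipe'
```

The point of the sketch is the inversion it illustrates: the interaction logic lives on the object's virtual replica, so any tracked physical object can become a controller without a dedicated input device.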


Concept video of a business card contextual application.

By being user and object centric, we could accomplish what David Smith tells us makes the traditional mouse & keyboard so successful: intention amplification – where small movements and gestures result in large, significant actions.  This magnifies our efforts and lowers fatigue allowing us to focus on the objects themselves rather than the movements we use to manipulate them.  By centering design around a user’s intent and using intelligent objects that respond to that intent by modifying themselves, our thought process is not limited by physical laws, but rather will allow us to achieve a new freedom.  Plus we don’t “pollute” reality with noisy floating gizmos or unnecessary UI.

The world as a playground.  With these principles of interaction it will be very easy to gamify our environment.  For example, to gamify any particular object, it will be sufficient to modify the 3D object within Unity, which would then propagate the game behaviors to the real world object. Imagine gamifying the world on both a smaller personal scale and on a larger more public scale; something we refer to as micro and macro interactions respectively:

For micro-interactions:

  • Instead of using a pen on the back of a cereal box to trace the path of a character through a labyrinth, we could use the box itself and have the user pitch and yaw the box to create movement.


  • Playing board games can be transformed by adding interactions to both the board and cards.

For macro-interactions:

  • Imagine an Escape Room where tipping a specific book in the library triggers the opening of a dimensional portal in the middle of the room.
  • Or touching water from a real life fountain to regain life points during life-size role-playing games.
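As a rough sketch of the cereal-box idea above: once the glasses can track the box's orientation, mapping its tilt to character movement is only a few lines. The angle range and linear scaling below are assumptions chosen for illustration, not a prescribed implementation.

```python
def tilt_to_velocity(pitch_deg: float, yaw_deg: float,
                     max_tilt: float = 30.0, speed: float = 1.0):
    """Map the tracked tilt of a physical box to a 2D character velocity.

    Angles beyond ±max_tilt are clamped; within range the mapping is linear,
    so a level box yields zero velocity and a fully tilted box yields ±speed.
    """
    def clamp(angle: float) -> float:
        return max(-max_tilt, min(max_tilt, angle))

    vx = clamp(yaw_deg) / max_tilt * speed    # yaw steers left/right
    vy = clamp(pitch_deg) / max_tilt * speed  # pitch steers up/down
    return vx, vy

print(tilt_to_velocity(15.0, -30.0))  # (-1.0, 0.5)
```

A dead zone around level and a non-linear response curve would likely feel better in practice; the linear mapping is just the simplest starting point.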

Illusion and disillusion. Contrary to appearances, augmented reality is far from being a new technology; the literature and research on the subject are rather dense. However, the revival of virtual reality has given a boost to technological innovations that will soon allow us to fill the gaps in augmented reality and produce advanced mixed reality solutions that ultimately make alternative realities indistinguishable from the real world.

Unfortunately, at the moment, this is far from being the case. There is a gap between what people perceive through the many demo videos posted on social media and the reality once the phone is in their hands. The gap is even more pronounced for the current generation of AR glasses. Users are often underwhelmed by the technology given their high expectations. To bridge the gap and achieve a feeling of presence (presence of the object), work remains on at least the following:

  • We need to find a way to handle lighting and occlusion with see-through displays.

Indeed, current AR glasses are based on an additive light system: since the technology cannot remove light, black renders as transparent and content appears ghostly in full light. As an example, consider the videos above, which were edited to create the realistic vision; below are honest renderings of screenshots from the videos as they would appear with the technology available at the moment.

Business card video screenshot showing the transparency issue on dark areas.
Screenshot from the cereal box game that shows some ghostly content from a see through device perspective.

It is evident that this problem must be solved, as it is a necessary condition for the adoption of glasses and for ensuring that the use cases match the desires and collective imagination of users. In the meantime, we are developing our hypotheses and scenarios for a future where these technical constraints have been solved.

Being a tool provider rather than an end-product maker leaves us the freedom to think holistically, rather than limiting ourselves to a particular technology as a manufacturer would, or to a particular market segment as a startup would. We therefore plan to regularly post short videos such as those in this article, illustrating use cases that demonstrate a UX design vision.

Stay tuned! In part III we will outline our vision on technologies that could provide the foundations for solutions to future issues facing mixed reality and its adoption in the mainstream, and some inspirational development we are researching for future MR creators.

Update: Part III has been published and can be found here.

Article contributors: Dioselin Gonzalez, Lead Principal Engineer; Colin Alleyne, Senior Technical Writer; and Sylvio Drouin, VP - Unity Labs.
