
Developing for blindness and low vision in VR with Cosmonious High

March 16, 2023 in Games | 13 min. read
Cosmonious High guest blog – hero image


In this guest post, Owlchemy Labs Accessibility Product Manager II Jazmin Cano dives deep into how the team used Unity to develop an all-new vision accessibility update for Cosmonious High in virtual reality (VR).

At Owlchemy Labs, our Accessibility Statement commits us to creating VR games for everyone. Historically, our team has prioritized making significant strides in unsolved areas of VR accessibility, including subtitles, physical accessibility, and height accessibility. We encourage every developer to create with accessibility in mind, and we actively incorporate accessibility into our games at launch and through post-launch updates.

For our last accessibility update to Cosmonious High, we added a range of gameplay options, including a one-handed controller mode, a seated player mode, subtitling, and more. In our view, a game that can only be played by an audience with no disabilities is an unfinished product. We take pride in forging new paths through research, testing, and partnerships to ensure gaming is available to the widest possible audience.


In this Unity YouTube video, learn how Owlchemy Labs developed for color blindness in Cosmonious High using Unity Asset Store tools.

Forging a path in accessibility

When we started our research in developing for blind and low-vision players, we found that the main issue was the inability to follow a storyline or in-game directions. 

Oftentimes, no descriptive text is available for the scenes that advance the story, and a storyline can be complex and challenging to follow without that context. Other aspects of a game, like completing a puzzle or a task, become difficult when not all of the pieces are clearly visible to players, or when vital clues are given without any text or visual cues.

From our research, we found it was important to create a way for players to receive game information through audio descriptions using text-to-speech (TTS). Most people who are blind or have low vision use assistive technology called a screen reader, which lets them navigate a screen with a keyboard while their computer or phone reads the text aloud. Our game features are similar in spirit: players navigate virtual spaces using their hands or a joystick, and objects and descriptions are read aloud to them. Each feature is built to follow screen-reader conventions, such as ducking non-TTS audio and allowing the user to cancel TTS mid-description.
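Unity has no built-in TTS, so the sketch below stands in a hypothetical ITtsBackend for whatever synthesis or pre-baked clip system a project actually uses; the point is the two screen-reader conventions just mentioned, namely that only one description plays at a time and that the player can cancel mid-read.

```csharp
using UnityEngine;

// Hypothetical backend interface; a real project would wrap a
// platform TTS API or a pre-recorded clip lookup behind this.
public interface ITtsBackend
{
    AudioClip Synthesize(string text);
}

// Plays one description at a time, screen-reader style:
// a new request interrupts whatever is currently speaking.
public class TtsNarrator : MonoBehaviour
{
    [SerializeField] private AudioSource ttsSource;
    private ITtsBackend backend; // injected elsewhere in this sketch

    public void Speak(string description)
    {
        Cancel(); // never talk over a previous description
        ttsSource.clip = backend.Synthesize(description);
        ttsSource.Play();
    }

    // Lets the player stop a description mid-read.
    public void Cancel()
    {
        if (ttsSource.isPlaying)
            ttsSource.Stop();
    }
}
```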

We hadn't seen narration or TTS in any released VR game before, so we knew that launching our approach, informed by research into best practices and playtests with people who are legally blind or have low vision, would benefit the industry.

Now you see it

Developing for blind and low-vision players isn't just a matter of adding audio cues and options. Blindness is a spectrum, and gameplay that is fully inclusive of blind and low-vision players also involves feeling and seeing.

To accomplish this, we developed more haptic feedback as a method to identify an object when a player highlights it. This is particularly important when a player is completing a puzzle. Players can identify an object and feel it when they have selected it.
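As a rough illustration of that technique, here is a minimal haptics helper built on Unity's XR device API; the amplitude and duration values are placeholders, not Owlchemy's tuning.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Sends a short haptic pulse to the controller that is
// highlighting an object, so the selection can be felt.
public static class HighlightHaptics
{
    public static void Pulse(XRNode hand, float amplitude = 0.4f, float duration = 0.05f)
    {
        InputDevice device = InputDevices.GetDeviceAtXRNode(hand);
        if (device.isValid &&
            device.TryGetHapticCapabilities(out HapticCapabilities caps) &&
            caps.supportsImpulse)
        {
            device.SendHapticImpulse(0u, amplitude, duration);
        }
    }
}
```

A hover handler would then call something like HighlightHaptics.Pulse(XRNode.RightHand) whenever the player's hand passes over an interactable.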

We’ve also added high-contrast object highlighting, which outlines key objects in the environment, making it easier for players to see the object and understand their selection with haptic feedback.
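The post doesn't detail the shader work behind the highlighting, but one common way to structure the toggle is a component that enables a pre-authored outline renderer on each key object; a minimal sketch:

```csharp
using UnityEngine;

// Toggles a pre-authored high-contrast outline on a key object.
// Assumes each highlightable object has a child renderer using an
// outline material (for example, an inverted-hull outline shader);
// the actual shader approach in Cosmonious High isn't specified.
public class HighContrastHighlight : MonoBehaviour
{
    [SerializeField] private Renderer outlineRenderer;

    private void Awake() => outlineRenderer.enabled = false;

    public void SetHighlighted(bool highlighted)
    {
        outlineRenderer.enabled = highlighted;
    }
}
```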

A GIF that showcases high-contrast object highlighting being used within a scene in Cosmonious High.

To bring this all together, we incorporated a Grab-and-Release confirmation. With it, players receive haptic feedback when highlighting an object shown through high-contrast highlighting, and audio plays to confirm when an object has been grabbed or released.
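Cosmonious High runs on Owlchemy's own interaction stack, but the same pattern can be sketched with the XR Interaction Toolkit's grab events (assuming XRI 2.x): one confirmation sound on grab, another on release.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Plays distinct audio confirmations when an object is grabbed
// and released. Sketch only: assumes the XR Interaction Toolkit,
// not Owlchemy's in-house interaction system.
[RequireComponent(typeof(XRGrabInteractable))]
public class GrabReleaseConfirmation : MonoBehaviour
{
    [SerializeField] private AudioSource audioSource;
    [SerializeField] private AudioClip grabClip;
    [SerializeField] private AudioClip releaseClip;

    private void OnEnable()
    {
        var interactable = GetComponent<XRGrabInteractable>();
        interactable.selectEntered.AddListener(OnGrab);
        interactable.selectExited.AddListener(OnRelease);
    }

    private void OnDisable()
    {
        var interactable = GetComponent<XRGrabInteractable>();
        interactable.selectEntered.RemoveListener(OnGrab);
        interactable.selectExited.RemoveListener(OnRelease);
    }

    private void OnGrab(SelectEnterEventArgs _) => audioSource.PlayOneShot(grabClip);
    private void OnRelease(SelectExitEventArgs _) => audioSource.PlayOneShot(releaseClip);
}
```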

Developing for blind and low-vision players

When we started developing this update, we anticipated a few challenges, ranging from design decisions to playtesting. Thankfully, we had a great start to the project after meeting with Steve Saylor, a video game accessibility consultant who is blind. Consulting with Steve, we were able to identify the features we would need and what expectations someone with low vision would have. We did a lot of research, experimentation, and testing to determine what worked best and what we could execute successfully.

For the vision accessibility update, we knew we needed to branch out when finding playtesters with low vision who could provide valuable feedback and inform our design. We teamed up with VROxygen to find a group of playtesters with blindness and low vision to give feedback on our iterations and help us prioritize improvements, which worked out well. Opening testing to remote players from anywhere in the world allowed us to get a wide range of perspectives on the project, with feedback coming from people with varying levels of vision.

An in-game screenshot taken during playtesting for the Cosmonious High vision accessibility update.

What worked, what didn’t

The path to creating this update presented some challenges to consider. Sometimes what seems like the most obvious answer is not the best one, especially when developing for accessibility. We took the time to work through each aspect of this update to make sure the features we were adding worked well for those who needed them most.

When we enabled TTS early in the project, we started with automatic narration. This meant that any object a player's hands waved over would be described, even if that meant speaking over a previous description that was still playing. For audio descriptions to be valuable, they need to be heard without other audio fighting for priority.

This resulted in a few changes that worked well in playtesting. For example, we decided to add a button press to activate descriptions instead of having them read automatically, which led to a more comfortable experience. This gives players agency when deciding if a description is read to them when their hand is placed over an object or pointing at something in the distance. It also prevents accidental TTS from happening if a player moves their hand over objects they didn’t mean to have read.
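A press-to-describe gate might look something like the sketch below. CommonUsages.primaryButton and the right-hand node are stand-ins (the post doesn't say which control Owlchemy mapped), and the hover state is assumed to be fed in by the game's own pointing system.

```csharp
using UnityEngine;
using UnityEngine.XR;

// Reads an object's description only on an explicit button press,
// instead of automatically narrating everything the hand passes over.
public class DescribeOnDemand : MonoBehaviour
{
    [SerializeField] private TtsNarrator narrator; // from the earlier sketch

    // Set by the hover/pointing system when the hand is over an object.
    public string HoveredDescription { get; set; }

    private bool wasPressed;

    private void Update()
    {
        InputDevice hand = InputDevices.GetDeviceAtXRNode(XRNode.RightHand);
        if (!hand.TryGetFeatureValue(CommonUsages.primaryButton, out bool pressed))
            return;

        // Fire once on the press edge, and only if something is hovered.
        if (pressed && !wasPressed && !string.IsNullOrEmpty(HoveredDescription))
            narrator.Speak(HoveredDescription);

        wasPressed = pressed;
    }
}
```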

The TTS reads out an object's name and a brief visual description, so it doesn't take very long. Even with short descriptions, though, we knew people would want behavior similar to a screen reader (i.e., the ability to cancel audio while it's being read).

Another thing we learned about audio is the importance of ducking other sounds so that TTS audio descriptions take priority. We ran into a problem where TTS could be triggered during an interaction with an NPC, making the NPC's dialog quieter and easy to miss. At this time, players are not able to "rewind" or retrigger the same audio; however, players can wave their hands and NPCs will respond again to help.
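Ducking like this is commonly driven by AudioMixer snapshots in Unity. The sketch below assumes two hypothetical snapshots authored in the mixer, "Normal" and "Ducked", with the ducked one lowering every non-TTS group.

```csharp
using UnityEngine;
using UnityEngine.Audio;

// Lowers all non-TTS audio while a description plays, then
// restores it. Assumes two snapshots authored in the mixer:
// "Normal" and "Ducked" (hypothetical names).
public class TtsDucking : MonoBehaviour
{
    [SerializeField] private AudioMixerSnapshot normalSnapshot;
    [SerializeField] private AudioMixerSnapshot duckedSnapshot;
    [SerializeField] private float fadeSeconds = 0.1f;

    public void OnTtsStarted() => duckedSnapshot.TransitionTo(fadeSeconds);
    public void OnTtsFinished() => normalSnapshot.TransitionTo(fadeSeconds);
}
```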

But one of the hardest parts about building features like these is making sure they will actually help users who need them. The best way of determining usefulness for new accessibility features is through testing. Of course, being able to quickly make new builds for all of our platforms after each round of feedback and development was essential to making this update the best it could be. One unlikely tool we found useful for fast iteration on our designs was Unity's Post-Processing Stack.

Before sending builds to playtesters, our developers wanted to test the effectiveness of features internally. Since many of our developers are sighted, we used the Post-Processing Stack and created entries in our debug menu that allowed us to modify the visual clarity of what we were seeing in the headset. This helped our developers roughly simulate what it is like to have different levels of reduced vision while playing the game. Since we could now rapidly identify and tackle the most obvious issues, we were able to iterate on designs more quickly and make sure we were getting the most out of the external playtest sessions with blind and low-vision testers.
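One simple way to wire that up, assuming the Post-Processing Stack v2 package and a PostProcessVolume whose profile holds a strong blur effect such as depth of field, is to drive the volume's weight from a debug-menu slider. This only approximates blurred vision, and the names here are illustrative.

```csharp
using UnityEngine;
using UnityEngine.Rendering.PostProcessing;

// Debug-menu helper: blends a pre-authored "low vision" profile
// (e.g., heavy depth of field) in and out to roughly simulate
// different levels of reduced visual clarity in the headset.
public class LowVisionSimulator : MonoBehaviour
{
    [SerializeField] private PostProcessVolume blurVolume;

    // Severity in [0, 1], driven by a debug-menu slider.
    public void SetSeverity(float severity)
    {
        blurVolume.weight = Mathf.Clamp01(severity);
    }
}
```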

Key takeaways

In developing this update, we learned good design practices for blind and low-vision accessibility that we’ll use in future games. One of those learnings comes from an approach that started long before this update.

Our developers create with accessibility in mind, and we've learned from each accessibility update that starting a project with this approach makes developing future accessible features much easier. For example, giving objects proper names and descriptions from the outset (much like alt text on images) requires less effort and saves time compared with adding them later.
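Authoring that metadata can be as lightweight as a small component on every interactable; the field values below are hypothetical examples, not content from the game.

```csharp
using UnityEngine;

// Authored on every interactable from the start of a project,
// like alt text on images: a human-readable name plus a short
// visual description that TTS can read aloud later.
public class ObjectDescription : MonoBehaviour
{
    [SerializeField] private string displayName = "Paint bottle";
    [SerializeField, TextArea] private string description =
        "A small bottle of blue paint with a round cap.";

    public string DisplayName => displayName;
    public string Description => description;
}
```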

Having large objects and large text with good contrast makes a lot of a VR game’s world easier to see and read. Many players who described their vision as highly blurry could lean in and read much of the game’s text. Players who couldn’t fully make out text would then use the assist button to hear the words as needed.

The text in Cosmonious High is stationary in the world rather than fixed at a set distance from the player's face, so it doesn't move away from them as they lean forward to read it. One player commented that we had made the game accessible before even thinking about vision accessibility, noting that the size and colors in the world were not specific to this update. While we appreciate the compliment, the design practice of making things larger and easier to distinguish is part of our development process; it's in our developer documentation.

Specific to this update, allowing players to decide when to get audio descriptions, allowing players to cancel audio, and keeping descriptions short with valuable information is key to giving players agency.

Developing for blind and low-vision players has added to our arsenal of development tools. We hope to build upon our learnings in all areas of accessibility and plan to launch future titles with more accessibility included. We’re excited to share this update of Cosmonious High as a continuation of our mission to make VR for everyone.


A video overview of the vision accessibility update to Cosmonious High, from the Owlchemy Labs YouTube channel.

Owlchemy Labs’ Cosmonious High is available on multiple platforms. Check out more blogs from Made with Unity developers here.
