Unity AI 2021 interns: Graduating to better gameplay

September 22, 2021 in News | 11 min. read

The Unity AI organization is working on amazing research and products in the areas of robotics, computer vision, and machine learning. Our summer interns all worked on relevant, AI-related projects that directly impact our products.

Launching high-quality, balanced games is hard, especially when there are numerous variables such as character attributes, level design, and difficulty progression. Automated Quality Assurance (QA), one of the many products that Unity offers, runs automated simulations to balance a game and find optimal parameters. Automated QA can shorten the development feedback loop and increase team productivity.

During the summer of 2021, our interns worked diligently to create valuable contributions to our work at Unity. Read about their projects and experiences in the following sections.

Visual Simulations

Matthew Yang, BSc. Computer Science, Simon Fraser University

The purpose of the Visual Simulations package is to generate visualizations in Unity that are easy to digest and analyze. Users specify which events to record, and the package generates a visualization based on the positions of all recorded events. The package can generate two kinds of visualizations: discrete markers and heatmaps.

For discrete markers, as each event is recorded during a playthrough, a marker is placed in the scene where it occurred. Markers are placed sequentially, which shows the order in which events occurred. In the following example, I will be using a game where players control tanks that try to shoot and destroy each other. Two events are visualized: the yellow markers are for when the player moves, and the purple markers are for when the player shoots. In the visualization, we are able to see the path the player moves along and where they shoot.
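
As a concrete illustration, here is a minimal sketch of what recording these two events might look like in a user's game code. The class, methods, and prefab fields are hypothetical names I've chosen for the example, not the package's actual API.

```csharp
using UnityEngine;

// Hypothetical sketch of discrete event markers (not the package's actual API):
// each recorded gameplay event drops a marker at the position where it occurred,
// so the trail of markers traces the order of events through the scene.
public class EventMarkerRecorder : MonoBehaviour
{
    public GameObject moveMarkerPrefab;   // e.g., a small yellow sphere
    public GameObject shootMarkerPrefab;  // e.g., a small purple sphere

    public void RecordMoveEvent(Vector3 position)
    {
        Instantiate(moveMarkerPrefab, position, Quaternion.identity);
    }

    public void RecordShootEvent(Vector3 position)
    {
        Instantiate(shootMarkerPrefab, position, Quaternion.identity);
    }
}
```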

Whereas discrete markers show each event individually in the scene, heatmaps aggregate events together by position. Users can customize the heatmap by setting its granularity, the colors for varying frequencies, and the thresholds that determine these frequencies. The ML-Agents dodgeball environment in elimination mode was used as a demonstration, with events recorded whenever agents are hit and the areas of the map changing color to show where the most activity occurred.
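
Conceptually, a heatmap like this can be built by snapping each event's position to a grid cell and counting events per cell. The sketch below illustrates that idea; the grid math, threshold values, and names are my own assumptions, not the package's API.

```csharp
using System.Collections.Generic;
using UnityEngine;

// Illustrative heatmap aggregation: positions are bucketed into a grid whose
// cell size controls the heatmap's granularity, counts accumulate per cell,
// and each count maps to a color through user-defined frequency thresholds.
public class HeatmapAggregator
{
    private readonly float cellSize;
    private readonly Dictionary<Vector2Int, int> counts = new Dictionary<Vector2Int, int>();

    public HeatmapAggregator(float cellSize) => this.cellSize = cellSize;

    public void RecordEvent(Vector3 position)
    {
        // Snap the event position to a grid cell on the ground plane.
        var cell = new Vector2Int(
            Mathf.FloorToInt(position.x / cellSize),
            Mathf.FloorToInt(position.z / cellSize));
        counts.TryGetValue(cell, out int n);
        counts[cell] = n + 1;
    }

    // Map a cell's event count to a color via example thresholds.
    public Color ColorFor(Vector2Int cell)
    {
        counts.TryGetValue(cell, out int n);
        if (n >= 20) return Color.red;     // hot: most activity
        if (n >= 5)  return Color.yellow;  // warm
        return Color.green;                // cool: little or no activity
    }
}
```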

The demonstration above shows how this package and ML-Agents benefit each other. Trained agents automatically produce event data that generates visualizations, and the visualizations themselves provide insight into how agents behave in different scenarios. Simulation is a powerful tool that allows for the comparison of different runs, with the ability to easily view differences in performance as parameters change. The Visual Simulations package can be used not only for creating better-tested and more balanced games, but also for other applications in Unity, such as evaluating the navigation performance of a robotics system.

Flexible Test Planner

Oscar Lin, Computer Science, University of Toronto

Quality assurance (QA) plans currently support writing linear test paths. QA teams often have test plans involving commonly repeated steps, such as setups and teardowns. In a linear representation, these repeated steps clutter the overview of the test plan. To address this, I implemented a test generator that compresses these repeated steps in the test path's visualization.

Utilizing the Automator feature in the Automated QA package for Unity, I implemented test generation with editable parts. While the individual tests were easily customizable, the problem of test plan inflation remained, as every single test had to be managed manually.

I addressed this by implementing Staged Run, a test generator that solves the cluttering problem by supporting a simplified overview. The resulting tool can generate tests rapidly with minimal coding and provides an improved view of duplicated procedures.
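
The core idea can be sketched as a test plan that factors shared stages out of otherwise linear tests, so the overview shows each test's unique steps instead of repeated boilerplate. The types below are illustrative assumptions, not the Automated QA package's actual API.

```csharp
using System.Collections.Generic;

// Illustrative sketch of the idea behind Staged Run (not the Automated QA API):
// common setup and teardown stages are defined once and shared across tests.
public class TestStage
{
    public string Name;
    public List<string> Steps = new List<string>();
}

public class StagedTestPlan
{
    public TestStage Setup;                          // shared across all generated tests
    public List<TestStage> Tests = new List<TestStage>();
    public TestStage Teardown;                       // shared across all generated tests

    // Expand the compressed plan back into full linear test paths.
    public IEnumerable<List<string>> GenerateLinearTests()
    {
        foreach (var test in Tests)
        {
            var steps = new List<string>();
            steps.AddRange(Setup.Steps);
            steps.AddRange(test.Steps);
            steps.AddRange(Teardown.Steps);
            yield return steps;
        }
    }
}
```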

The Automator test generator and Staged Run tools are available for download in the Automated QA package for Unity.

Sprawl Metrics

Karen Chen, Computer Science, University of Toronto

Sprawl is an internal framework at Unity for running distributed machine learning experiments locally and in the cloud. As with other machine learning systems, engineers wanted more insight into how their Sprawl jobs ran to better understand their experiments' performance. However, no metrics were being collected in the Sprawl framework. My summer internship project focused on providing out-of-the-box metrics in an intuitive form so that Sprawl users can get a better understanding of their machine learning jobs.

I first researched different design alternatives and created a design doc with the help of my mentor. After several iterations on the doc, I finalized the details and hosted a meeting with the team to gather feedback on the project's overall design and architecture. Following the design review, I moved on to implementing the solution. The Sprawl metrics feature consists of two major components: a Prometheus profiler and a Grafana dashboard. The profiler collects metrics from each process in a Sprawl run and displays the results in a pre-configured Grafana dashboard. Sprawl users can view the performance of their runs on Grafana in real time, both locally and in the cloud. We have also defined different profiling levels based on our users' needs.
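
As a rough sketch of this pattern: since Sprawl is internal, the example below uses the open-source prometheus-net library and hypothetical metric names to show how a process might expose an endpoint that Prometheus scrapes and Grafana then visualizes.

```csharp
using System.Diagnostics;
using Prometheus;

// Sketch of per-process metrics export, assuming the prometheus-net package.
// Metric names here are illustrative, not Sprawl's actual metrics.
class ProcessMetricsExporter
{
    static readonly Gauge MemoryBytes =
        Metrics.CreateGauge("sprawl_process_memory_bytes", "Resident memory of the process.");
    static readonly Gauge CpuSeconds =
        Metrics.CreateGauge("sprawl_process_cpu_seconds_total", "Total CPU time consumed.");

    static void Main()
    {
        // Serve metrics at http://localhost:9184/metrics for Prometheus to scrape.
        var server = new MetricServer(port: 9184);
        server.Start();

        var process = Process.GetCurrentProcess();
        while (true)
        {
            process.Refresh();
            MemoryBytes.Set(process.WorkingSet64);
            CpuSeconds.Set(process.TotalProcessorTime.TotalSeconds);
            System.Threading.Thread.Sleep(1000); // sample once per second
        }
    }
}
```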

Building a Compute Graph for Game Simulation

Stephanie Wang, Computer Science, Carleton University

For my internship, I leveraged a framework and scheduling system for distributed computing experimentation being built inside Unity. The goal is to use the system to balance a parameterized game, just as our users do with Game Simulation, to learn how well the development experience and the overall system work for this use case.

The scheduler is a framework developed to enable building large-scale simulations locally and in the cloud. It allows users to design compute graphs and facilitates communication among nodes. The framework allows more flexibility in computation and data processing. The project tests out the new scheduler and provides valuable insights into the process of implementing the compute graph and how it will work for end users.

During my internship, I built and executed a GameSim workload with the framework. This workload was able to scale up to 50 simultaneous simulations and would be expected to scale even further if more computing resources were allocated. The plan consists of three types of nodes that run in parallel: a loader node, simulation nodes, and an uploader node. The loader node reads a configuration file that holds the parameters of the game and parses through them. Then, the simulation nodes pull each parameter and perform the simulations while accumulating important statistics. Finally, an uploader node aggregates the data and uploads it to the cloud.
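
The sketch below illustrates this three-node topology. Since the scheduler is internal, standard .NET channels stand in for the framework's inter-node communication, and the parameters and simulation logic are hypothetical stand-ins.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Threading.Channels;
using System.Threading.Tasks;

// Illustrative loader -> simulation -> uploader pipeline (not the real scheduler's API).
class ComputeGraphSketch
{
    static async Task Main()
    {
        var parameters = Channel.CreateUnbounded<float>();
        var results = Channel.CreateUnbounded<float>();

        // Loader node: parse game parameters from configuration and feed them in.
        var loader = Task.Run(async () =>
        {
            foreach (var tankSpeed in new[] { 8f, 10f, 12f, 14f })  // hypothetical parameter sweep
                await parameters.Writer.WriteAsync(tankSpeed);
            parameters.Writer.Complete();
        });

        // Simulation nodes: pull parameters and run simulations in parallel.
        var simulators = Enumerable.Range(0, 4).Select(_ => Task.Run(async () =>
        {
            await foreach (var speed in parameters.Reader.ReadAllAsync())
                await results.Writer.WriteAsync(Simulate(speed));
        })).ToArray();

        // Uploader node: aggregate statistics once all simulations finish.
        var uploader = Task.Run(async () =>
        {
            await Task.WhenAll(simulators);
            results.Writer.Complete();
            var stats = new List<float>();
            await foreach (var r in results.Reader.ReadAllAsync())
                stats.Add(r);
            Console.WriteLine($"Mean metric: {stats.Average():F3}"); // upload to the cloud here
        });

        await Task.WhenAll(loader, uploader);
    }

    // Stand-in for running the game headlessly with the given parameter.
    static float Simulate(float tankSpeed) => 1f / (1f + tankSpeed); // dummy metric
}
```

Adding simulation nodes only means widening the worker pool; nothing else in the graph changes, which is what makes the scale-up described above straightforward.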

The compute graph provides the capability to easily scale up to 45 simulation nodes in a container, and well over 10 containers in a pool. With more nodes running simultaneously, we notice a large drop in the total time it takes to run all simulations. Furthermore, users can easily switch from local to cloud execution with a single command-line flag. 

I had the opportunity to use the Unity Engine to enhance the game used in the simulations. I worked on rebuilding some of the GameSim features, including how metrics were accumulated and gathered. The data was aggregated in a stream-processing fashion, meaning aggregates were calculated as new data arrived. I also worked with Kubernetes and Docker when deploying my plan to Google Kubernetes Engine (GKE). Familiarizing myself with the various Kubernetes commands really came in handy when testing and debugging the compute graph.
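
To illustrate what stream-processing aggregation means here: the aggregate is updated incrementally as each value arrives, rather than recomputed over the full data set. A minimal running-mean sketch (my illustration, not GameSim's actual code):

```csharp
// Incremental (streaming) aggregation: no run's full data set is ever held in memory.
public class RunningMean
{
    private long count;
    private double mean;

    public void Add(double value)
    {
        count++;
        mean += (value - mean) / count; // update the aggregate as each value arrives
    }

    public double Mean => mean;
    public long Count => count;
}
```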

The best part of working on this project was providing a proof of concept for how well the scheduler works for the Game Simulation use case. The framework has potential for many other use cases, including computer vision and robotics, while facilitating scalability and migration to the cloud.

Join our team

If you are interested in building real-world experience by working with Unity on challenging artificial intelligence projects, check out our AI careers page; students can see openings on our university careers page. You can start building your experience at home by familiarizing yourself with our Automated QA tools.
