Perception 1.0: Expanding the toolbox for synthetic data

Synthetic data in Unity now includes synthetic humans and homes in this major release of our open-source tools.

The success of increasingly ambitious machine learning (ML)-driven computer vision (CV) systems depends on high volumes of richly annotated data. Because the performance of these systems depends so heavily on their data, gathering better data is often the best way to improve a model. Collecting and annotating real-world data, however, is so expensive and time-consuming that a data-centric approach is out of reach for many projects. That is why thousands of teams have turned to Unity to build synthetic data for computer vision.

Today we are taking a big leap forward with Perception 1.0, a major update to our open-source tools for building synthetic data. A new Synthetic Humans package gives you diverse, poseable, randomizable, and deterministic 3D humans. The new Synthetic Homes dataset generator provides photorealistic home interior data for scene understanding and object detection. Additionally, the Perception package has been updated with depth and normal outputs, path tracing support, and Python tools for visualization and analysis.

With this expanded suite of tools, we want to enable you to build high-quality synthetic data faster than ever.

Synthetic Humans, a diverse people generator built for CV

Synthetic humans

We are excited to announce the release of Unity Synthetic Humans, a 3D person generator built from the ground up for human-centric CV synthetic data. Thousands of hours went into creating the content using a mix of scanning, simulation, and 3D artistry. Today, we are sharing all of this work for free under an academic-use open-source license.

Unlike other digital humans, which provide small variations on a handful of base character models, Synthetic Humans creates massive diversity by combining libraries of hairstyles, fully anonymized facial and ethnic generation, body models, and clothing. As a result, each generated human is anonymous and unique: instead of generating a specific person, you control the sampling distributions along axes of randomization, including ethnicity, height, weight, and age. For example, a dataset with 20% teenagers, 45% young adults, and 35% middle-aged and older people can be specified in the generation parameters.
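
As a purely illustrative sketch of the sampling logic behind such a specification (the names age_distribution and sample_age_group are hypothetical; the actual package is configured through the Unity Editor, not Python):

    import random

    # Hypothetical generation parameters, for illustration only: the real
    # Synthetic Humans package is configured inside the Unity Editor, but
    # the idea is the same. You specify sampling weights per axis of
    # randomization rather than hand-picking individual people.
    age_distribution = {
        "teenager": 0.20,
        "young adult": 0.45,
        "middle-aged and older": 0.35,
    }

    def sample_age_group(distribution):
        """Draw one age group according to the configured weights."""
        groups = list(distribution)
        weights = list(distribution.values())
        return random.choices(groups, weights=weights, k=1)[0]

    # Each generated human draws independently, so a large dataset
    # converges to the requested 20/45/35 split.
    print([sample_age_group(age_distribution) for _ in range(5)])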

Synthetic human examples

Key features:

  • Wide range of diversity in age, body type, and skin tone
  • Rich labeling, including body keypoints and clothing segmentation
  • Fully rigged and skinned bodies and clothes compatible with the Unity Animation System
  • Placement randomization in 3D environments with collision avoidance

Synthetic Homes, a richly annotated home interior dataset

Synthetic home interior

We are also releasing Synthetic Homes, a large-scale dataset of synthetic home interiors, and the associated dataset generator. Use cases for in-home CV are innumerable, but gathering diverse home interior data in the real world is notoriously difficult due to privacy concerns and data collection restrictions. Synthetic Homes aims to accelerate model training for such applications by providing a large dataset of varied home interiors with accurate and rich labeling, as well as a configurable dataset generator. 

We include a variety of randomizations to maximize diversity. These include materials, furniture type and configuration, sunlight angle and temperature, day/night switching, interior lighting temperature, camera angles, clutter, skybox, door and curtain animations, and more. The dataset generator gives you control over many of these elements, enabling you to tune them to your liking.

Interior lighting in homes is complex and intentional, making photorealism especially important. We used Unity’s multi-bounce path tracing to achieve physically accurate global illumination and reflections. This accuracy helps bridge the so-called “Sim2Real” gap, improving a model’s ability to perform well in the real world after training on synthetic data.

The Synthetic Homes project includes a 100,000-image dataset, a configurable dataset generator, and a notebook for data analysis. The dataset includes rich labels for semantic and instance segmentation, bounding boxes, depth, and normals, along with environmental information like occlusion percentage and camera position. To enable you to iterate on the data, we also provide the dataset generator, which lets you tweak parameters like camera positioning, blur randomization, and image size.
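
To make that concrete, here is a hypothetical configuration sketch. The key names below are illustrative assumptions, not the generator’s actual schema, so consult the Synthetic Homes documentation for the real parameter names:

    # Hypothetical configuration sketch: these key names are illustrative
    # assumptions, not the Synthetic Homes generator's actual schema.
    generator_config = {
        "frames": 10_000,                                # dataset size
        "image_size": {"width": 1920, "height": 1080},   # output resolution
        "camera": {
            "height_range_m": [1.0, 1.8],                # randomize camera height
            "pitch_range_deg": [-15, 15],                # randomize camera tilt
        },
        "blur": {"enabled": True, "max_intensity": 0.5}, # blur randomization
    }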

A big update to the Perception package

Depth, Normals, and Path Tracing in Perception 1.0

The Perception package powers synthetic data in the Unity Editor. We are broadly expanding the package in this release to support new CV tasks, increase quality, and speed up dataset development. New Perception features include:

  • A conveyor belt sample illustrates how to capture video datasets with randomized object spawning and physics.
  • Path tracing integration produces highly photorealistic images using physically based multi-bounce lighting, with minimal setup.
  • An occlusion labeler calculates how much of an object is occluded by other parts of the scene and how much is offscreen, making it possible to filter out annotations on highly occluded objects (see the sketch after this list).
  • Depth and normal labelers capture rendering layers that can support new tasks or give models additional information for better predictions.
  • Transparency and two-sided geometry support enables pixel-perfect labeling of plants in agriculture and outdoor environments.
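
As referenced above, here is a minimal Python sketch of how occlusion metadata can be used to filter annotations. The record fields (instance_id, occlusion) are assumptions for illustration, so check the labeler’s actual output schema:

    OCCLUSION_THRESHOLD = 0.8  # drop objects that are more than 80% occluded

    def filter_visible(bounding_boxes, occlusion_records):
        """Keep only boxes whose object is mostly visible.

        Field names ("instance_id", "occlusion") are assumptions for
        illustration; check the occlusion labeler's output schema.
        """
        occlusion_by_id = {
            rec["instance_id"]: rec["occlusion"] for rec in occlusion_records
        }
        return [
            box for box in bounding_boxes
            if occlusion_by_id.get(box["instance_id"], 0.0) <= OCCLUSION_THRESHOLD
        ]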

We also want to give the community more influence over Perception, so alongside this release we are opening up the package to community contributions.

pysolotools, data analysis, and visualization in Python

Synthetic data analysis

Data exploration and analysis are critical to iterating quickly on synthetic data, and we want to make them as easy as possible. With this update we are introducing a new dataset format called SOLO, designed for large-scale image datasets and extensible to new types of annotations and metrics. SOLO datasets are separated on disk by frame, enabling distributed dataset generation and processing.
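
As a rough sketch, a SOLO dataset on disk looks approximately like the layout below. The file names here are approximate; see the SOLO schema documentation for the authoritative layout:

    solo/
        metadata.json                  # dataset-level information
        annotation_definitions.json    # schema for each annotation type
        sequence.0/
            step0.camera.png           # rendered RGB capture
            step0.frame_data.json      # annotations and metrics for this frame
        sequence.1/
            ...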

In tandem with the SOLO format, we are releasing pysolotools, a new open-source Python package for working with SOLO data. It provides an iterator interface that lets you process a dataset frame by frame, plus a framework for computing common dataset statistics, universal to most CV problems, such as object counts, size distributions, and heatmaps. pysolotools also makes it easy to write custom scripts that convert SOLO into whatever format you need, and it ships with pre-built converters for popular formats, including COCO.
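
Here is a minimal sketch of the iterator interface, assuming the Solo consumer API from the initial pysolotools release; verify the class and attribute names against the version you install:

    # Minimal sketch, assuming the Solo consumer API from the initial
    # pysolotools release; verify names against your installed version.
    from pysolotools.consumers import Solo

    solo = Solo(data_path="path/to/solo")  # root directory of a SOLO dataset

    # Iterate frame by frame: each frame bundles the captures and
    # annotations recorded for one step of the simulation.
    for frame in solo.frames():
        print(frame.sequence, frame.step)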

Visual inspection of synthetic datasets

Finally, we recognize how important it is to visually verify the data you create. pysolotools-fiftyone integrates with Voxel51’s FiftyOne viewer, allowing you to visually inspect SOLO datasets.

Get started

You can learn more about building synthetic data on Unity’s Computer Vision hub, including tutorials, case studies, and links to all of the content and examples. 

Synthetic data for machine learning is a growing, fast-moving field. Connect with the community and with us on the Computer Vision forums, where you can share questions and ideas.
