
Since 2018, Cross Compass has integrated Unity into the pipeline of several of its consulting services for the manufacturing field to train and validate AI algorithms safely before deployment. Read on to learn how this AI company came to use gaming technology to add value to such a mature industry.

Cross Compass is a leading AI company providing state-of-the-art solutions to global industry leaders in manufacturing, robotics, gaming, healthcare, design, and marketing. Established in Tokyo in 2015, Cross Compass develops cutting-edge techniques in Deep Learning, Machine Learning, and Artificial Life to increase safety, quality, and productivity for the benefit of society.

We invited them to share in their own words why they embraced Unity and how it helps them deliver the following benefits:

  • Offers a platform for discussing specifications and progress with clients and partners
  • Avoids many safety checks required to set up the data collection environment
  • Provides an unlimited amount of data for AI training and testing
  • Allows for faster iterative cycles of AI training and testing in simulation
  • Leads to the delivery of AI solutions with higher performance and quality to end users
  • Increases the value of human intervention while AIs handle the repetitive tasks reliably

Learn more in this guest post from Cross Compass by Romain Angénieux, AI Simulation Group Leader; Steven Weigh, Global Brand Identity Designer; and Antoine Pasquali, Chief Technology Officer.

Challenges in introducing AI for manufacturing environments

Designing and deploying cutting-edge AI solutions for manufacturing environments is a complicated process. Manufacturing production lines have been meticulously optimized and perfected for decades. Experts have designed, tweaked, and iterated upon every detail end-to-end, to ensure the highest efficiency, safety, and quality standards that meet strict industry requirements and tight delivery schedules. This results in zero room for experimentation, disruption, risk, or unproven methods.

AI, by comparison, is evolving at lightspeed. Every other day brings new research on the latest methods, expanded possibilities, and new frontiers. However, most of this research only exists in the lab, built upon carefully curated data that bears little resemblance to the noisy, unstructured, and unlabeled data (or, as is often the case, the complete absence of data) found in the real world. In stark contrast to manufacturing, AI rarely takes the time to validate itself under real-world conditions. The two industries couldn't be further apart in their approaches.

In a lab, reaching 99% accuracy is a laudable achievement. In the manufacturing environment, a remaining 1% error rate is an unacceptably high failure, defect, or safety risk that can have severe real-world consequences. Given this dichotomy, how might we introduce the latest AI solutions into such a precise, constrained environment? And how might we experiment with, validate, and deploy AI solutions in a way that doesn't introduce risk, cost, downtime, or some combination of all three? These are the questions we were asking ourselves when tasked with training and deploying AI on our clients' factory floors.

Using simulation to develop AI solutions

The most obvious solution was to bring the manufacturing environment into the lab. That is, to recreate the factory floor in a simulated environment where we can develop our AI solutions without fear of downtime or damaging state-of-the-art equipment.

A simulated environment gives us total control over factory conditions, allowing us to change parameters, experiment with, disrupt, and validate our algorithms in order to investigate new AI solutions. In other words, simulation lets us do all the things we can't do in the real world.

How we chose Unity

In 2018, we conducted an analysis of the solutions on the market in order to determine which technology would best match our needs in terms of simulation.

The goal was to make it easier, faster, and safer to set up the environment, to collect the data, and to validate the AI’s performance before deploying it on the physical robot.

We began by examining robot-specific engines designed to simulate a robot's behaviors, joint properties and dependencies, and sensors. These engines are extremely accurate in terms of physics, behavior control, and low-level robotics. However, despite their strong attachment to realism, we found that they lacked the flexibility to recreate more complex scenes.

Meteorological Domain Randomization (MDR) for robotic applications. We make use of Unity’s High Definition Render Pipeline (HDRP) and the Shader Graph workflow to create variations in sky conditions, lights, backgrounds, and object textures. (Courtesy of Cross Compass)

In the context of AI, although we indeed require accurate physics and perfect control over the robot’s behaviors, we also need to import a wide range of objects of different shapes, with realistic textures and visuals, such as lights, shadows, camera effects, and so on.

Here, we discovered that game engines responded well to this variety of demands, offering simple answers to these additional constraints. Robot manipulation would still be possible at a higher level of control, which matched our strategy of developing hardware-agnostic solutions.

Notably, Unity allowed us to focus on creating only the features needed for training AIs on robots, while leaving the rest to the engine. To save development time, we could directly rely on its existing file importer, rendering system, physics engine, script lifecycle, scheduler, and deployment options.

In addition, Unity offers regular updates as well as contributions from collaborators to tailor the engine to more advanced applications such as our own. The active support of Unity’s ecosystem would ensure that any potential issue would be addressed properly.

At the end of our analysis, Unity emerged as our best option for its versatility and evolvability.

How we use Unity to bring AI to manufacturing automation

Today, Unity developments are fully integrated into our processes for manufacturing automation. On the research side, we create diverse scenes, ranging from picking to navigation to adaptive control based on sensor feedback, to test the robustness of our AI algorithms under unexpected conditions, and to advance our technology further into uncharted territory.

Each research project originates from a client’s need and is then expanded toward a more general solution to address similar cases. We integrate these solutions one after the other in our main simulation environment as packaged assets, so as to maintain a continuous development workflow, compatibility over time, and clean code.

Since we started using Unity, we have mainly developed features for importing objects and robot parts, creating realistic scenes, and applying domain randomization techniques. In parallel, we have established communication protocols with our AIs and with third-party robotic software, as well as different simulation configurations for data generation, AI training, testing, and validation in real time for all scenes.

Unity offers the required flexibility for such expert development, such as physics calculations that are independent of the simulation speed, allowing us to generate accurate data a hundred times faster than the eye can see. The Asset Store and tools from Unity partners also provide us with occasional tweaks and features for faster progression.
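This decoupling of physics from wall-clock time can be illustrated with a minimal sketch (illustrative Python, not Unity's actual engine code): the simulation advances in fixed steps of simulated time, so the result is deterministic and generation speed is bounded only by compute.

```python
def simulate_fall(sim_seconds, dt=0.02, gravity=-9.81, start_height=10.0):
    """Advance a falling object in fixed steps of dt simulated seconds.

    The trajectory depends only on dt and the number of steps, never on
    how fast the loop runs on the host machine, so data can be generated
    far faster than real time with identical results.
    """
    position, velocity = start_height, 0.0
    for _ in range(int(sim_seconds / dt)):
        velocity += gravity * dt      # semi-implicit Euler integration
        position += velocity * dt
    return position

# One simulated second of free fall from 10 m, computed in a fraction of
# a wall-clock second regardless of the chosen dt.
height = simulate_fall(1.0)
```

Unity's fixed timestep works on the same principle: because each physics step covers a constant slice of simulated time, running the loop faster only shortens the wall-clock wait, never the accuracy.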

Unity has allowed us to significantly reduce the time and cost involved in training, testing, and deploying AI solutions to our clients and partners. The result is higher levels of safety, an increase in the value of human intervention on the factory floor, and a higher quality product delivered to end users. Unity's physics engine and features allow us to control every aspect of the simulated factory floor, resulting in more precise and robust AI solutions than ever before.

Here is a common workflow for using Unity in our projects:

AI solution mockup

On consulting projects, we create branches of our main Unity environment, where we can work freely to meet the specific needs of clients, later merging the added features back to the main branch. Typically, we would start by making a mockup of the solution by selecting and customizing a preconfigured scene with our relevant assets. Showing a demo in simulation to clients helps clarify the specifications of the project as well as its end goal in terms of deployment in the factory.

AI training and testing

We then prepare the environment and the AI for training. Simulation affords us the luxury of data generation: we can generate data in a much faster, safer, and more flexible manner than real-world collection would allow. Only AI experts can determine which information is relevant to extract from the simulation; the data labeling, however, comes for free. This means that we can provide the AI with any data, at the highest precision, whereas in the real world the same data might be hard or even impossible to gather. Further, there is no limit to the amount of data generated.
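The principle behind free labeling can be sketched as follows (a hedged illustration with hypothetical function and field names, not our actual pipeline): the generator knows the ground truth before the observation is ever rendered, so every sample arrives pre-labeled.

```python
import random

def render(pose):
    # Stand-in for the simulated camera: in Unity this would be a
    # rendered image; here it is a trivial tuple derived from the pose.
    return (round(pose["x"], 3), round(pose["y"], 3), round(pose["angle"], 1))

def generate_labeled_sample(rng):
    """Drop a part at a random pose; the pose itself is the label."""
    pose = {
        "x": rng.uniform(-0.5, 0.5),        # metres across the tray
        "y": rng.uniform(-0.5, 0.5),
        "angle": rng.uniform(0.0, 360.0),   # degrees of rotation
    }
    observation = render(pose)
    return observation, pose                # ground truth obtained for free

rng = random.Random(0)                      # reproducible generation
dataset = [generate_labeled_sample(rng) for _ in range(1000)]
```

Scaling the dataset is a matter of raising the loop count, which is what removes the usual limit on training-data volume.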

Domain randomization applied to the visual input of the AI. MDR techniques are applied in simulation to ensure the AI performs effectively under real-world conditions. (Courtesy of Cross Compass)

At this point, we test our models and fine-tune their accuracy to within the client-specified margin of error to ensure a robust final solution. For this, we need to train our AIs so that they are effective in the real world, under unexpected variations in the light and noise received by cameras and other sensors. Our packaged asset for domain randomization was specifically designed to bridge this gap. We then validate our AIs in real-time simulation, which also serves to demo the AI solution to clients.
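The idea behind domain randomization can be sketched as follows (a simplified illustration with hypothetical parameter names, not the packaged asset itself): every training scene draws its visual conditions from deliberately wide distributions, so real-world lighting variation becomes just another sample the AI has already seen.

```python
import random

def randomize_scene(rng):
    """Draw one set of visual conditions for a training scene."""
    return {
        "light_intensity": rng.uniform(0.2, 2.0),    # dim to overexposed
        "light_color": tuple(rng.uniform(0.7, 1.0) for _ in range(3)),
        "camera_noise_std": rng.uniform(0.0, 0.05),  # simulated sensor noise
        "background_id": rng.randrange(50),          # swap wall/floor textures
    }

rng = random.Random(42)
# Each generated training image gets its own randomized conditions,
# forcing the model to rely on task-relevant features rather than on
# any one fixed appearance of the scene.
scene_params = [randomize_scene(rng) for _ in range(10_000)]
```

In Unity these parameters would drive HDRP lights, materials, and camera post-processing before each capture; the sketch only shows the sampling side.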


Deployment to the factory

Finally, we deploy to the factory. Our robotics engineers, with the assistance of system integrators, prepare either a test bench or the final system directly at the customer's site, depending on the number of deployment items. AI engineers run the first tests, and typically a member of our simulation team visits as well to verify conformity with the simulated scene used for training. This allows for quick adjustments, if necessary, before shipping the final AI, trained on a much larger dataset, and finally instructing the factory technicians on how to use their newly acquired AI algorithms.

What’s next?

Evolution of our picking solution across versions 1.0 to 3.0. Version 1 replicated realistic physics conditions when picking objects from the bucket. Version 2 focused on simulating the robot and the grippers. The MDR was then developed and applied in Version 3. The AI techniques were also perfected throughout each version, as can be seen in the camera widgets. (Courtesy of Cross Compass)

Unity is a work in progress just as much as our own development is, and our codebase and processes are improving with each new project. We haven't yet faced a challenge that we can't meet using Unity and our expertise.


Get started with robotics simulation with Unity.

July 24, 2020 in Industry | 9 min. read
