Advance your robot autonomy with ROS 2 and Unity

ROS2 photo of machine learning bot in a manufacturing environment

Unity is excited to announce our official support of ROS 2, whose robust framework, coupled with simulation, will enable myriad new use cases.  

The Robot Operating System (ROS), which began in 2007, is a popular framework for developing robot applications. Although originally designed to accelerate robotics research, it soon found wide adoption in industrial and commercial robotics. ROS 2 builds on ROS’s reliable framework while improving support for modern applications like multi-robot systems, real-time systems, and production environments. Unity is extending its official support of the ROS ecosystem to ROS 2.

Modern robotics is shifting its focus towards “autonomy”: the study and development of algorithms capable of making decisions in the absence of strict rules defined by a human developer. Simulation supports this transition by enabling greater flexibility and faster experimentation than real-world testing. We’ve developed an example, Robotics-Nav2-SLAM, to demonstrate how to get started simulating simultaneous localization and mapping (SLAM) and navigation for an autonomous mobile robot (AMR) with Unity and ROS 2.

ROS 2 powering modern robotics

While ROS remains an excellent framework for robotics prototyping, it is reaching the end of its lifespan and lacks some features necessary to go beyond prototyping into full-scale production and deployment of a robotic system. ROS 2’s technical roadmap was established, and is maintained, by a committee of industry veterans, with explicit tenets to ensure that ROS 2 is a suitable framework for robotics end users. ROS 2 supports more operating systems and communication protocols and is designed to be more distributed than ROS.

Simulation powering autonomy

Many of the emerging use cases for ROS 2 focus on autonomy. Introducing autonomy means the decisions a robot makes, and the results of those decisions, are not neatly predictable using only a state machine and a collection of mathematical formulae, as they may be in many industrial robotics use cases. Compared to an industrial robot’s, an autonomous robot’s operating environment is exponentially larger, and the permutations of inputs it encounters far surpass what can be reproduced in a controlled laboratory environment. To fully validate that an autonomous robot behaves the way you expect, you can either do it on the robot, in your own personal pocket dimension where time has no meaning and reality is everything and nothing all at the same time, or you can use the next best thing: a suitably robust simulation.

If a robot is expected to sense an environment, a simulation must be capable of accurately modeling those sensors without compromising the accuracy of the environment’s simulated topology and physics. If there are other agents in that environment, e.g., people or other robots, then the simulation must be capable of modeling their behavior while still maintaining the accuracy of its sensor simulation, topology representation, and physics modeling. To fully exercise a robot against all the scenarios it might encounter, this simulation needs to be run many, many, many times. This is all to say that simulation in support of autonomous robotics requires four things not often required by industrial robotics: flexibility, extensibility, scalability, and fidelity – all without sacrificing performance. Unity sits at the intersection of all these requirements, which is why we are building more features into our platform to support the development of autonomous robots.

With Unity’s Robotics packages, you’ll have access to the interfaces we’ve already built to make communicating with ROS or ROS 2 easy. You can import existing robot configurations directly from URDF files with our URDF Importer, and you can start exercising your robot against Unity’s high-quality, highly efficient rendering pipeline and a performant, accurate physics simulation. Through Unity’s Asset Store, you have access to a great variety of additional premade environments and props to help you model your robot’s specific environment and task. With a few clicks, the simulation you assemble can be built and deployed to any mainstream OS, be it Windows 10, macOS, or Linux. Using C# scripting, Bolt visual scripting, or any of the many scripting and utility toolkits available in the Asset Store, you can customize the functionality of your simulation to suit your specific use case.
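
To give a sense of what that C# scripting looks like in practice, here is a minimal sketch of a publisher built on the ROS-TCP-Connector API. The topic name and velocity values are illustrative assumptions, not taken from our samples:

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Geometry;

// Publishes a geometry_msgs/Twist velocity command every physics step.
public class CmdVelPublisher : MonoBehaviour
{
    ROSConnection ros;
    const string TopicName = "cmd_vel"; // hypothetical topic name

    void Start()
    {
        // Finds (or creates) the scene's ROS connection component.
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<TwistMsg>(TopicName);
    }

    void FixedUpdate()
    {
        // Drive forward while turning; the values are purely illustrative,
        // and a real project would likely throttle the publish rate.
        var msg = new TwistMsg
        {
            linear = new Vector3Msg { x = 0.2 },
            angular = new Vector3Msg { z = 0.5 }
        };
        ros.Publish(TopicName, msg);
    }
}
```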

One-click ROS 2 support

Moving your Unity project to ROS 2 is simple. In the ROS-TCP-Connector package, we’ve added a dropdown menu that allows you to toggle the package between ROS and ROS 2 integration. Upon changing the protocol, Unity automatically recompiles the package against the message definitions and serialization protocol you’ve selected. To test it out, simply make this change in your own project, or pull down our example repository, Robotics-Nav2-SLAM, which contains the components needed to use Unity as the simulated source of sensor and odometry information for the Nav2 Navigating while Mapping tutorial.

Configuring Unity for ROS 2 communication
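
Because the connector handles message generation and serialization for whichever protocol you select, the same C# script compiles and runs unchanged against ROS or ROS 2. As a minimal sketch, assuming a LIDAR publishing on a topic named "scan" (a common convention, not something specific to our example):

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using RosMessageTypes.Sensor;

// Logs each incoming sensor_msgs/LaserScan message; the identical script
// works whether the connector is set to ROS or ROS 2.
public class ScanLogger : MonoBehaviour
{
    void Start()
    {
        ROSConnection.GetOrCreateInstance()
            .Subscribe<LaserScanMsg>("scan", OnScan); // "scan" is assumed
    }

    void OnScan(LaserScanMsg msg)
    {
        Debug.Log($"Received {msg.ranges.Length} range readings");
    }
}
```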

This example project demonstrates how to use Unity to simulate a navigation system running in ROS 2. The concept of navigation is straightforward and doesn’t change much in the context of autonomous robotics: navigation algorithms aim to find a path from where one is to where one wants to be. However, to get from where one is to where one is going, one must first do SLAM – simultaneous localization and mapping. SLAM describes a collection of algorithms built to answer the question, “Where am I right now, and where have I been?” Humans perform SLAM constantly, as an intrinsic part of the processing pipeline between our senses and our brain. For autonomous robots, performing accurate SLAM remains a challenging proposition in most real-world environments. What, exactly, an autonomous mobile robot requires to always know where it is, relative to everywhere it’s ever been, is still an area of active research. The only way to really answer this question for a given use case is to try a lot of different things (sensors, algorithms, etc.) and see what sticks.

In our example, you will find a simple warehouse environment, a fully articulated model of a TurtleBot 3 mobile robot with simulated LIDAR and motor controllers, and a Dockerfile used to build an image containing all of the ROS 2 dependencies necessary to exercise the Nav2 and slam_toolbox stacks against our simulation. The steps of Nav2’s tutorials will provide useful context if you’ve never used ROS 2 or worked with SLAM algorithms before. All the instructions you need to get started and run the example in Unity are in our repository.

Left: RViz display of ROS 2 messages generated in and sent by Unity. Right: TurtleBot 3 performing SLAM and autonomous navigation in Unity.
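
Our example wires this up through prefabs and components in our packages, but as a rough sketch of the underlying idea, a script along these lines could publish a robot's simulated odometry for slam_toolbox to consume. The topic name and the conversion details are assumptions for illustration:

```csharp
using UnityEngine;
using Unity.Robotics.ROSTCPConnector;
using Unity.Robotics.ROSTCPConnector.ROSGeometry;
using RosMessageTypes.Geometry;
using RosMessageTypes.Nav;

// Publishes the attached GameObject's pose as nav_msgs/Odometry.
public class OdometryPublisher : MonoBehaviour
{
    ROSConnection ros;
    const string TopicName = "odom"; // assumed topic name

    void Start()
    {
        ros = ROSConnection.GetOrCreateInstance();
        ros.RegisterPublisher<OdometryMsg>(TopicName);
    }

    void FixedUpdate()
    {
        var msg = new OdometryMsg();
        // Convert Unity's left-handed coordinates to ROS's right-handed
        // forward-left-up (FLU) convention before publishing.
        msg.pose.pose = new PoseMsg(
            transform.position.To<FLU>(),
            transform.rotation.To<FLU>());
        ros.Publish(TopicName, msg);
    }
}
```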

Get started today

Roboticists new to Unity and Unity developers new to robotics are encouraged to try our ROS 2 integration and perform autonomous navigation with Robotics-Nav2-SLAM. This is just a small example of what you can build by integrating our robotics tools with the many other powerful packages available from Unity. In tandem, the Unity Robotics team continues to build and release features explicitly in support of common robotics use cases, with an emphasis on scalability and extensibility.

Unity will also be hosting a workshop at ROSCon this year that extends the Nav2-SLAM-Example to support multiple robots with specialized roles working together to accomplish a specific task.
