We created Unity Sentis to give developers the ability to bring AI models into games and applications. Now in pre-release open beta, Sentis allows for complex features like object identification, speech recognition, and smart NPCs in all types of projects.
Once imported via the ONNX file standard, these AI models can be run directly on all Unity-supported platforms. This means that you can run the majority of AI models directly in the Unity Runtime on a user’s device, without the need for cloud infrastructure.
Ultimately, the model you use in your project is completely up to you, and the right choice depends on the task you are trying to solve. You might start by browsing interesting models on marketplaces such as Hugging Face, Keras, or PyTorch. You can also train your own model if you have a machine learning background, or use Unity ML-Agents for reinforcement learning needs. The main requirement is that the model must be converted to the ONNX file format; you can use an ONNX converter like TF2ONNX if needed.
See Import a model in the Sentis documentation for a code sample.
Loading a model into Unity follows the same process as any other asset: simply drag and drop it into the Assets folder of the Project window within the Editor. Sentis automatically optimizes the imported model. Then, create a runtime Model object.
See Load a model in the Sentis documentation for a code sample.
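As an illustration, a minimal loading script might look like the following. This is a sketch based on the pre-release Sentis API (the `Unity.Sentis` namespace), which may change; the class and field names here are placeholders.

```csharp
using UnityEngine;
using Unity.Sentis;

public class LoadModelExample : MonoBehaviour
{
    // Assign the imported .onnx asset in the Inspector
    public ModelAsset modelAsset;

    Model runtimeModel;

    void Start()
    {
        // Create the optimized runtime representation of the imported model
        runtimeModel = ModelLoader.Load(modelAsset);
    }
}
```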
Creating an input is fairly straightforward: check the shape and size of the required model input in the ONNX Model Import Settings, then create a tensor from your data source. If the model needs multiple inputs, store them all in a dictionary.
See Create input for a model in the Sentis documentation for a code sample.
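For example, a digit recognition model like MNIST expects a 1 × 1 × 28 × 28 tensor. A sketch of creating that input with the pre-release Sentis API follows; the input name and shape are placeholders, so use the values shown in your model's Import Settings.

```csharp
using System.Collections.Generic;
using Unity.Sentis;

// Shape taken from the ONNX Model Import Settings, e.g. (1, 1, 28, 28) for MNIST
float[] pixelData = new float[28 * 28]; // fill this from your data source
TensorFloat inputTensor = new TensorFloat(new TensorShape(1, 1, 28, 28), pixelData);

// If the model takes multiple inputs, key them by the names
// shown in the Import Settings ("input_0" here is a placeholder)
var inputs = new Dictionary<string, Tensor>
{
    { "input_0", inputTensor }
};
```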
When you're ready to run your model, you need to create a worker that breaks the model down into tasks that can run on the user's device (CPU or GPU).
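A sketch of creating a worker, assuming the pre-release Sentis API and a `runtimeModel` created as described above:

```csharp
using Unity.Sentis;

// Pick the backend for the target device: GPUCompute where a GPU is
// available, or CPU as a fallback
IWorker worker = WorkerFactory.CreateWorker(BackendType.GPUCompute, runtimeModel);
```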
Once your worker is set up, it’s time to run your AI model. Here, you hook up the inputs and outputs of the model to your game code, then use the Profiler to check that execution stays within your frame-time budget. If it’s taking too long, you can “slice” your model across multiple frames, or explore other performance tuning options in Sentis.
See Run a model in the Sentis documentation for more information on how to run your model, get outputs, and optimize the output.
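Putting it together, a run might look like the sketch below. It assumes the pre-release Sentis API (names such as `MakeReadable` and `ToReadOnlyArray` reflect the beta and may change) and the `inputTensor` and `worker` from the earlier steps.

```csharp
using Unity.Sentis;

// Schedule the model for execution with the prepared input
worker.Execute(inputTensor);

// Retrieve the output tensor and download it so it can be read on the CPU
TensorFloat outputTensor = worker.PeekOutput() as TensorFloat;
outputTensor.MakeReadable();
float[] results = outputTensor.ToReadOnlyArray();

// Tensors and workers own native memory, so dispose of them when finished
inputTensor.Dispose();
worker.Dispose();
```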
The final step is to test and deploy your game. Do this as you normally would on any Unity Runtime platform. You have a few options for shipping your model with the game binary: it can be embedded in the build, or you can ship it as a streaming asset so that it’s downloaded only when needed. You may also consider encrypting your model for security reasons.
See Encrypt a model in the Sentis documentation for a code sample.
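Sentis itself doesn’t mandate an encryption scheme; one common approach is to encrypt the serialized model bytes with standard .NET cryptography and decrypt them at load time. A hedged sketch of that AES layer follows (how you serialize the model and manage the key is up to you and the Sentis documentation):

```csharp
using System.IO;
using System.Security.Cryptography;

public static class ModelCrypto
{
    // Encrypt serialized model bytes with AES; key and IV management is your responsibility
    public static byte[] Encrypt(byte[] modelBytes, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        using var ms = new MemoryStream();
        using (var cs = new CryptoStream(ms, aes.CreateEncryptor(), CryptoStreamMode.Write))
            cs.Write(modelBytes, 0, modelBytes.Length);
        return ms.ToArray();
    }

    // Reverse the operation at load time before handing the bytes to Sentis
    public static byte[] Decrypt(byte[] encryptedBytes, byte[] key, byte[] iv)
    {
        using var aes = Aes.Create();
        aes.Key = key;
        aes.IV = iv;
        using var ms = new MemoryStream();
        using (var cs = new CryptoStream(ms, aes.CreateDecryptor(), CryptoStreamMode.Write))
            cs.Write(encryptedBytes, 0, encryptedBytes.Length);
        return ms.ToArray();
    }
}
```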
This beginner sample shows the basics of running a neural network with Sentis: a handwritten digit recognition model called MNIST reads numbers drawn by the player, identifies the most probable digit, and uses it to unlock doors in a locked room.
This sample uses Sentis to build a bot opponent with configurable difficulty for the board game Othello. It runs a neural network trained on the game rules that estimates win probabilities after each move, then predicts the future moves most likely to win. This is a simpler solution than a traditional approach using complex heuristics and tree traversal.
This sample showcases how to integrate Sentis into an augmented reality (AR) experience. It uses a depth estimation neural network to allow real-world objects to occlude objects in the game scene. The depth is determined by processing video frames from the camera, making it a more scalable solution than the traditional approach of using a lidar sensor, which is available only on more expensive phones.
This works on mobile devices with a camera and does not require a lidar sensor.
AI models can help you create engaging features that would be impossible or very time consuming to develop with traditional code. The use cases span all categories of AI models, and the application depends on the models you choose to implement. Here are some examples of where Sentis can aid the development process.
One big reason developers choose Unity is the ability to publish more easily across multiple platforms – but optimization can still be a challenge. Using an upscaling model like Super Resolution from TensorFlow allows you to upscale low-resolution images or textures in your game to get to production quality, or help optimize assets only when needed, across different devices.
Player interaction is key for connected online games when it comes to engaging with both NPCs and other players. With a speech-to-text model like OpenAI’s Whisper, you can convert live speech to in-game text. You can also bring an AI model in to automate dialog and create meaningful interactions between players and NPCs without the limitations of manual scripting.
While a lot of focus in AI is on creating novel features, we’re also seeing great applications when it comes to improving game performance. One example is improving ray tracing on mobile by using an upscaling GAN AI model to hallucinate pre-rendered frames of a game scene. With an application like this, you could implement path-tracing features such as light refraction, caustics, and area lights in smaller projects without a hit to performance on the user’s device.
Augmented reality (AR) and virtual reality (VR) are also great potential use cases for AI models with Sentis. For example, you can use the Ultralytics YOLO model in VR to detect objects in a game scene, or in AR to detect real-world objects from the device camera feed. This can offer the user a super-vision sense that is only possible with AI.
Unity Sentis is now available for free in open beta to all Unity developers operating on Unity 2021.3 or higher through the Package Manager.