We believe that the world is a better place with more creators in it. We make tools and services that help creators succeed, from individuals building their first games to professional studios working on the next great franchise.
That’s why we continue to be excited by the promise of AI- and ML-driven techniques to reduce complexity, speed up creation, and, most importantly, unlock new ideas. Simply put, we think that this technology’s accessibility will help more people to become creators.
We’ve worked for years, both internally and with partners, to explore how AI can be used in simulation, content creation, and game optimization. We see the present moment’s Cambrian explosion of generative AI as an opportunity to go even further.
Unity is uniquely positioned to help you succeed while adopting generative AI because of the Unity Editor, runtime, data, and the Unity Network.
More people use the Unity Editor to create games and other real-time 3D (RT3D) experiences than any other workflow in the world. Over the last 18 years, the Unity Editor has helped to democratize game development while contributing to a massive proliferation of new games across countless devices.
Today, we strongly believe that the power of generative AI will enable Unity creators to be much more productive while ushering in scores of new creators who will face lower barriers to building RT3D games and experiences. We think that these AI tools will complement rather than replace existing tools and workflows. They offer the promise to help creators do more for and by themselves by filling the gaps in skill sets and resources so they can achieve what scarcely seems possible today.
Just as a student might use a generative pre-trained transformer (GPT) tool to jumpstart research or even create a first draft before refining and finalizing a paper in Microsoft Word or Google Docs, Unity creators will be able to use natural-language generative tools together with deterministic, non-AI tools to create code, animations, physical effects, or other real-time content. Creators will move back and forth from rough approximations and text to fine-grained controls and code to iterate and refine the experience they envision.
Better still, we’re building the technology in the Unity Editor to better define what AI draws from. This means not only using appropriate and licensable datasets for generating content but also integrating AI techniques that are customized to a creator’s specific content (for example, applying Low-Rank Adaptation, or LoRA, fine-tuning during asset builds to deliver new content trained on their existing work).
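To make the LoRA idea above concrete, here is a minimal numeric sketch of low-rank adaptation: a frozen pretrained weight matrix gets a small trainable update factored into two low-rank matrices. All names and shapes are illustrative assumptions for this sketch, not any Unity API.

```python
import numpy as np

rng = np.random.default_rng(0)

d_out, d_in, r, alpha = 8, 8, 2, 4  # r << d_in: the low-rank bottleneck

W = rng.normal(size=(d_out, d_in))      # frozen pretrained weight (untouched)
A = rng.normal(size=(r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                # trainable up-projection, zero-initialized

def adapted_forward(x):
    # Base output plus the scaled low-rank update: W x + (alpha / r) * B A x.
    # Only A and B (d_in*r + r*d_out parameters) would be trained on the
    # creator's own assets; W stays as shipped.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.normal(size=d_in)
# Because B starts at zero, the adapter begins as an exact no-op on the base model:
assert np.allclose(adapted_forward(x), W @ x)
```

Training then nudges only `A` and `B`, which is why a LoRA pass over existing work is cheap enough to run during an asset build rather than requiring a full model retrain.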
The Unity runtime powers more real-time applications than any other in the world, with billions of downloads on billions of devices every month, in well over 100 countries. This means that Unity is the predominant way that content created with AI tools will come alive for consumers and users, since the output of any generative AI creation tool made available in the Unity Editor is delivered via the Unity runtime. The Unity runtime makes 3D content interactive and available on almost any device, ensuring that it responds to user input while simulating effects like lighting and physics.
But we see an even bigger opportunity. We believe that AI is not just the domain of creation tools, but that it offers the opportunity for new forms of interaction by moving inference – the process of feeding data through a machine learning model – to runtime.
We’ve been working on this technology – code-named “Barracuda” – for more than five years. What will it mean when designers can build game loops that rely on inference on devices from mobile to console to web and PC? What happens when that AI capability is fast, efficient, scalable, and does not require expensive cloud compute?
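To illustrate why on-device inference can avoid cloud compute: at its core, inference is just a forward pass of data through trained weights, which is a handful of matrix operations per frame. The sketch below is illustrative only; it is not the Barracuda API (Barracuda is a C# Unity package), and the shapes and the NPC-action framing are assumptions made for this example.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

# Pretend these weights were trained offline and shipped inside the game build.
rng = np.random.default_rng(1)
W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
W2, b2 = rng.normal(size=(3, 16)), np.zeros(3)

def infer(observation):
    # One cheap matrix pipeline per call: no network round trip, no cloud bill.
    h = relu(W1 @ observation + b1)
    logits = W2 @ h + b2
    return int(np.argmax(logits))  # e.g., pick one of 3 hypothetical NPC actions

action = infer(rng.normal(size=4))
assert action in (0, 1, 2)
```

Because the whole loop is local arithmetic on shipped weights, the same pattern scales from mobile to console to web; the engineering challenge a runtime like Barracuda addresses is making such passes fast on each device's hardware.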
We have some ideas – NPCs that come to life, diffusion content as a gameplay mechanism, boundaryless user-generated content – but we know that our creators will do far more with this technology than we could ever dream.
Most of the digital content in the world today is 2D and linear – think sprites, photos, a set of film frames, a rendering of a building floor plan, or source code. AI data models train on this information to learn and, in the case of generative AI, to create content.
Unity enables the real-time training of models based on unique datasets produced in the creation and operation of RT3D experiences. Through this training, we can build ever-richer services on top of Unity and provide extraordinary capabilities for our partners to leverage Unity as a data creation, simulation, and training engine for their own needs. Natural-language AI models incorporated into the Unity Editor and runtime train on real code and images. That real-usage training data is abstracted from its initial use (it’s not captured or recorded as-is), but this learning enables Unity’s customers to substantially increase their productivity.
The Unity Network, which consists of our analytics tools, ad networks, publishing systems, and cloud services, reaches a combined total of more than 4B users each month. Each of these services yields data that we can use to help our customers massively improve how they attract new users, increase engagement, and drive greater revenue from that base. Unity has been using the power of neural networks to continuously optimize systems for user acquisition, engagement, and monetization for over three years.
Generative AI has been used in some form or another for much of the history of video games, and it has tremendous potential as a tool to help developers achieve more with fewer resources. We’ll be sharing more over the coming months about our vision for AI at Unity, what we’re working on, and how this technology can help you achieve your vision.
Stay tuned to the blog for more about Unity and AI, and, if you haven’t already, sign up for the AI Beta Program to be the first to hear about new tools and services.