As digital humans enter our world, one familiar face stands out to many Unity users. It’s Sua, the digital human who kicked off a new phase as a digital celebrity last June. This post takes a close look at Sua, Unity’s very first real-time digital human and the official face of Unity Korea.
At a time when increasing numbers of realistic digital humans are emerging, many as influencers, Sua is attracting more attention than ever. Hyeong-il Kim, CEO of On Mind, addressed some of the public’s curiosity at Unite Seoul 2020, introducing current advancements in graphics technology and demonstrating the process of building a virtual human in his session, “Meeting the Digital Celebrity, Sua.”
Digital humans are also called virtual humans, virtual influencers, virtual YouTubers, and so on. These names all imply that they are “digital” beings that resemble humans. The term “digital celebrity” refers, of course, to digital humans like Sua who gain a following in popular culture.
On Mind is doing everything it can to make sure that Sua has a fruitful career. The aim is for her to be a multitalented model, singer, actor, and more. As Kim says, “We’re keeping all doors open for Sua. She is no different from a newcomer about to make her debut. In order to become well-loved, she needs to go through the same process of gradually growing in fame and recognition as more and more people show her love and attention, much like typical celebrities.”
Sua is receiving attention on a global scale because she responds live, thanks to real-time rendering in Unity. She is the first-ever digital human whose entire body moves in real time.
The strengths of real-time rendering are speed and interactivity. Movies and animations generally use offline rendering instead: they create moving pictures by playing back pre-rendered images as a sequence of frames. Producing high-quality graphics this way takes many hours of computing and rendering after the content creator configures the camera angle and other details for each scene.
In contrast, real-time rendering lets Sua appear live in 4K and interact with content. At a minimum of 30 frames per second, Sua’s speed and rendering quality provide significant advantages over offline rendering.
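To put those frame rates in perspective, the time budget per frame is simple arithmetic. The sketch below is an illustrative calculation, not anything from On Mind: it just shows how tight the rendering window is at real-time frame rates compared to offline rendering, where a single frame may take minutes or hours.

```python
def frame_budget_ms(fps: float) -> float:
    """Milliseconds available to render one frame at the given frame rate."""
    return 1000.0 / fps

# Real-time: at Sua's minimum of 30 fps, every frame must finish in ~33 ms.
print(f"30 fps -> {frame_budget_ms(30):.1f} ms per frame")
print(f"60 fps -> {frame_budget_ms(60):.1f} ms per frame")
# Offline rendering has no such per-frame deadline, which is why it can
# produce film-quality images but cannot respond to a viewer live.
```

This deadline is what separates the two approaches: a real-time character must do all lighting, shading, and animation within that budget on every single frame.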
Kim also said that real-time interaction is what makes Sua special. “Unlike others we’ve seen so far, Sua is a real-time digital human who interacts in the present. This field is seeing a lot of research and development from various companies creating digital humans.” He believes that others will someday achieve outcomes similar to Sua’s, but for now, he is proud that his company is the world’s first to develop a high-quality, real-time digital human.
On Mind achieved Sua’s realistic rendering with the help of Unity’s High Definition Render Pipeline (HDRP). HDRP is a renderer created with Unity’s Scriptable Render Pipeline (SRP) for high-definition projects and operates on the following principles:
Rendering is physically based.
Lighting is unified and coherent.
Features function independently of the rendering path.
In the past, users could not modify the rendering pipeline. However, with SRP you can make direct changes to it, and HDRP offers a template for that.
Kim states that the robust performance of HDRP played a significant role in making Sua as lifelike as she is. Her skin was rendered using Shader Graph’s StackLit Shader, and her hair was rendered with the Hair Shader. Simply connecting the textures to Unity’s default shaders yielded high-fidelity visuals via HDRP.
With HDRP providing physically accurate lighting and the advent of hardware that supports powerful compute shaders, Kim believes that On Mind has managed to render a lifelike model that doesn’t fall into the uncanny valley.*
*The theory that a highly realistic humanoid will provoke revulsion in viewers.
In addition to HDRP, rendering Sua in real time required other advanced technology, such as motion capture of facial features and fingers, and virtual cameras. On Mind used an iPhone for facial motion capture to render natural expressions: it was powerful enough for real-time capture, and its blend shapes mapped well onto Sua’s manually modeled face. Extra care went into the rigging settings to avoid the common mocap problem of feet not touching the ground, and a finger-tracking system ensured natural-looking, fine finger movements.
On Mind anticipates that further advancements in AI and digital human technology will soon increase demand for “digital humans as a service.” As software-side AI functions such as voice recognition, voice synthesis, image recognition, gesture recognition, and chatbots become more natural, they will likely improve service accuracy and customer satisfaction, opening a new era of AI use in commercial settings.
While some may be skeptical about the general public fully accepting digital humans, some examples show it is indeed possible. A leading Korean entertainment company recently garnered a lot of attention with the introduction of a new K-pop girl group featuring real members and digital members interacting in a virtual world. The days of digital humans becoming real celebrities may be upon us sooner than we think.
Find out more about Sua on the creator’s Twitter channel.