
How Half Mermaid designed IMMORTALITY’s match cut mechanic

December 13, 2022 in Games | 16 min. read

Three-time Bafta award-winning writer and director Sam Barlow (Her Story) visits the Unity Blog to share how Half Mermaid (Telling Lies) achieved the team’s game design goals for its latest title, IMMORTALITY. Released this August, the experimental narrative-driven single-player is already gaining recognition for its innovative storytelling and match cut gameplay, including an Official Mention at Tribeca 2022 and nominations for IGN’s Best Game of the Year and Best Game Direction, Best Narrative, and Best Performance at The Game Awards 2022.

When Half Mermaid started to conceive of IMMORTALITY, we gave ourselves the goal of creating a game that would allow players to get up close with the medium of cinema in a tactile and meaningful way.

We started by exploring mechanics for playback, giving players control over the playing of our film footage in a way that would foreground the analog nature of film. We let players slow down and reverse the footage to remind them that the illusion of motion and life in a movie is a trick assembled from 24 pictures a second. And with sounds sampled from real-life Moviola machines, reinforced by the physical clicks of players' mice, keyboards, and controllers, we evoked the mechanical nature of film.

Alongside these gameplay elements, our team’s attention to detail on set when filming our footage allowed us to showcase the chemical (light and celluloid!) and optical (lenses, so many lenses!) aspects of classic film.

This left one important ingredient: magic.


Concepting and preproduction

The magic of cinema is famously located in the cut. When a movie splices together two shots, it is able to teleport across time and space, to create subjectivity, make connections between images, ultimately to tell its story.

Generally, video games avoid this magic – outside of cutscenes, most games cling to a continuity of space, time, and camera. So clearly what would make IMMORTALITY special would be finding a way to conjure up the magic of the cut – and make it part of our gameplay.

Our more prosaic ideas didn’t seem like they’d work. For example, gamifying an editing interface – laying out the clips on a timeline and engineering a cut – would rob the cut of its energy and surprise. Instead, we wondered about giving the player the ability to conjure a cut out of thin air – could we give them a magic wand they could wave to make a cut happen?

It would need to be more precise than just saying “cut here,” however. There are so many potential cuts: How could we make things more intentional? That’s when we started to think about the idea of the match cut – cutting between two images or actions that are similar (e.g., here or here) – and connecting that to the genre of photography games.

IMMORTALITY is a game about exploring hundreds of pieces of lost film footage

Designing through a camera lens

In photography games, players are tasked with pointing a camera (like a gun reticle in an FPS) at a specific subject in order to best capture them. There are many things to photograph, and there is a degree of aesthetic skill in framing and picking the right moment. We had the idea that we could allow players to scrub through the footage (like walking back and forth along a path in a photo game) and pause to point to a specific item, in the way that an editor might pick a specific frame for their cut.

When the player selects a specific image in IMMORTALITY, the game will then cut to another scene in which that image appears. In this way, players can express their interest in certain characters, items, or pieces of imagery. We were fans of the ‘supercut video’ (e.g., here and here), in which people cut together sequences of recurring imagery across movies by specific directors – highlighting the obsessions and visual threads that run through their work. 

Turning this concept into a gameplay mechanic seemed like it would deliver on the magic we were looking for. It would give a great deal of expression and control to the player – they choose when to cut and what to cut off of, amongst thousands of options – but also pair it with some magic and surprise – the game would decide where and when to cut to. It would be a dance between the game and the player.

An example of a match cut framing and cross fading between two scenes

Establishing core gameplay

The basic mechanics of this idea came pretty easily – after some experimentation, we decided to make match cutting its own “mode,” to give it more importance, rather than just a basic mouse click or screen tap.

When the player enters ‘image mode,’ we add some barrel distortion and vignetting to surround the player’s cursor, evoking the idea of a director examining film with a loupe (a special handheld magnification device used to examine film negatives) whilst also recalling the idea of a conventional FPS sniper scope mode.
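The loupe effect described above is essentially a screen-space distortion centered on the cursor. The shipped game implements this in Unity's rendering pipeline, but the underlying math can be sketched in a few lines. The function names, coefficients, and falloff values below are illustrative assumptions, not the game's actual shader:

```python
import math

def loupe_uv(uv, cursor, k=0.3):
    """Barrel-distort a screen coordinate around the cursor position.

    uv, cursor: (x, y) pairs in 0..1 screen space. The classic
    barrel-distortion term scales each sample by (1 + k * r^2),
    pushing sample coordinates outward near the cursor like a loupe.
    """
    dx, dy = uv[0] - cursor[0], uv[1] - cursor[1]
    r2 = dx * dx + dy * dy
    scale = 1.0 + k * r2
    return (cursor[0] + dx * scale, cursor[1] + dy * scale)

def vignette(uv, cursor, radius=0.4, softness=0.2):
    """Brightness factor (1 = full, 0 = dark) falling off away
    from the cursor, producing the surrounding vignette."""
    r = math.dist(uv, cursor)
    t = (r - radius) / softness
    return 1.0 - min(max(t, 0.0), 1.0)
```

In a real shader both functions would run per pixel on the GPU; the sketch only shows the per-coordinate arithmetic.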

This process of having players pause the footage and then shift modes obviously removes the fluidity of a montage, but it also takes them closer to the POV of a film editor and makes them (perhaps subconsciously) think about the process. While this part came easily, we then had to tackle the tricky technical aspect – how to actually pull off the magic trick of the cut itself.

To make match cuts, the game would have to know what was on screen in every frame, and be able to connect each item to others of its type, as well as make sensible decisions about where and when to cut to.

Entering ‘image mode’ adds a subtle vignette and barrel distortion around the player’s cursor

Finding inspiration in filmmaking techniques

We looked into the various methods (some AI-powered) for image recognition and realized that what we needed was very specific and could not allow much room for error. If a player clicked on a flower in a specific frame, it would break the magic if the game decided they had clicked on a feather. If the game attempted to frame a match cut, but the framing was off by even a small amount, it would not feel elegant or conjure the magic of cinema.

In trying to find a method that would give us a degree of control, but also scale across the entire project – hours and hours of footage! – we looked at movie post production. There are many tools for automating and aiding the tracking of objects across film for use in VFX, matting, and so on. We created a plug-in that would allow us to export tracking masks from Adobe After Effects and pull them over into Unity.

Here we see the image masks after being imported into the Unity game engine

Handling object tracking

One of the first things that we saw was that, in post-production, where everything is run offline and on supercharged render computers, there’s no economy regarding frames and polygons – if you track a face, you are going to get data that tracks every single frame, down to a level of detail that is pixel perfect. That volume of data was not going to work for us in real-time, so we created an interim step that would take the After Effects data and compress it, allowing us to define thresholds for how much accuracy and movement would be allowed, reducing it to an animated simple polygon that would still accurately map to each object as it moved across the screen.
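The compression step described above can be sketched as keyframe thinning: keep a tracked polygon snapshot only when it has drifted far enough from the last kept snapshot, and interpolate between keyframes at runtime. This is a minimal illustration under assumed data shapes and thresholds, not Half Mermaid's actual importer:

```python
def thin_keyframes(frames, move_threshold=2.0):
    """Reduce per-frame tracking data to sparse keyframes.

    frames: list of (frame_index, [(x, y), ...]) polygon snapshots,
    one per film frame, as exported from the tracking tool.
    Returns only the frames where the polygon moved more than
    move_threshold pixels; intermediate positions can be rebuilt
    by interpolation at runtime.
    """
    if not frames:
        return []
    kept = [frames[0]]
    for frame in frames[1:]:
        last_poly = kept[-1][1]
        poly = frame[1]
        # Maximum vertex displacement since the last kept keyframe
        drift = max(
            ((x - lx) ** 2 + (y - ly) ** 2) ** 0.5
            for (x, y), (lx, ly) in zip(poly, last_poly)
        )
        if drift >= move_threshold:
            kept.append(frame)
    # Always keep the final frame so the track covers the full shot
    if kept[-1][0] != frames[-1][0]:
        kept.append(frames[-1])
    return kept
```

A static object collapses to just its first and last frames, while a fast-moving one keeps nearly every frame – which is exactly the accuracy/size trade-off the thresholds control.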

The system had to deal with overlapping items. When tracking masks, most post-production flows will track an object when it is obscured by a foreground element, but for our gameplay purposes we only want to click things that are visible. Our system would distinguish between occlusion by larger or smaller objects.

For example, if a small candle passes in front of a character’s face, it will not prevent the face from being tracked; but if the small candle were to pass behind their face, it would be removed from the data.
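The candle-and-face rule can be approximated by measuring how much of an object's area is covered by masks in front of it: a small prop crossing a face covers only a sliver of it, while a large foreground object hides it entirely. The following sketch uses bounding boxes and an assumed coverage threshold; the real system works on the tracked polygons themselves:

```python
def overlap_area(a, b):
    """Intersection area of two axis-aligned boxes (x0, y0, x1, y1)."""
    w = min(a[2], b[2]) - max(a[0], b[0])
    h = min(a[3], b[3]) - max(a[1], b[1])
    return max(w, 0) * max(h, 0)

def area(box):
    return (box[2] - box[0]) * (box[3] - box[1])

def visible_objects(objects, hide_fraction=0.5):
    """Filter out objects mostly hidden behind foreground masks.

    objects: list of (name, depth, box) where a lower depth value
    means nearer to the camera. An object stays clickable as long
    as less than hide_fraction of its area is covered by anything
    in front of it. All names and thresholds are illustrative.
    """
    visible = []
    for name, depth, box in objects:
        covered = sum(
            overlap_area(box, other_box)
            for _, other_depth, other_box in objects
            if other_depth < depth
        )
        if covered / area(box) < hide_fraction:
            visible.append(name)
    return visible
```

With a face at (0, 0, 10, 10) and a thin candle mask crossing it, the face survives the filter; flip the depth order so the candle sits behind the face, and the candle is removed from the clickable set.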

Tagging the footage

Once we had the objects being tracked in the footage, we then created a database which the importer would help populate and where designers could add information to help the code. Each object would be given an ID (e.g., “apple”, “robert_jones”), which would then have some stats attached (how it related to our game’s themes; this information was also used to drive the music system) and the groups it belonged to (so, for example, an apple and an orange were both “fruit,” whilst “robert_jones” and “robert_jones_ambrosio” – Robert Jones playing the part of Ambrosio – were both instances of “robert_jones”).
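A database entry like the one described above boils down to an ID, a set of groups, and some thematic stats, plus a lookup that widens a clicked object to its relatives. This is a minimal sketch of that shape – the field names and the `related_ids` helper are assumptions for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class TrackedObject:
    """One entry in the tagging database (fields are illustrative)."""
    object_id: str                              # e.g. "apple", "robert_jones_ambrosio"
    groups: set = field(default_factory=set)    # e.g. {"fruit"} or {"robert_jones"}
    themes: dict = field(default_factory=dict)  # thematic stats; also feeds the music system

def related_ids(database, object_id):
    """All other objects sharing at least one group with the clicked one,
    e.g. clicking an apple also surfaces oranges via the "fruit" group."""
    groups = database[object_id].groups
    return {
        other_id
        for other_id, obj in database.items()
        if other_id != object_id and groups & obj.groups
    }
```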

With this information, we then set about creating the algorithm that would determine how to make a match cut. Given an object, the code would pull up all instances of that object (or related objects – so, from the example above, clicking on an apple might also pull up oranges) and then, for all the possible frames, assess:

  • Is the size and position of the object good? We want to cut to something that is nicely framed.
  • Has it just entered the shot, or is it about to leave? We want an image that is settled onscreen.
  • Are we stepping on any dialogue? Ideally, we don’t cut into a scene in the midst of speech.

Much of this was precalculated offline at the point where the mask data was imported into Unity, and then attached to the scene – so the game could quickly pull up a collection of pre-vetted frames at runtime when it needed to cut instantly.
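The three checks in the list above can be sketched as a single pre-vetting pass over one object's track. The data shapes, thresholds, and function name here are assumptions, not the shipped code:

```python
def vet_frames(track, dialogue_ranges, min_size=0.05, settle_margin=12):
    """Pre-vet frames of one tracked object as match-cut destinations.

    track: dict mapping frame -> (size_fraction, center_offset), where
    size_fraction is the object's share of the screen and center_offset
    its normalized distance from frame center (0 = perfectly centered).
    dialogue_ranges: list of (start, end) frame spans containing speech.
    """
    def in_dialogue(f):
        return any(s <= f <= e for s, e in dialogue_ranges)

    first, last = min(track), max(track)
    vetted = []
    for f, (size, offset) in track.items():
        if size < min_size or offset > 0.5:
            continue  # too small or badly framed
        if f - first < settle_margin or last - f < settle_margin:
            continue  # just entered the shot, or about to leave it
        if in_dialogue(f):
            continue  # don't cut into the middle of speech
        vetted.append(f)
    return sorted(vetted)
```

Running a pass like this offline at import time is what lets the runtime side simply pick from a short list of pre-approved frames.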

A closeup on Marissa Marcel during a transition scene

Making the cut

When a cut needs to be made, IMMORTALITY’s gameplay systems weigh each possible destination against the others, to decide which might be best at that moment:

  • The game would try to match a destination scene with the player’s current thematic thread (what they watch and click on accumulates a general “thematic score” over time).
  • The game would weigh exact matches (apples to apples) over type matches (apples to oranges).
  • The game would be inclined to pick a fresh scene – this would be weighted more heavily the further in the game the player has traveled.
  • There are a few cyclical ‘heartbeats’ in the game that cycle between objectives; depending on where this ‘beat’ is, the game might weight scenes with more dramatic/plot importance, as well as those containing ‘spooky elements’ (the longer the player goes without being spooked, the more weighting is applied here).

These weightings were combined, mixed, and emphasized in an algorithm that we continued to tweak throughout development and test against collected playtest data.
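The four weightings above can be combined into a single score per candidate destination, with the winning candidate chosen at cut time. This sketch uses plain dicts; every field name and weight value is an illustrative assumption – the real coefficients were tuned against playtest data:

```python
def score_destination(cand, player, weights=None):
    """Combine the weighting factors into one score for a candidate cut.

    cand: candidate destination scene/frame; player: accumulated state.
    """
    w = weights or {"theme": 1.0, "exact": 2.0, "fresh": 1.5, "beat": 1.0}
    score = 0.0
    # 1. Thematic thread: overlap of the player's accumulated thematic
    #    score with the candidate scene's themes.
    score += w["theme"] * sum(
        player["themes"].get(t, 0.0) * v for t, v in cand["themes"].items()
    )
    # 2. Exact matches (apple -> apple) beat type matches (apple -> orange).
    if cand["exact_match"]:
        score += w["exact"]
    # 3. Prefer fresh scenes, more strongly the deeper into the game we are.
    if not cand["seen"]:
        score += w["fresh"] * player["progress"]
    # 4. Cyclical 'heartbeat': boost spooky scenes, scaled by how long
    #    the player has gone without being spooked.
    if player["beat"] == "spooky" and cand["spooky"]:
        score += w["beat"] * player["frames_since_spook"] / 1000.0
    return score
```

At runtime the game would evaluate every pre-vetted candidate this way and cut to the highest scorer, which is what makes the choice feel both responsive to the player and a little unpredictable.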

Screenshot from the set of IMMORTALITY’s detective film, Minsky

Testing the experience

Once we had the process working, we created a series of tests using clips from existing movies to prove out the pipeline and refine the various thresholds. From here, the process was established – for each clip, the Half Mermaid team would review and tag a single instance of each object within the scene, giving it a unique ID. Then, a team of rotoscopers would take the scene and fully track all the objects – exporting the resulting huge pile of data into the game-ready format.

The Half Mermaid team would then review in game using debug tools we created to allow us to quickly jump through every item in a scene, cycling through frames the game code highlighted as being good potential cuts. The rotoscoping would always be reviewed in game to ensure nothing strange had been introduced in compression, and to further verify that the match cut algorithm was picking out ‘good’ frames against the final data.

By creating a data-driven system as described above, we were able to create a mechanic that was rich and player-responsive – it wasn’t reliant on designers tagging connections, or picking out the cuts that they had anticipated. As shipped, IMMORTALITY has over a million possible cuts that it can make – far too many connections for us to have laid down by hand. And yet, the system is capable of creating magic!

When we started prototyping these elements, we set a target of one cool cut per play session, reasoning that players would remember the cool cuts and carry that forward with them.

Engaging the audience

When we started testing IMMORTALITY with real people, we saw that the hit rate was much higher – the combination of footage and system was able to almost constantly surprise players with interesting connections and juxtapositions.

What often makes my games stand out next to other narrative games is an ethos of creating large data structures and then handing over the rest to the player’s intuition and a robust algorithm – rather than a predefined set of branches and choices. This makes the experience of both creating and playing the game more exploratory and surprising. IMMORTALITY is perhaps the most magical example of this thinking in practice we’ve made yet!

Half Mermaid’s IMMORTALITY is available now on Xbox Series X|S, PC, macOS, and iOS and Android via Netflix Games. For more inspiration and resources, visit Unity’s Create Games hub.
