Creating and shipping a multiplayer game isn’t easy. It often involves many more moving parts than a singleplayer game, because everything needs to be synchronized across players to create one consistent, shared reality.
One core piece of the multiplayer development puzzle is the netcode – the parts of the code that handle the “how” and “what” of communication between game players and the server.
However, the term “netcode” often gets a bad rap – it’s frequently what gets blamed for latency and poor multiplayer experiences.
Today, we’re diving into those common misconceptions and separating netcode fact from fiction so you can network your next multiplayer game with confidence.
Netcode is an umbrella term for the parts of a game that handle networking and synchronization between clients and servers.
In a multiplayer game, servers and clients communicate with each other by sending packets over the network. To create a shared reality between gamers connecting across distances, gameplay events such as moving a character or spawning an object get synchronized to other clients by sending a data packet to them. The part responsible for sending and receiving packets over the network is what's called a transport.
While it is possible to manually send those packets by calling the send functions of your transport directly, this pattern can quickly become overwhelming for programmers with little multiplayer experience.
A netcode library abstracts the sending of packets away from gameplay code with features such as networked variables and Remote Procedure Calls (RPCs).
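As an illustration, a networked-variable abstraction might look roughly like the sketch below. All names here are invented for the example and don't correspond to any particular library's API; the idea is simply that gameplay code assigns a value, and the netcode layer serializes it only when it has changed.

```python
import struct

class NetworkVariable:
    """Minimal sketch of a networked variable: gameplay code writes the
    value; the netcode layer serializes it only when it has changed."""

    def __init__(self, initial=0):
        self._value = initial
        self._dirty = False

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new_value):
        if new_value != self._value:
            self._value = new_value
            self._dirty = True  # mark for the next network tick

    def write_delta(self):
        """Return bytes to send this tick, or None if nothing changed."""
        if not self._dirty:
            return None
        self._dirty = False
        return struct.pack("<i", self._value)  # 4-byte little-endian int

health = NetworkVariable(100)
health.value = 90              # gameplay code just assigns a value
packet = health.write_delta()  # the netcode layer picks up the change
```

The point of the abstraction is that the gameplay programmer never touches the transport: assigning `health.value` is all it takes, and the library decides when and what to send.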
1. “We can always convert to multiplayer later”
🚫 Fiction: In the development cycle of a game, multiplayer can be added on top at a later stage of development.
✅ Fact: Multiplayer can be challenging to implement. Games should account for multiplayer as early as possible in design and development if you ever want players to have a multiplayer experience.
Why? Multiplayer touches pretty much every aspect of gameplay, so it naturally impacts every stage of development. For example, if you have an inventory system in a single-player game, a multiplayer version would need to synchronize inventory items with the server.
There are also many things that are fairly easy to implement in a single-player experience but cause headaches when you try to put them into a multiplayer game.
Have you ever wondered why most multiplayer games use kinematic character controllers and have very minimal physics interactions? It’s done that way because implementing a physics simulation that is shared between multiple peers and predicting physics can be a real headache, even for experienced developers.
A good piece of general advice is to check early whether your features are compatible with multiplayer – especially if your game has a unique mechanic not commonly used in other games.
Check out Breakwaters by Soaring Pixels Games for an example of how they implemented multiplayer from the beginning and why that was so important for their small-scale cooperative title.
2. "Lower latency is always better"
🚫 Fiction: Lower latency is always better for multiplayer games. The lower the lag, the better the gameplay experience.
✅ Fact: While keeping latency low is important for delivering a smooth experience to the player, it is just as important to deliver a consistent one. Synchronizing state to create one shared reality may add small amounts of delay that are unobservable to players in the overall experience.
Why? Delivering a smooth and consistent experience to all players does not always mean chasing the lowest possible latency.
The most commonly used technique for improving the smoothness and consistency of the game is buffering.
Instead of processing incoming data packets from the network immediately, the packets are put in a queue. During each tick (a single update of a game simulation), the client then takes (ideally) one item from the queue while trying to maintain a certain size of buffered elements in the queue.
This ensures that when the server sends one packet per tick, the client also ends up processing exactly one packet per tick.
But why is this necessary? Wouldn't the client receive one packet per tick anyway if it processed incoming packets immediately? Under perfect network conditions, yes, but in practice each packet traveling over the network can have a different transmission delay.
This fluctuation of the Round Trip Time (RTT) per packet is called jitter. Buffering is a technique which increases latency but reduces jitter, which ultimately often improves the player experience by providing more consistency.
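The buffering loop described above can be sketched as follows. This is a minimal illustration with invented names, not the API of any particular netcode library: packets are queued as they arrive, and the simulation pops at most one per tick once enough packets are buffered.

```python
from collections import deque

class JitterBuffer:
    """Sketch of a receive-side jitter buffer. Packets are queued as they
    arrive; the simulation consumes one per tick and tries to keep
    `target_size` packets buffered to absorb jitter."""

    def __init__(self, target_size=2):
        self.target_size = target_size
        self.queue = deque()
        self.primed = False

    def on_packet_received(self, packet):
        self.queue.append(packet)

    def pop_for_tick(self):
        """Return one packet to process this tick, or None to stall."""
        if not self.primed:
            if len(self.queue) < self.target_size:
                return None      # still filling: this is the added latency...
            self.primed = True   # ...but from now on bursts and gaps even out
        if not self.queue:
            self.primed = False  # buffer ran dry; refill before resuming
            return None
        return self.queue.popleft()
```

With `target_size=2`, the first packet sits in the queue for one extra tick, but a late-arriving packet no longer causes a visible hitch – the trade described above: a little more latency for a lot less jitter.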
Example: Fighting games often contain moves that involve quickly pressing a series of buttons in the right rhythm. Players learn these moves through muscle memory by executing them over and over again. For a fighting game to feel fair, it is very important that the resulting action of the player character is consistent with the input that was given.
So what do many fighting games do to achieve this consistency? They poll for inputs at a fixed rate and then buffer these inputs just for a little bit. By doing that, they map player input consistently onto the gameplay frame. The average input latency increases but the delay gets way more consistent.
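The fixed-rate polling and input delay described above could be sketched like this. The names and the delay value are illustrative, and real fighting games tune the delay carefully, but the mechanism is the same: every input is scheduled a fixed number of frames into the future, so it always lands on a predictable gameplay frame.

```python
from collections import deque

INPUT_DELAY_FRAMES = 3  # illustrative; real games tune this per title

class InputBuffer:
    """Sketch of fixed-delay input buffering for a fighting game."""

    def __init__(self):
        self.queue = deque()

    def poll(self, frame, pressed_buttons):
        # Sampled at a fixed rate; apply this input a fixed delay later.
        self.queue.append((frame + INPUT_DELAY_FRAMES, pressed_buttons))

    def inputs_for_frame(self, frame):
        """Return every input scheduled to apply on or before this frame."""
        applied = []
        while self.queue and self.queue[0][0] <= frame:
            applied.append(self.queue.popleft()[1])
        return applied
```

An input polled on frame 0 applies on frame 3, every time – the average latency goes up slightly, but the delay between press and action is constant, which is exactly what muscle memory needs.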
While adding more buffering makes the game smoother, too much of it adds delay, and the player ends up feeling that their inputs are disconnected from the gameplay on screen.
There are different techniques that can be applied to get the smoothness of buffering without the latency penalty. A client-authoritative game applies the local player's inputs to their character immediately, which minimizes delay for the local player. Clients can still buffer the data for remote players to display them smoothly.
While this approach often feels great to the player, it can cause other issues because it makes it much easier to cheat.
For competitive games, a technique called client-side prediction can be used. The client applies the local player's inputs immediately, but the server also simulates the player's actions by applying the same inputs, checks whether the client executed a valid move, and corrects the client if necessary.
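A rough sketch of client-side prediction with server reconciliation follows, assuming a deterministic movement step that both client and server share. All names are illustrative: the client predicts immediately, remembers its unacknowledged inputs, and replays them on top of each authoritative state from the server.

```python
def simulate(position, move_input):
    """Shared deterministic move step; both client and server run this."""
    return position + move_input

class PredictingClient:
    """Sketch of client-side prediction and reconciliation."""

    def __init__(self):
        self.position = 0
        self.pending = []   # inputs not yet acknowledged by the server
        self.sequence = 0

    def apply_input(self, move_input):
        # Predict immediately so the local player sees no delay.
        self.sequence += 1
        self.position = simulate(self.position, move_input)
        self.pending.append((self.sequence, move_input))
        return (self.sequence, move_input)  # this pair is sent to the server

    def on_server_state(self, ack_sequence, server_position):
        # Drop acknowledged inputs, then replay the rest on top of the
        # authoritative state so any server correction is reconciled.
        self.pending = [(s, i) for s, i in self.pending if s > ack_sequence]
        self.position = server_position
        for _, move_input in self.pending:
            self.position = simulate(self.position, move_input)
```

If the server agrees with the prediction, the replay lands the client exactly where it already was and nothing visibly changes; if the server rejected a move, the client snaps to the corrected state with its remaining inputs reapplied.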
3. “Bandwidth is free”
🚫 Fiction: My broadband contract is so cheap that it must mean bandwidth is free.
✅ Fact: Bandwidth is not free, and the cost can vary between different regions, with some regions charging significantly more for bandwidth used than others.
Why? Private broadband contracts are generally fairly cheap compared to the rates that apply to commercial servers. Your broadband contract is so cheap because most people use only a fraction of their bandwidth, and infrequently at that. Commercial servers are very different. They usually run during most hours of the day, and game servers often support the traffic of hundreds of players. For that reason, it is quite common for hosting companies to charge on a per-gigabyte-used basis.
What this means for you is that saving bandwidth can be quite important for reducing operating costs. In addition, lower bandwidth usage allows players with slower internet connections to better enjoy your game.
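One common way to save bandwidth is to quantize values before sending them. As a sketch (the range and precision here are assumptions you would tune per game), a world coordinate with 1 cm precision in roughly [-327.68, 327.67] meters fits in 2 bytes instead of the 8 a raw double would take:

```python
import struct

def quantize_position(x):
    """Encode a coordinate as a signed 16-bit fixed-point value (1 cm steps)."""
    return struct.pack("<h", round(x * 100))

def dequantize_position(data):
    """Decode a quantized coordinate back to a float."""
    return struct.unpack("<h", data)[0] / 100

# A raw double takes 8 bytes on the wire; the quantized coordinate takes 2.
payload = quantize_position(12.34)
restored = dequantize_position(payload)
```

Applied across every position, rotation, and velocity a game replicates each tick, savings like this compound quickly.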
One of the most common causes of lag in multiplayer games is congestion on the user's home network. While this often happens because there is other heavy traffic on the network, such as video streaming, reducing the bandwidth cost of a game can still help improve the player experience.
Building a multiplayer game is a challenging endeavor, but also an exciting one. Whether you’re building the next battle royale smash hit, or a cozy online co-op, understanding the nuances of multiplayer networking is essential.
Be sure to download the full e-book if you want to find out more common traps in multiplayer networking.