In the first post of this series covering the complete landscape of Unity Realtime Multiplayer, we'll cover networking basics and key considerations for delivering acceptable player experiences. In particular, we'll talk about network speeds, the infrastructure behind them, possible delays, and methods of dealing with those issues.
Network interaction is critical for most modern games, whether mobile, console, PC, or VR. It doesn't matter if you're creating a simple multiplayer game or an ambitious MMO: network programming knowledge is key.
Hello everyone, I'm Dmitrii Ivashchenko, a Lead Software Engineer at MY.GAMES. This series of articles on the "Unity Networking Landscape in 2023" will cover critical aspects and constraints of network environments, delve into various protocols (including TCP, UDP, and WebSocket), and highlight the significance of the Reliable UDP protocol. We'll explore the impact of NAT on real-time multiplayer games and guide you on preparing game data for network transmission.
We'll look at topics ranging from the basics to more advanced concepts like transport protocols, network architecture patterns, ready-made solutions for Unity, and more. We'll analyze both official Unity solutions and third-party tools to help you find the optimal choice for your projects.
In this first post, we'll cover the critical elements of network programming and look at the obstacles and issues developers often face when creating games that feature networking.
The Internet is a complex system comprising various devices, each with unique functions. Let's talk about some of those. Typically, an individual's connection to the Internet begins with a device such as a computer or a smartphone. These connect to a local network through routers or modems, which enable communication between the local network and the ISP.
The ISP has larger routers and switches that manage traffic from multiple local networks, and these devices comprise the backbone of the Internet, which includes a complicated network of high-capacity routers and fiber-optic cables spanning continents and oceans; separate companies known as backbone providers are responsible for maintaining this backbone.
Additionally, data centers house powerful servers where websites, applications, and online services reside. When you request access to a website or online service, your request travels through this extensive network to the relevant server, and subsequently, the data is sent back along the same path.
Before diving into the world of TCP, UDP, Relay Servers, and real-time multiplayer game development, it's critical to have a solid understanding of network systems as a whole. This involves understanding the roles and functions of devices like hubs and routers and an awareness of any potential issues that can arise from the operation of these devices and mediums.
Network technologies aren't isolated from the physical world and are subject to several physical limitations: bandwidth, latency, connection reliability — all of these factors are important to consider when developing networked games.
Understanding these basic principles and constraints will help you better evaluate the possible solutions and strategies required for the successful network integration of your games.
Bandwidth is the maximum amount of data that can be transmitted through a network in a specific period. Data transmission speeds directly depend on the available bandwidth: the more bandwidth, the more data can be transmitted at once.
Bandwidth comes in two types: symmetric (with equal upload and download speeds) and asymmetric (with different upload and download speeds).
Symmetric connections are typically found on wired networks, such as fiber-optic networks, while asymmetric connections are common on wireless networks, as is the case with mobile data.
Bandwidth is usually measured in bits per second (bps) or multiples, such as megabits per second (Mbps). A high bandwidth means more data can be transmitted in less time, which is absolutely essential for real-time multiplayer games.
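To get a feel for these numbers, here's a minimal sketch (the function name and payload size are illustrative, not from the article) that converts a payload size and link bandwidth into transmission time:

```python
# Rough estimate of how long a payload takes to transmit at a given
# bandwidth (propagation delay and protocol overhead are ignored).

def transmit_time_ms(payload_bytes, bandwidth_mbps):
    """Milliseconds needed to push `payload_bytes` through a link of
    `bandwidth_mbps` megabits per second."""
    bits = payload_bytes * 8
    return bits / (bandwidth_mbps * 1_000_000) * 1000

# A 1200-byte state snapshot on a 10 Mbps uplink:
print(round(transmit_time_ms(1200, 10), 2))  # 0.96 ms
```

The takeaway: for typical game snapshots, transmission time is tiny compared to the propagation delays discussed below, so bandwidth matters mainly for update size and frequency, not per-packet latency.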
RTT, or Round-Trip Time, measures the time it takes for a data packet to travel from the sender to the receiver and then back again. This is an essential metric in networked games as it affects the latency that players may experience during gameplay.
When RTT is high, players may experience delays which can negatively impact gameplay. Therefore, game developers should strive to minimize RTT to provide a smoother and more responsive gameplay experience.
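Individual RTT samples are noisy in practice, so a smoothed estimate is usually tracked instead. A sketch using an exponentially weighted moving average, the same idea TCP uses for its SRTT estimate (the function name and weights here are illustrative):

```python
# Fold a stream of noisy RTT samples into one smoothed estimate.

def smooth_rtt(samples_ms, alpha=0.125):
    """Exponentially weighted moving average: each new sample
    contributes `alpha` (12.5% by default) to the estimate."""
    srtt = samples_ms[0]
    for sample in samples_ms[1:]:
        srtt = (1 - alpha) * srtt + alpha * sample
    return srtt

# A single 200 ms spike barely moves the estimate:
print(smooth_rtt([100, 100, 100, 200]))  # 112.5
```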
A network delay (often referred to as "lag") is the time required to transmit a data packet from sender to receiver. Even small network delays can significantly affect gameplay in games with high responsiveness requirements, such as first-person shooters.
Although data is transmitted at speeds close to the speed of light, distance can still affect the system and cause delays. Delays often arise due to the infrastructure required for the Internet to function, and they cannot be eliminated. This can happen for reasons related to transmission through physical cables, delays in network devices such as routers and switches, and processing delays on sending and receiving devices. That said, this infrastructure can still be optimized to reduce delays.
Let's talk about how the means of data transmission impacts network latency. Data transmitted as light via optical fibers isn't transferred at exactly the speed of light. In reality, light in optical fiber travels slower than it would in a vacuum, since the material of the fiber has an effect on speed.
(The speed of light is approximately 299 million meters per second, or about 186 thousand miles per second, but this maximum is only reached under ideal vacuum conditions.)
So, with optical fiber, light travels at a slower rate, relatively speaking. Let's also note that data transmission through copper wiring is significantly slower compared to optical fiber: optical fibers offer greater bandwidth and are less susceptible to interference than copper wires.
| Route | Distance | Time (Speed of light) | Time (Optical fiber) | RTT |
|---|---|---|---|---|
| Amsterdam - London | 360 km | 1 ms | 2 ms | 4 ms |
| Amsterdam - New York | 5850 km | 20 ms | 29 ms | 58 ms |
| Amsterdam - Beijing | 7800 km | 26 ms | 39 ms | 78 ms |
| Amsterdam - Sydney | 16700 km | 56 ms | 83 ms | 166 ms |
The table above assumes that data packets travel over optical fiber along a great circle between the cities, which, in reality, is rarely the case. Routed data packets most often pass through many intermediate points ("hops"), each of which adds a delay, so the actual travel time can be significantly longer. Even so, a data packet transmitted over optical fiber (at speeds approaching that of light) requires more than 150 milliseconds to complete the round-trip journey from Amsterdam to Sydney and back.
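The fiber column of the table can be reproduced from first principles. A sketch, assuming a refractive index of roughly 1.49 for the fiber (the value the table's numbers imply; typical silica fiber is close to this); the function name is illustrative:

```python
# Recompute the table's Amsterdam-Sydney row from distance and medium speed.
C_VACUUM_KM_S = 299_792          # speed of light in a vacuum, km/s
FIBER_INDEX = 1.49               # assumed refractive index of the fiber

def one_way_ms(distance_km, medium_speed_km_s):
    """One-way propagation time in milliseconds."""
    return distance_km / medium_speed_km_s * 1000

fiber_speed = C_VACUUM_KM_S / FIBER_INDEX    # ~201,000 km/s inside glass

print(round(one_way_ms(16700, C_VACUUM_KM_S)))    # 56 ms at vacuum light speed
print(round(one_way_ms(16700, fiber_speed)))      # 83 ms in fiber
print(round(one_way_ms(16700, fiber_speed) * 2))  # 166 ms round trip
```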
While people are not particularly sensitive to millisecond delays, research has shown that a delay of 100-200 ms is already noticeable to the human brain, and if it exceeds 300 ms, the brain perceives it as a slow reaction.
To keep network latency under 100 ms, content needs to be served from locations as geographically close to users as possible. We must carefully control the path data packets take and provide a clear route with as little congestion as possible.
Jitter is a variation or "fluctuation" in network delays; it describes a change in the delay time between successive data packets. When data packets arrive at irregular intervals, this indicates network transmission instability. This can be caused by various factors, including network congestion, changes in traffic, and equipment deficiencies.
Even if an average delay is deemed acceptable, high jitter can cause problems, especially in real-time applications such as online gaming, or those involving internet telephony where delay consistency is essential.
If the amount of jitter is too large, players may experience lag or "stuttering" when moving game characters or objects. This can also lead to packet loss, where data packets do not reach their destination or arrive too late to be useful.
Jitter can also affect the overall fairness of the game. For instance, if one player has high jitter and another does not, the latter will have an advantage because their actions will be registered and displayed faster.
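One simple way to quantify jitter is the average deviation of packet inter-arrival gaps from their mean. A minimal sketch (function name and timings are illustrative; real protocols such as RTP use a smoothed variant of this idea):

```python
# Quantify jitter as the mean absolute deviation of the gaps between
# consecutive packet arrivals (timestamps in milliseconds).

def jitter_ms(arrival_times_ms):
    gaps = [b - a for a, b in zip(arrival_times_ms, arrival_times_ms[1:])]
    mean_gap = sum(gaps) / len(gaps)
    return sum(abs(g - mean_gap) for g in gaps) / len(gaps)

# Packets meant to arrive every 50 ms, with one late arrival:
print(jitter_ms([0, 50, 100, 180, 200]))  # 15.0
```

Note that the average gap here is still 50 ms, i.e. the average delay looks fine; only the jitter metric reveals the instability.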
Packet loss is a situation where one or more packets of data fail to reach their destination. This can happen for various reasons, such as network issues, traffic overload, or equipment problems.
In real-time games, where up-to-date information is essential, packet loss can cause noticeable problems, including characters "freezing," disappearing objects, or game state inconsistencies among players.
Packet loss can lead to an outright interruption of gameplay, since necessary information may be lost during transmission.
Therefore, it's important to develop mechanisms to cope with packet loss or minimize its impact on gameplay.
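Such mechanisms start with simply detecting loss: give every packet a sequence number and look for gaps on the receiving side. A minimal sketch (the function name and numbers are illustrative):

```python
# Infer packet loss from gaps in received sequence numbers.

def loss_rate(received_seqs):
    """Fraction of packets lost across the observed sequence-number span."""
    expected = max(received_seqs) - min(received_seqs) + 1
    return 1 - len(set(received_seqs)) / expected

# Packets 3 and 6 never arrived:
print(round(loss_rate([1, 2, 4, 5, 7, 8, 9, 10]), 2))  # 0.2
```

Once loss is detected, a protocol can either retransmit (as reliable transports do) or simply send fresher state and let the client cope, which is the more common choice for fast-paced games.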
The tick rate, or simulation rate, refers to the frequency at which the game generates and manages data each second. During a tick, the server processes the received data and performs simulations before sending the outcomes to the clients. The server then rests until the next tick. A faster tick rate means that clients will get new data from the server sooner, reducing the delay between the player and server and improving hit registration responsiveness.
A tick rate of 60Hz is more efficient than 30Hz because it decreases the time between simulation steps, leading to less delay. Additionally, this rate allows the server to transmit 60 updates per second, which reduces the round trip delay between the client and server by around 33ms (-16ms from client to server and another -16ms from server to client).
However, gameplay issues such as rubber banding, teleporting players, rejected hits, and physics failures may arise when the server struggles to process ticks within the allotted interval for each tick rate. For instance, if a server is set to a 60Hz tick rate but cannot complete the necessary simulations and data transmission within the approximately 16.67 milliseconds (1 second / 60) available for each tick, these issues can occur.
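The tick budget described above can be sketched as a fixed-timestep loop; `simulate` and `send_state` are hypothetical placeholders for the real server work:

```python
import time

TICK_RATE = 60
TICK_DT = 1.0 / TICK_RATE          # ~16.67 ms budget per tick

def run_server(simulate, send_state, ticks):
    """Fixed-timestep loop: simulate, broadcast, then sleep away whatever
    is left of the tick budget. Returns how many ticks ran over budget."""
    overruns = 0
    for _ in range(ticks):
        start = time.perf_counter()
        simulate(TICK_DT)          # advance the game world one step
        send_state()               # broadcast the results to clients
        elapsed = time.perf_counter() - start
        if elapsed > TICK_DT:
            overruns += 1          # budget blown: no rest before next tick
        else:
            time.sleep(TICK_DT - elapsed)
    return overruns
```

A healthy server returns 0; if `simulate` regularly takes longer than ~16.67 ms, `overruns` climbs and the symptoms described above (rubber banding, rejected hits) start to appear.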
As we discussed in the sections on delay and packet loss, delay is a problem we need to address, and jitter makes creating a seamless gaming experience even more challenging.
If we ignore delay and don't take steps to mitigate it, we end up with a "dumb terminal." A dumb terminal doesn't need to comprehend the simulation it displays; it only sends the client's input to the server and receives the resulting state from the server to display.
This approach prioritizes accuracy, ensuring the correct user state is always displayed. However, it has a significant drawback: every action must travel to the server and back before the player sees its effect, so the entire RTT is felt as input delay, and any jitter or packet loss shows up directly on screen.
Therefore, while the "dumb terminal" approach ensures accurate state representation, it can potentially lower the quality of the gaming experience due to its inherent limitations.
When we combine the chaos of RTT oscillations and jitter, the result is an undesirable gaming experience. Infrequent updates from the server, as well as poor network conditions, can cause visual instability. However, there are ways to minimize the impact of delay and jitter, like client-side interpolation.
With client-side interpolation, the client smoothly interpolates the state of objects over time instead of simply relying on their positions sent from the server. This method is cautious, as it only smooths the transition between the actual states sent from the server.
In a topology with an authoritative server, the client typically displays a state roughly half the RTT behind the actual simulation state on the server. However, for client-side interpolation to function correctly, the client must lag behind the last state transmitted from the server, which adds further delay equal to the interpolation period. To prevent stuttering, this period should be no shorter than the packet-sending interval; that way, once the client finishes interpolating to the previous state, it will have received a new state and can repeat the process.
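A minimal sketch of such an interpolation buffer, assuming snapshots of a single one-dimensional position arrive with millisecond timestamps (all names are illustrative):

```python
# Render a remote entity slightly in the past and linearly interpolate
# between the two server snapshots that bracket the render time.

def interpolate(snapshots, render_time_ms):
    """snapshots: time-ordered list of (timestamp_ms, position)."""
    for (t0, p0), (t1, p1) in zip(snapshots, snapshots[1:]):
        if t0 <= render_time_ms <= t1:
            alpha = (render_time_ms - t0) / (t1 - t0)
            return p0 + (p1 - p0) * alpha
    return snapshots[-1][1]        # ran out of buffered states: hold the last

snaps = [(0, 0.0), (50, 5.0), (100, 10.0)]
print(interpolate(snaps, 75))  # 7.5, halfway between the last two snapshots
```

The fall-through case at the end is exactly the stutter scenario: if the render time passes the newest snapshot before fresh data arrives, the client has nothing left to interpolate toward.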
To minimize the impact of non-periodic state updates, some developers use the extrapolation method, also known as Dead Reckoning (DR). This technique involves predicting a game object's future position, rotation, and velocity based on its last known values. For instance, if the player sends a packet every third frame with the object's current position, rotation, and velocity, Bolt's extrapolation algorithm can estimate where the object will be for the next three frames until new data arrives.
In this case, it's important to note that we can still use the same guessing method if a new packet doesn’t arrive as predicted. But the longer we guess into the future, the higher the chances of making an error; to address this, the DR algorithm utilizes "projected velocity blending" to make corrections once actual data is received.
Extrapolation reduces the need for artificial packet delays in gaming, resulting in faster displays of real-time actions for players. It also deals with lost or missing packets more effectively when working with games with many players. This means that missing position, rotation, and velocity information does not cause any delays in gameplay.
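The prediction-and-correction cycle can be sketched as follows; `blend_correction` is a deliberately simplified stand-in for projected velocity blending, and all names and numbers are illustrative:

```python
# Dead reckoning in one dimension: predict from the last known state, then
# blend toward the authoritative position when a correction arrives.

def extrapolate(pos, vel, dt):
    """Guess where the object will be `dt` seconds past its last update."""
    return pos + vel * dt

def blend_correction(predicted, actual, blend=0.2):
    """Close a fraction of the prediction error each frame instead of
    snapping, so corrections don't look like teleports."""
    return predicted + (actual - predicted) * blend

pos = extrapolate(10.0, 4.0, 0.25)   # 11.0: predicted position
pos = blend_correction(pos, 12.0)    # nudged toward the server's 12.0
print(pos)
```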
Although DR can be helpful, it is not as precise as interpolation. Additionally, using DR can be challenging in an FPS game where you want authoritative shooting with lag compensation: because extrapolation involves estimating values, each player may see slightly different things on their screen. With extrapolated values, you could aim directly at a player moving perpendicular to you and still miss the shot.
Interpolation and extrapolation on the client side reduce delays, but the game can still feel "sluggish". This is where "Client-Side Prediction" comes in: immediately after pressing a button, the player character starts moving, removing the feeling of sluggishness. If done correctly, this prediction will be almost identical to the server's calculations.
Client-Side Prediction creates differences between what the server and the client see, which can lead to "unexpected" visual effects. It's important to keep track of not-yet-processed player actions and reapply them after each server update.
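A minimal sketch of that reapplication step (often called reconciliation), with a one-dimensional movement rule standing in for real game logic; all names are illustrative:

```python
# Client-side prediction with reconciliation: inputs are applied locally
# right away, kept until the server acknowledges them, and replayed on top
# of every authoritative state update.

def apply_input(pos, move):
    """Stand-in movement rule: one input shifts a 1D position."""
    return pos + move

def reconcile(server_pos, pending_inputs):
    """Rebase the predicted position on the server's authoritative state by
    re-applying every input the server hasn't processed yet."""
    pos = server_pos
    for move in pending_inputs:
        pos = apply_input(pos, move)
    return pos

# The server acknowledged older inputs; two newer ones are still in flight:
print(reconcile(5.0, [1.0, -0.5]))  # 5.5
```

For this to stay consistent, `apply_input` must be deterministic and identical on client and server; otherwise the replayed result drifts from what the server will eventually compute.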
Despite these improvements, there is still a significant delay between any server update and the moment the player sees it. This leads to scenarios where a player, for example, makes a perfect shot but misses because they were aiming at another player's outdated position. Lag compensation is a controversial technique aimed at solving exactly this problem.
The principle of lag compensation is that the server can recreate the world state at any time. When the server receives your data packet with information about the shot, it recreates the world at the moment of the shot and decides whether it hit or missed.
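The rewind step can be sketched as a lookup into a short history of stored states; timestamps here are milliseconds and all names are illustrative:

```python
# The server keeps a short history of world states and, on receiving a shot,
# rewinds to the stored state closest to the moment the shooter fired.

def rewind(history, shot_time_ms):
    """history: list of (timestamp_ms, world_state), oldest first.
    Returns the stored state nearest to `shot_time_ms`."""
    return min(history, key=lambda entry: abs(entry[0] - shot_time_ms))[1]

history = [(0, "world@0"), (50, "world@50"), (100, "world@100")]
print(rewind(history, 60))  # world@50, the snapshot nearest the shot
```

In practice the server would derive `shot_time_ms` from its own latency estimate for that client rather than trusting a client-sent timestamp, for exactly the cheating reason discussed next.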
Unfortunately, lag compensation is susceptible to cheating. If the server trusts player-sent timestamps, a player can "trick" the server by sending a shot later but faking that it was performed some time before that.
For this reason, lag compensation should be avoided. The three client-side techniques described above don't require the server to trust the client, and so are not susceptible to abuses like this.
In this series, we'll explore all these techniques in more detail and learn how to transmit data in the fastest, most compact, and most reliable way.
Your players will be gaming on various devices, behind different router models, and serviced by a diverse selection of providers. Sometimes they'll be connected through an optical fiber cable for high-speed internet; other times, they might use a Wi-Fi connection, or even 3G mobile internet. This means network conditions can vary widely, affecting latency, packet loss, and overall connection stability. As a game developer, it's crucial to understand these different environments and design your network handling to ensure the best possible gaming experience. A challenging task, no doubt, but when done properly, implementing these practices at a high level is what sets successful multiplayer games apart from the rest.
In the next section, we'll discuss the main data transmission protocols, like TCP, UDP, and WebSockets.