packet compression
Racer1
Join Date: 2002-11-22 Member: 9615Members
From the "curious developer" in me...
What kind of compression are you talking about? Will you use a standard compression scheme for all packets, or compress differently based on the type of packet? Will you use an off-the-shelf library, or have a bit of fun and design your own? Huffman coding is common for limited data sets, but it can also be counter-productive when unusual data shows up. Will you dynamically update the compression model as you go, or choose a more rigid (but less CPU-intensive) method with a fixed dictionary?
Comments
If you use any compression that acts over the whole packet and an unusual packet turns out to get bigger when you compress it, you just send it uncompressed. If you're very stingy it doesn't even have to cost one extra bit; there is probably somewhere to expropriate one bit without harm (say, use the most significant bit of the sequence number to denote whether or not compression was used).
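The compress-or-fall-back idea above can be sketched in a few lines. This is a minimal illustration, not any engine's actual wire format: the 16-bit header, the 15-bit sequence field, and the function names are all invented for the example, and any real compressor (here zlib, standing in for whatever you'd actually use) can be plugged in.

```python
import zlib

SEQ_BITS = 15
COMPRESSED_FLAG = 1 << SEQ_BITS  # steal the MSB of a 16-bit sequence field

def pack_header(seq: int, compressed: bool) -> int:
    """Pack a 15-bit sequence number plus a 1-bit compression flag."""
    assert 0 <= seq < (1 << SEQ_BITS)
    return seq | (COMPRESSED_FLAG if compressed else 0)

def unpack_header(header: int) -> tuple[int, bool]:
    """Recover (sequence number, was-compressed flag)."""
    return header & ~COMPRESSED_FLAG, bool(header & COMPRESSED_FLAG)

def encode_packet(seq: int, payload: bytes, compress) -> bytes:
    """Compress the payload, but fall back to raw bytes if that would grow it."""
    squeezed = compress(payload)
    if len(squeezed) < len(payload):
        header, body = pack_header(seq, True), squeezed
    else:
        header, body = pack_header(seq, False), payload
    return header.to_bytes(2, "big") + body

# 100 identical bytes shrink nicely, so the flag gets set...
pkt = encode_packet(7, b"a" * 100, zlib.compress)
seq, was_compressed = unpack_header(int.from_bytes(pkt[:2], "big"))
assert seq == 7 and was_compressed
# ...while a tiny payload would only grow under zlib, so it goes out raw.
pkt2 = encode_packet(8, b"\x01", zlib.compress)
_, was_compressed2 = unpack_header(int.from_bytes(pkt2[:2], "big"))
assert not was_compressed2
```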
There are also lots of little tricks for data-specific lossy compression, like converting an angle from a float to a 1-byte integer, which has a maximum error of about 0.7 degrees (if it's just cosmetic, like the direction a player is facing, nobody is going to notice or care). If it's not cosmetic, such as a client telling the server which direction he is pointing his gun, you still might be able to get away with a 16-bit integer instead of a float; this gives you a maximum error of about 10 arc seconds (corresponding to less than 0.5 mm of error at 10 meters distance).
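The angle trick above is simple uniform quantization, roughly like this (function names are illustrative): with 8 bits each step is 360/256 ≈ 1.41°, so the rounding error is at most half a step, ~0.7°; with 16 bits a step is 360/65536 ≈ 0.0055°, for ~10 arc seconds of maximum error.

```python
def quantize_angle(degrees: float, bits: int) -> int:
    """Map an angle in degrees to an unsigned integer of the given width."""
    steps = 1 << bits
    return round((degrees % 360.0) * steps / 360.0) % steps

def dequantize_angle(q: int, bits: int) -> float:
    """Map the integer back to an angle; error is at most half a step."""
    steps = 1 << bits
    return q * 360.0 / steps

angle = 123.4
q8 = quantize_angle(angle, 8)            # fits in one byte on the wire
err8 = abs(dequantize_angle(q8, 8) - angle)
assert err8 <= 360.0 / 256 / 2 + 1e-9    # half a step: ~0.7 degrees
```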
However!
In FPS games it's common to use the Quake III networking model. It's a 100% unreliable protocol; the entire thing uses UDP. You serialize the entire world state into a huge bit vector. For each client you keep a copy of the last bit vector (aka world state) that they received successfully, and you XOR the two together. Any bit that is unchanged between the two fields comes out as 0, so the result is a final bit vector where anything that was unchanged is all zeros, and you have tons and tons of zeros. Then! You use Run Length Encoding (RLE) to compress the final bit vector, which capitalizes on the fact that there are so many zeros.
Once the RLE compression is done you generally have a very tiny bit of data representing only the changes between the two world states. This is sent to the client, where the RLE is expanded back into that bit vector with tons of zeros; the client still has the last bit vector it received successfully, so it again XORs the two, and the output is the FULL world state, which it then uses to update its game objects. And if the transmission fails, the server just moves on and sends the next world state, no need to try to resend lost packets.
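The XOR-then-RLE pipeline above can be sketched like so. This is a toy illustration of the idea, working on bytes rather than individual bits, and the (run-length, value) pair encoding is invented for the example rather than being Quake III's actual wire format.

```python
def xor_bytes(a: bytes, b: bytes) -> bytes:
    """Delta: unchanged positions come out as zero."""
    assert len(a) == len(b)
    return bytes(x ^ y for x, y in zip(a, b))

def rle_encode(data: bytes) -> bytes:
    """Encode as (run length, value) byte pairs; long zero runs shrink a lot."""
    out, i = bytearray(), 0
    while i < len(data):
        run = 1
        while i + run < len(data) and data[i + run] == data[i] and run < 255:
            run += 1
        out += bytes([run, data[i]])
        i += run
    return bytes(out)

def rle_decode(data: bytes) -> bytes:
    out = bytearray()
    for i in range(0, len(data), 2):
        out += bytes([data[i + 1]]) * data[i]
    return bytes(out)

# Server side: XOR current state against the last state this client ACKed.
last_acked = bytes([5, 5, 0, 9, 1] + [0] * 95)
current    = bytes([5, 7, 0, 9, 1] + [0] * 95)
wire = rle_encode(xor_bytes(current, last_acked))

# Client side: expand the RLE, then XOR with its own copy of last_acked
# to recover the FULL current world state.
restored = xor_bytes(rle_decode(wire), last_acked)
assert restored == current
assert len(wire) < len(current)  # mostly-unchanged states compress hard
```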
It's pretty damn spiffy :D
Anyway, I'm fairly confident they are using at least some derivative of this, b/c of Max's current task:
"Add support for reliable network messages"
I bring this up b/c in the Quake III model the compression is part and parcel of the protocol, so no further compression is needed.
Yes.
QUOTE (spellman23 @ Sep 25 2010, 04:07 PM): "I would imagine that since there's more going on games may have swapped to TCP to ensure reliable delivery."
That's a non sequitur. If there's 'more going on' UDP is needed more than ever.
TCP will wait seconds if it has to in order to ensure reliable delivery. It will send a packet and wait for an ACK; if no ACK is received, or a resend is requested (e.g. because the checksum failed), it will try resending the same packet again and again until it receives an ACK, and only then will it move on to the next packet. This is a train wreck in any networked game that approximates real time.
The correct behaviour in a real-time application is to keep pumping out packets without waiting for replies, never re-sending old and hopelessly outdated world states that were lost. The data that has to be reliably delivered makes up a small fraction of bandwidth (e.g. damage messages, "player x killed player y"), so you just resend that information in every packet until the client ACKs a packet that contains it.
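The resend-until-ACKed scheme above can be sketched as a small bookkeeping class. All the names here are illustrative, not an actual engine API: every outgoing state packet piggybacks all currently pending reliable messages, and a message is retired once the client ACKs any packet that carried it.

```python
class ReliableChannel:
    """Piggyback reliable messages on an unreliable (UDP-style) packet stream."""

    def __init__(self):
        self.seq = 0
        self.pending = {}     # message id -> payload, not yet ACKed
        self.sent_in = {}     # packet seq -> set of message ids it carried
        self.next_msg_id = 0

    def queue(self, payload: bytes) -> int:
        """Register a message that must eventually reach the client."""
        msg_id = self.next_msg_id
        self.next_msg_id += 1
        self.pending[msg_id] = payload
        return msg_id

    def build_packet(self, state: bytes) -> tuple[int, bytes]:
        """Every outgoing packet carries ALL still-pending reliable messages."""
        self.seq += 1
        self.sent_in[self.seq] = set(self.pending)
        return self.seq, state + b"".join(self.pending.values())

    def on_ack(self, acked_seq: int) -> None:
        """An ACKed packet confirms every reliable message it carried."""
        for msg_id in self.sent_in.pop(acked_seq, ()):
            self.pending.pop(msg_id, None)

ch = ReliableChannel()
ch.queue(b"player x killed player y")
s1, p1 = ch.build_packet(b"<state1>")
s2, p2 = ch.build_packet(b"<state2>")
assert b"killed" in p1 and b"killed" in p2  # resent in every packet...
ch.on_ack(s2)                               # ...until any carrying packet is ACKed
s3, p3 = ch.build_packet(b"<state3>")
assert b"killed" not in p3
```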
This performs much better than TCP even for reliable messages. If your round-trip latency is 100 ms, a packet that fails its checksum will cause the client to request a resend, and it will take ~100 ms to receive it; a lost packet means waiting even longer. If the server uses UDP and just keeps pumping out 50 packets per second, the cost of a missing or malformed packet is an additional ~20 ms delay before you receive the reliable message.
If the server keeps pumping out 50 packets per second and two of them happen to arrive out of order (e.g. the first packet takes 120 ms to arrive and the second takes 80 ms, so the second reaches the client before the first), that's no problem. All it means is that the client receives a world state that is a little more up to date; no big deal.
UDP is the logical protocol here, and it will always be used for the best online gaming results.
Sounds like there are some pretty up to speed people adding to the conversation already, but how about the inventor of online gaming (basically).
Explain some more games programming stuff! Client side prediction, BSP trees (how games know what to drop and what not to), pathfinding :)
I may be asking too much /sadface
A server/client with fast CPUs but a slow link (such as your Internet connection) could reduce overall latencies and bandwidth requirements while a server/client with slow CPUs and a fast link (such as a LAN) would add latency but still save some bandwidth (although in that case it would serve no purpose). It would be necessary to choose a compression algorithm that aims to be very fast rather than compressing very well.
Delta encoding, or "delta compression" as Valve calls it on their wiki above (a bit confusing to call it compression IMO, but I suppose it's true), is probably always a sane thing to do, though, even if you also use a traditional compression algorithm. It's fast, and it can save a lot of bandwidth and CPU cycles, since instead of sending a full snapshot in every update you only send/receive what has changed since the last one.
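One way to picture that send-only-what-changed idea is a field-level diff between two snapshots. This is a sketch of the concept only, not Valve's actual format; the field names and functions are made up for illustration.

```python
def delta_encode(old: dict, new: dict) -> dict:
    """Keep only the fields whose values changed since the last snapshot."""
    return {k: v for k, v in new.items() if old.get(k) != v}

def delta_apply(old: dict, delta: dict) -> dict:
    """Client side: merge the delta into the last known snapshot."""
    merged = dict(old)
    merged.update(delta)
    return merged

last = {"x": 10.0, "y": 4.0, "health": 100, "weapon": "rifle"}
now  = {"x": 10.5, "y": 4.0, "health": 100, "weapon": "rifle"}

delta = delta_encode(last, now)
assert delta == {"x": 10.5}            # only one changed field goes on the wire
assert delta_apply(last, delta) == now # client reconstructs the full snapshot
```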
Or if you mean delta encoding, then yeah it feels kinda weird calling it compression. :l
Look at it this way: TCP does not have access to special hardware or anything like that which could cause it not to lose packets. TCP and UDP are both built on top of IP, with UDP being a much lower level of abstraction (and therefore more flexible).
<a href="http://twitter.com/#!/max_mcguire/status/25824902671" target="_blank">http://twitter.com/#!/max_mcguire/status/25824902671</a>