
Again, the response provided is "avoid sending a ping packet to each location" when you are about to send 50 packets per second for the duration of the game. That does not make sense to me, or to the people who upvoted my remark.

But then you decide to believe we didn't read the article, and that we wouldn't understand if you clarified. Ok dude! Have a rotten day too, what am I supposed to say?



Stop and think for yourself a bit.

Your game launches with 1M+ players joining at the same time. Can you think of any reason why having 1M players all pinging the same ping servers in the datacenter at the same time might not be a great idea?
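Rough numbers, purely hypothetical, just to make the scale concrete:

    # Back-of-envelope launch load, with made-up numbers (1M players,
    # ~100 pings per datacenter over a ~10 second window).
    players = 1_000_000
    pings_per_datacenter = 100
    window_seconds = 10

    pings_per_server = players * pings_per_datacenter      # every client hits this datacenter's ping server
    print(f"{pings_per_server:,} pings per ping server")    # 100,000,000
    print(f"~{pings_per_server / window_seconds:,.0f} packets/sec at peak, plus the same again in pongs")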

What if a ping server goes down? Great. Now people can't join servers in that datacenter. You've created an additional component to your architecture that needs to be up and working.

Oh, you'll just have multiple ping servers per-datacenter to fix this via redundancy? Not so fast: if hash modulo n is real, as described in the article, each of these ping servers can show different performance, even though they're in the same physical datacenter, because they have different IPs.
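A minimal sketch of why, assuming the mechanism behind "hash modulo n" is ECMP-style flow hashing (the hash function and path count here are made up; real routers use their own):

    import hashlib

    def ecmp_next_hop(src_ip, dst_ip, src_port, dst_port, proto, num_paths):
        # Toy model of ECMP: hash the flow 5-tuple, take it modulo the number
        # of equal-cost paths. Changing only the destination IP can change the
        # path, so two ping servers in the same rack can see different routes.
        key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
        return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % num_paths

    # Same client, same datacenter, two ping servers with different IPs:
    print(ecmp_next_hop("203.0.113.7", "198.51.100.10", 40000, 40000, "udp", 4))
    print(ecmp_next_hop("203.0.113.7", "198.51.100.11", 40000, 40000, "udp", 4))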

Oh, you'll just have one anycast address per-datacenter? No. Most game devs don't have the resources to actually create and maintain their own network for their games, and instead host servers in a mix of bare metal and cloud providers. Implementing a unified anycast approach across multiple providers is probably a no-go. Game devs make games, not networks.

OK, you'll just have multiple ping servers per-datacenter and take the minimum value? It can work, and it's better than taking the average, but now you're sending n x m pings, where n is the number of datacenters and m is the number of ping servers per-datacenter.
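Something like this, with made-up datacenter and server names:

    # Take the minimum RTT per datacenter across its m ping servers.
    # Cost: each client sends n * m pings instead of n.
    def best_rtt_per_datacenter(rtt_samples):
        # rtt_samples: {datacenter: {ping_server: rtt_ms}}
        return {dc: min(per_server.values()) for dc, per_server in rtt_samples.items()}

    samples = {
        "iad": {"iad-ping-1": 31.0, "iad-ping-2": 48.0},  # same building, different IPs, different routes
        "fra": {"fra-ping-1": 95.0, "fra-ping-2": 96.0},
    }
    print(best_rtt_per_datacenter(samples))  # {'iad': 31.0, 'fra': 95.0}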

You'll only ping the ping servers near the player? Unfortunately, you can't just ping the nearest few datacenters, because ip2location isn't foolproof and a non-trivial number of players will end up at the wrong location, or even at null island (0,0) lat/long. So 50ms is not a sufficient time to wait for the RTT. Even if you sent only one ping packet, it would be more like 250ms at minimum, and realistically at least 1 second, because you're sending packets over UDP and it's not reliable.

You'd probably also want to extend this to 10 seconds, because Wi-Fi often has significant low-frequency jitter, which gives a falsely high RTT reading if you're unlucky, and the period of this jitter is often several seconds long. Realistically, this brings you up to around 10 seconds of pinging time, across which you take the minimum RTT seen, which is the most indicative of the true RTT between client and ping server for the current route.
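A minimal sketch of what that client-side measurement ends up looking like, assuming a trivial UDP echo service on the ping server (the rates and timeouts here are illustrative):

    import socket, time

    def min_rtt(host, port, duration=10.0, interval=0.1, timeout=0.25):
        # Ping a (hypothetical) UDP echo endpoint for `duration` seconds and
        # return the minimum RTT seen in ms, the value least affected by
        # Wi-Fi jitter. Lost pings simply time out, since UDP is unreliable.
        sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
        sock.settimeout(timeout)
        best = None
        deadline = time.monotonic() + duration
        seq = 0
        while time.monotonic() < deadline:
            sent = time.monotonic()
            try:
                sock.sendto(seq.to_bytes(4, "big"), (host, port))
                sock.recvfrom(64)
                rtt_ms = (time.monotonic() - sent) * 1000.0
                best = rtt_ms if best is None else min(best, rtt_ms)
            except socket.timeout:
                pass
            seq += 1
            time.sleep(interval)
        return best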

Also, many network accelerators (not anycast ones like AWS Global Accelerator, but active-probing-based ones) require spin-up time and need to perform their own pings, perhaps for an extended period like 10 seconds, for the same reasons as above, to find the correct route. Add this to your own ping time as above, and now you could be pinging for 20 seconds before getting a realistic route to the ping server. How many players do you know who are willing to wait 20 seconds in a lobby before playing a game?

This spin-up time and probing can also become quite expensive for network accelerators that use active probing to find the best route, and is best avoided. Perhaps the key thing you're missing here is that many active-probing network accelerators don't accelerate all players, just the ones having bad network performance at any given time (around 10% of players), so the load of all players doing pings is non-trivial relative to this. Think egress bandwidth for the pongs as well.

Next, if the internet has the property that hash modulo n is real, like I describe in the article, even if you could do the pings in a reasonable amount of time, and the cost wasn't an issue for you -- WHY would you then do it in a way that results in a significant disconnect between the measured latency to the ping server and the actual in-game latency for some players? And why would you want to put in a noisy input that can fluctuate, when you can have a rock-solid, steady input taken across a long period of time, like 1 day, that is actually representative of the topology of the internet?

Especially, why would you do it when you'll already be tracking latency per-player, per-match at minimum for your own visibility, and you could just batch process this data at the end of the day to take the average latency at each (lat, long) square and output a greyscale bitmap where [0,255] indicates the latency from that lat/long square to the datacenter? You just need one greyscale image per-datacenter.
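A minimal sketch of that batch job, assuming 1-degree lat/long squares and clamping the average RTT in milliseconds into [0,255] (the input record format is made up):

    import numpy as np

    def build_latency_map(match_records, width=360, height=180):
        # match_records: iterable of (lat, long, rtt_ms) observed in-game,
        # for one datacenter, over one day.
        total = np.zeros((height, width), dtype=np.float64)
        count = np.zeros((height, width), dtype=np.int64)
        for lat, lon, rtt in match_records:
            x = int(lon + 180) % width      # one cell per degree of longitude
            y = int(lat + 90) % height      # one cell per degree of latitude
            total[y, x] += rtt
            count[y, x] += 1
        avg = np.divide(total, count, out=np.full_like(total, 255.0), where=count > 0)
        return np.clip(avg, 0, 255).astype(np.uint8)  # one greyscale image per datacenter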

Now you can look up latency from any lat/long square to any datacenter in zero time. It's a steady input, and it's the most accurate post-acceleration RTT value you can likely get for players in that lat/long square to each datacenter in question.
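The lookup against that map is then just an array index (same hypothetical 1-degree grid as in the sketch above):

    def lookup_latency(latency_map, lat, lon):
        # Constant-time lookup of the expected RTT (ms) from a lat/long square
        # to the datacenter this map was built for.
        y = int(lat + 90) % latency_map.shape[0]
        x = int(lon + 180) % latency_map.shape[1]
        return int(latency_map[y, x])

    # e.g. pick the best datacenter for a player with one byte read per map:
    # best_dc = min(maps, key=lambda dc: lookup_latency(maps[dc], player_lat, player_lon))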

And finally, why would you even care so much about the ping server approach, when I've already told you it converges to the latency map anyway? FFS.


> Especially, why would you do it when you'll already be tracking latency per-player, per-match at minimum for your own visibility, and you could just batch process this data at the end of the day to take the average latency at each (lat, long) square and output a greyscale bitmap where [0,255] indicates the latency from that lat/long square to the datacenter? You just need one greyscale image per-datacenter.

Given that lat/long is a guess, and even if it is accurate it doesn't correspond very well to network latency, why wouldn't you use something like the source /24 or /48 rather than lat/long? You don't get a pretty picture that way, I guess.
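For what it's worth, the same aggregation keyed by source prefix instead of a lat/long cell is a small change (a sketch, with an assumed input format):

    import ipaddress
    from collections import defaultdict

    def prefix_key(ip_str):
        # /24 for IPv4, /48 for IPv6.
        ip = ipaddress.ip_address(ip_str)
        return str(ipaddress.ip_network(f"{ip}/{24 if ip.version == 4 else 48}", strict=False))

    def average_rtt_by_prefix(match_records):
        # match_records: iterable of (source_ip, rtt_ms) for one datacenter.
        sums, counts = defaultdict(float), defaultdict(int)
        for ip, rtt in match_records:
            key = prefix_key(ip)
            sums[key] += rtt
            counts[key] += 1
        return {k: sums[k] / counts[k] for k in sums}

    print(prefix_key("203.0.113.77"))         # 203.0.113.0/24
    print(prefix_key("2001:db8:abcd:12::1"))  # 2001:db8:abcd::/48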


I'm not reading a 3-page comment that starts with "stop and think for yourself a bit". Would you?

Someone else let me know if I assumed wrong, but no thanks.


FIN



