
It would be really cool if it didn't just show the ping, but how much worse it is compared to the theoretical optimum (speed of light in fiber optic medium, which I believe is about 30% slower than c).

I raise this because I've been in multiple system architecture meetings where people were complaining about latency between data centers, only to later realize that it was pretty close to what is theoretically possible in the first place.



I'm under the impression that within the hyperscalers (and probably the big colo/hosting firms, too), this is known. It's important to them and to customers, especially when a customer is trying to architect an HA or DR system and needs to ensure they don't inadvertently choose a region (or even a zone that isn't physically in the same place as other zones in the same region) with "artificially" high latency from the primary zone (which can happen for all kinds of legitimate reasons).

This is not an uncommon scenario. My current employer specializes in SAP migrations to cloud and this is now a conversation we have with both AWS & GCP networking specialists when pricing & scoping projects... after having made incorrect assumptions and being bitten by unacceptable latency in the past.


Doesn't look like this is a ping[0]! Which is good. Rather, it is a socket stream connecting over tcp/443. Ping (ICMP) would be a poor metric.

[0] https://github.com/mda590/cloudping.co/blob/8918ee8d7e632765...


ping is synonymous with echo-request, which is largely transport agnostic.

but you're right


why 443? are you assuming ssl here? serious question, I'm not sure. But if it is, wouldn't it be hard to disregard the weight of SSL in the metric?


The code closes the connection immediately after opening a plain TCP socket, so no SSL work is done. Presumably 443 is just a convenient port to use.
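
For the curious, a minimal sketch of that kind of measurement in Python (my own illustration, not the repo's actual code):

    import socket
    import time

    def tcp_connect_ms(host, port=443, timeout=5.0):
        # Time a bare TCP handshake; the connection is closed immediately,
        # so no TLS handshake (or any other data) is ever exchanged.
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=timeout):
            pass
        return (time.perf_counter() - start) * 1000

    print(tcp_connect_ms("dynamodb.us-east-1.amazonaws.com"))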


tcp/443 is likely an open port on the target service (DynamoDB, based on the domain name). TLS is not involved.

ICMP ECHO would be a bad choice as it is deprioritized by routers[0].

[0] https://archive.nanog.org/sites/default/files/traceroute-201...


The script connects to the well-known 'dynamodb.' + region_name + '.amazonaws.com' server, which expects HTTPS.


You would have to map out the cables to do that.

Light in fiber optic cable travels at roughly 70% of the speed of light, i.e. ~210,000 km/s. Earth's circumference is ~40,000 km, so a direct route from one side of the Earth to the other (~20,000 km) would take roughly 100 milliseconds one way, 200 ms round trip.
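
The same back-of-the-envelope in code:

    C_KM_S = 299_792               # speed of light in vacuum, km/s
    FIBER_KM_S = 0.7 * C_KM_S      # ~210,000 km/s in glass
    half_circumference_km = 40_000 / 2

    one_way_ms = half_circumference_km / FIBER_KM_S * 1000
    print(f"{one_way_ms:.0f} ms one way, {2 * one_way_ms:.0f} ms round trip")
    # -> 95 ms one way, 191 ms round trip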


It’s pretty trivial to do this; any big fiber company will provide you with Google Earth KMZ files (protected by NDA) when you're considering a purchase. This is absolutely necessary when designing a redundant network or if you want lower latency.


Since light in a vacuum travels at 100% of the speed of light (by definition), I have wondered if latency over far distances could be improved by sending the data through a constellation of satellites in low Earth orbit instead. Though I suspect the set of tradeoffs here (much lower throughput, much higher cost, more jitter in the latency due to satellites constantly moving relative to the terrestrial surface) probably wouldn't make this worth it for a slight decrease in latency for any use case.


Hollow core fiber (HCF) is designed to substantially reduce the latency of normal fiber while maintaining equivalent bandwidth. It's been deployed quite a bit for low latency trading applications within a metro area, but might find more uses in reducing long-haul interconnect latency.


Absolutely! The distance to LEO satellites (like SpaceX's Starlink or Amazon's Kuiper) is low enough that you would beat the latency of fiber paths once the destination is far enough away.
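
A rough sketch of where the break-even point lands (assuming a ~550 km Starlink-like shell altitude, straight up/down hops at the endpoints, inter-satellite links at c, and fiber at ~0.7c along the same ground path; these are assumptions, not Starlink specs):

    ALT_KM = 550          # assumed LEO shell altitude, km
    FIBER_FRACTION = 0.7  # fraction of c in glass

    # LEO wins when (d + 2*ALT_KM)/c < d/(FIBER_FRACTION*c); c cancels out:
    d = 2 * ALT_KM / (1 / FIBER_FRACTION - 1)
    print(f"LEO path wins beyond roughly {d:.0f} km of ground distance")
    # -> roughly 2567 km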


In the past we just had line of sight microwave links all over the US instead.

I think it's just too damn expensive for your average webapp to cut out ten milliseconds from backend latency.


Yes. There are companies that sell microwave links over radio relay towers to various high frequency traders.


I am pretty sure this was one of the advertised strengths of Starlink. Technically the journey is a bit longer, but because you can rely on the full speed of light, you still come out ahead.


Cable mapping would be nice, but 100 ms is a meaningfully long amount of time, so even a straight-line comparison is worthwhile.


clicking around that map, I don't see any examples where the latency is a long way out of line with the distance.

Obviously it's theoretically possible to do ~40% better by using hollow fibers and as-the-crow-flies fiber routing, but few are willing to pay for that.


The 'practical' way to beat fiber optics is to use either

(i) a series of overground direct microwave connections (often used by trading firms)

(ii) a series of laser links between low-altitude satellites. This would be faster in principle for long distances, and presumably Starlink will eventually offer this service to people who are very latency-sensitive.


Low-bandwidth/low-latency people tend to also demand high reliability and consistency. A low-orbit satellite network might be fast but, because sats move too quickly, cannot be consistent in that speed. Sats also won't ever connect data centers other than perhaps for administrative stuff. The bandwidth/reliability/growth potential just isn't there compared to bundles of traditional fiber.


> Low-bandwidth/low-latency people tend to also demand high reliability and consistency.

For trading applications, people will absolutely pay for a service that is hard down 75% of the time and has 50% packet loss the rest, but saves a millisecond over the fastest reliable line. Because otherwise someone else will be faster than you when the service is working.

They can get reliability and consistency with a redundant slower line.


Can you provide a source for this statement? The redundancy needed to transmit with desirable reliability at 50% packet loss would, I imagine, very quickly eat into any millisecond gains, even with theoretically optimal coding.

Someone more familiar with Shannon than I could probably quickly back-of-the-napkin this.


Financial companies have taken over and upgraded/invested in microwave links because they can be comparatively economical for getting "as the crow flies" distances between sites:

https://www.latimes.com/business/la-fi-high-speed-trading-20...

https://arstechnica.com/information-technology/2016/11/priva...

https://en.wikipedia.org/wiki/TD-2#Reemergence

I'm not sure about the high packet loss statement, but it wouldn't surprise me if it's true, provided the latency is low enough to take advantage of arbitrage opportunities often enough to justify the cost.


Traders wouldn't use redundancy etc. Whenever a packet with info arrives, they would trade on that info (e.g. "$MSFT stock is about to go down, so sell before it drops!"). If there is packet loss, then some info is lost, and therefore some profitable trading opportunities are missed. But that's okay.

There are thousands of such opportunities each second - they can come from consumer 'order flow', i.e. information that someone would like to buy a stock tells you the price will rise slightly, so go buy ahead of them and sell after them in some remote location.


There is also a market for stocks that trade on different exchanges, resulting in fleeting differences in price between exchanges. Those who learn of price moves first can take advantage of such differences. In such cases, all you need to transmit is the current stock price. The local machine can then decide to buy or sell.


There's definitely a few billion a year in revenue for Starlink if they sell very low latency, medium bandwidth connections between Asia, the US, Europe and Australia to trading firms. Even if the reliability is much worse than fiber.


Starlink latencies sadly aren't competitive due to the routing paths it uses. And there are currently no competitors to Starlink.


The routing paths traveling via ground stations, you mean? My understanding is that they were experimenting with improvements to this, they just haven't deployed anything yet.


A radio will beat Starlink on ping times. Even a simple ham bouncing a signal off the ionosphere can win out over an orbiting satellite, at least for the very small amounts of data needed for a trade order. The difficulty in such schemes is reliability, which can be hit-or-miss depending on a hundred factors.


No, even with proposed inter-satellite routing paths, they are too slow. The trading industry has very much done the math on this.

The comparison is against radio and hollow-core fiber, not conventional fiber.


Laser links between satellites have been active since late 2022, or was there some additional improvement you're referring to?


I haven't kept track of that, but there is no other improvement. Even with the straightest possible laser links in space, they are too slow.


> sats move too quickly, cannot be consistent

Satellites in geostationary orbit are a (very common) thing.


Geostationary is so much farther out than LEO, though, so latency is worse.


AU <-> South Africa & South America is way less than the distance to geostationary orbit.


Author here - Interesting. Someone on X also suggested this idea to me. Any good resources for how to accurately compute this?


The theoretical best latency would be something like great_circle_distance_between_regions / speed_of_light_in_fiber, both of which are pretty easy to find. The distance you can compute from the coordinates of each region pair, and the speed is a constant you can look up.
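
A minimal sketch, assuming ~0.7c in fiber (the region coordinates are approximate, just for illustration):

    from math import radians, sin, cos, asin, sqrt

    FIBER_KM_S = 0.7 * 299_792  # ~210,000 km/s in glass

    def great_circle_km(lat1, lon1, lat2, lon2, radius_km=6371.0):
        # Haversine distance between two (lat, lon) points given in degrees.
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        h = sin((lat2 - lat1) / 2) ** 2 \
            + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * radius_km * asin(sqrt(h))

    # e.g. us-east-1 (N. Virginia) to eu-west-1 (Ireland), rough coordinates
    d = great_circle_km(38.9, -77.0, 53.3, -6.3)
    print(f"{d:.0f} km -> best-case RTT ~{2 * d / FIBER_KM_S * 1000:.0f} ms")
    # -> ~5440 km, best-case RTT ~52 ms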


That's what we did as well, via Wolfram Alpha. I.e., we were too lazy to look everything up ourselves and just asked it straight up how long a round trip between two destinations via fiber would be. We checked one result and it was spot on. This was six years ago, though.


IIRC about 125 miles (~200 km) per ms



