
Latency is additive, so all that copper coax and mux/demux sitting between a sizeable chunk of Americans and the rest of the internet means you're looking at a minimum roundtrip latency of around 30ms even if the server is in the same city. Most users are also on Wi-Fi, which adds an additional mux/demux + rebroadcast step on top of that. And most people do not have the latest CPU. Not to mention mobile users over LTE.
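
To make the "additive" part concrete, here's a rough back-of-the-envelope sketch in Python. Every per-hop number below is an illustrative assumption, not a measurement; the only point is that the contributions stack:

    # Illustrative per-hop round-trip contributions, in milliseconds.
    # These figures are assumptions for the sake of the example.
    hops_ms = {
        "Wi-Fi (mux/demux + rebroadcast)": 3,
        "DOCSIS scheduling over the coax plant": 15,
        "CMTS + ISP metro routing": 8,
        "peering + path to a same-city server": 4,
    }

    for hop, ms in hops_ms.items():
        print(f"{hop:40s} {ms:3d} ms")
    # ~30 ms of round trip before the server does any work at all
    print(f"{'minimum round trip':40s} {sum(hops_ms.values()):3d} ms")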

Sorry, but this is 100% a case of privileged developers thinking their compute infrastructure situation generalizes: it doesn't, and it's a mistake to take shortcuts that assume it does.


uh have you ever tried pinging a server in your same city? It's usually substantially <30ms. I'm currently staying at a really shitty hotel that has 5mbps wifi, not to mention I'm surrounded by other rooms, and I can still ping 8.8.8.8 in 20ms. From my home internet, which is /not/ fiber, it's 10ms.


If you've ever used speedtest.net, you've almost certainly been benchmarking against a server in your own city, or at least as close as your ISP's routing will allow. Ookla servers are often specifically optimized for, and sometimes hosted/owned by, ISPs to give the best possible speeds. Google's DNS servers use anycast magic to get similar results. Basically no service you actually use, outside of the very, very large providers, is likely to be anywhere near you, and you won't get that kind of latency even with a very good ISP and LAN.

10ms is a best case for DOCSIS 3.0/3.1: it means you have near-optimal routing and infrastructure between you and the node, or are using some other transport like ethernet that is then fed by fiber. I currently get 24ms to my local Ookla host a couple of miles away over a wired connection with a recent DOCSIS 3.1 modem. Hotel internet is likely to be backed by business fiber; they're likely just throttling you.

I worked for an ISP for several years; there's a huge range of service quality even within the same provider, zip code, and even the same location depending on time of day.
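
If you want to sanity-check numbers like these yourself without leaning on a speed-test site, one rough approach (my sketch, not something from this thread) is to time a TCP handshake from Python: connect() returns after roughly one round trip and needs no raw sockets. The target and port are assumptions here; 8.8.8.8 happens to accept TCP on 443 for DNS-over-HTTPS, so it makes a convenient probe.

    import socket
    import time

    # Time a TCP handshake as a rough stand-in for ping. It includes a
    # little overhead on the far end, so treat it as an upper bound on
    # the network round trip.
    def tcp_rtt_ms(host, port=443, samples=5):
        times = []
        for _ in range(samples):
            start = time.perf_counter()
            with socket.create_connection((host, port), timeout=2):
                pass
            times.append((time.perf_counter() - start) * 1000)
        return min(times)  # the minimum is closest to pure network latency

    if __name__ == "__main__":
        print(f"best of 5 to 8.8.8.8: {tcp_rtt_ms('8.8.8.8'):.1f} ms")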


If I'm able to ping 8.8.8.8 in 10ms, doesn't that mean my RTT to get out of the DOCSIS part of the network is 10ms, and that any latency beyond that will be the same regardless of DOCSIS/fiber?


Yes, the majority of that latency is likely coming from DOCSIS. You're likely in a major city, large enough that Google has an anycast DNS server nearby.


latency != throughput


This is why I use Plan 9's rc shell for a lot of my scripting needs. It's dramatically nicer to write, and even nicer to read.


I'll second this. If the hardware it runs on isn't literally dying, it doesn't have glaring security vulnerabilities (air-gapped systems especially get some slack here), and it's still doing its job well, it's fine to let it do its thing.

That being said, the reality is that requirements change. Load changes. The world around the software changes. Systems need to be resilient, yet flexible enough to be maintained but not replaced over decades.


The skills required to maintain old codebases atrophy, and we are barely training new people to do it, so the skills pool shrinks. That means the software doesn't get regular maintenance, and disaster response in situations like this is slow and expensive.

Whilst it would be a major upheaval to switch to a clean-room engineered implementation using 2025 best practices, it would at least increase the talent pool that can work on it effectively.

There does likely come a point where rebuilding it is cost-effective, both in reduced unplanned downtime and in reduced maintenance costs.


Eventually, yes. But considering that as recently as 2018 I worked at a place that still had COBOL systems from the '80s running in production (and I know it's not unique in this regard; I'm confident those systems are still there too), I think that time horizon can be long.


If someone was on the original team writing this software when they were in their mid-20s, then they are now in their 70s. If we don't start rebuilding some of this software soon, there won't be anyone alive who understands it intimately in the way only an original author could.


That implies that an engineer who isn't (one of) the original author(s) cannot learn a system, which I vehemently disagree with. New engineers can be trained up on old systems and languages. The trouble is few want to, because it typically ties their skillset heavily to a particular place of work, which is risky with current business culture.


Yes, they can be trained up, but a mechanic working on a car will never understand all of the undocumented design decisions that went into its production. They understand the what, but not necessarily all of the why. Not only that, but like you say, it isn't a popular path, meaning you aren't getting a broad (or deep) talent pool.


Welcome to the world of large legacy systems developed for and used by organizations that don't understand IT, and don't know that they don't understand IT.

The good news is there's always a contractor willing to promise the world and deliver something that doesn't work in 5-10 years. Your internal team, who could have finished the original job in a couple years if they'd been funded, will be the ones that end up making that delivered system actually work. But you'll tell everyone that it was the contractor's high quality output that did the trick, because saying they failed would hurt your career. In 15-30 years the system will get replaced and your successor will hear about how great <contractor> did the first time, and they'll get another shot at failure.


Exactly: we need to fund the people who have the know-how now to either document it fully so it can be replaced later, or to replace it now, before that knowledge from building and operating it for so long is lost forever.


Oracle really does seem poised to be the winner here.


It was a shrewd move by Oracle.


Notion has really great ideas, though; it's just so poorly implemented that it really hurts my desire to use it for anything unless I'm forced to.


It was such a breath of fresh air in the beginning, when it was simple, elegant, and focused. Shame they had to cram it to the gills with half-considered cruft.


Yes. Lots of (moderately) wealthy westerners. I personally would move there if I could.


The point is reducing that reliance on side channels, including the topical ban on trading.


tbh I always found it interesting that CGI was dropped as a well-supported technology from languages like Python. It was incredibly simple to implement and reason about (provided you actually understand HTTP; maybe that's the issue), and it scaled well beyond what most internal enterprise apps I was working on at the time needed.
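
For anyone who never used it: a CGI program is just an executable the web server runs once per request, with the request metadata passed in environment variables (and the body on stdin), and whatever it writes to stdout becoming the response. A minimal hand-rolled sketch, assuming a server configured to execute it (Apache with mod_cgi, or anything else that speaks CGI), and deliberately not using the stdlib cgi module that has since been removed:

    #!/usr/bin/env python3
    # Minimal CGI script: headers to stdout, blank line, then the body.
    import os
    import sys
    from urllib.parse import parse_qs

    method = os.environ.get("REQUEST_METHOD", "GET")
    query = parse_qs(os.environ.get("QUERY_STRING", ""))
    name = query.get("name", ["world"])[0]

    body = ""
    if method == "POST":
        length = int(os.environ.get("CONTENT_LENGTH") or 0)
        body = sys.stdin.read(length)

    sys.stdout.write("Content-Type: text/plain; charset=utf-8\r\n\r\n")
    sys.stdout.write(f"hello, {name}\n")
    if body:
        sys.stdout.write(f"you posted {len(body)} bytes\n")

One fresh process per request is exactly what made it so easy to reason about, and also what made it age poorly once process-spawn overhead became the bottleneck at scale.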


I'd be willing to bet this has more to do with more aerodynamic car designs than with fewer bugs in general.


I believe the same decrease is visible when driving older (less aerodynamic) cars, but I don’t have any studies on hand


...or just ask a bus driver, van driver, euro truck driver, etc.

They've all seen the decline too.


I've also added Janet support to nvim-paredit [0] and use a combination of that, conjure [1], and nvim-parinfer to get a really slick editing experience in neovim.

[0]: https://github.com/julienvincent/nvim-paredit

[1]: https://github.com/olical/conjure


