"... an ongoing effort to obsolete the old, cross-Unix standard network administration and diagnosis commands of ifconfig, netstat and the like and replace them with fresh new Linux specific things like ss and the ip suite."
Old does not mean bad, but Linux has chosen that path as it moves further away from UNIX, and from being a Unix-like system, in its pursuit of becoming Windows.
A bird in the hand is worth two in the bush. The article speaks of old software but, instead of fixing what it has and being standards compliant, it chooses to break away and do its own thing along with the new bugs and other issues which will surely arise.
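For what it's worth, the commands in question map onto each other fairly directly; a rough (and not exhaustive) cheat sheet, using the standard iproute2 and ss subcommands as documented in their manpages:

```shell
# Legacy net-tools command on the right, iproute2/ss replacement on the left
ip addr show     # replaces: ifconfig -a
ip route show    # replaces: netstat -rn   (or: route -n)
ip neigh show    # replaces: arp -an
ss -tln          # replaces: netstat -tln  (listening TCP sockets)
ss -s            # replaces: netstat -s    (roughly; summary statistics)
```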
> "instead of fixing what it has and being standards compliant, it chooses to break away and do its own thing along with the new bugs and other issues which will surely arise."
Well, people are still free to maintain and improve those older tools and bring them up to par with newer standards, if they want to.
Others would argue it's time to start phasing out code and so-called "standards" we've had since the early 90s, in the spirit of refactoring various elements of the ecosystem. As with most such efforts, lots of code will be thrown away, and some of it will be re-used and re-purposed. I'd argue that, in the long run, it's probably a necessary intervention.
As in everything else important, there is a balance to be struck between stagnation and upheaval.
When choosing platforms, how communities weigh this balance is a very important consideration. You want to associate with people and groups who value what is essentially a "reasonable breeze" mode that keeps life flowing through extant systems. "Natural disasters" may sometimes be justified to help clear out the way-too-old (e.g. systemd or SMF), but they should be rare; systems can only tolerate major disruptions like that once or twice a decade.
Linux has an "anything goes" approach, which is usually more symptomatic of the age and maturity of the crowd it draws than of a reasoned decision about where to fall on this balance. That's not necessarily bad, but it makes for more of an experimental atmosphere than something reliable, trusted, and stable.
They don't need to change the frontend to change the backend. There's a certain cognitive load to having to re-learn basic commands, and a certain annoyance when those changes are of little benefit to my day-to-day sysadmin duties.
"Others" would be right, but the present team doing the refactoring is actively, aggressively incompetent: it doesn't listen to even constructive feedback, closes critical bugs with snarky comments, refuses outside contributions, claims outside contributions are literally impossible, and absorbs independent functionality in the name of empire building.
So the notional idea of 'refactoring' being better -- yes! But with the constraint being that the people doing the refactoring have to be at least as wise, tasteful, mature, sensible, and thoughtful as the people who came before.
Do I have to state the obvious? This is an open source effort. Not all attempts at persuasion bear fruit. If you feel you are being ignored (by what appears to be a group of maintainers you absolutely despise), I guess the other option would be to fork the repo. This would not be the first time in history that people have split code over disagreements.
Unfortunately, Red Hat have politically suborned all of the major distributions, and have rapidly and effectively created dependencies between their solution and other codebases in a transparent effort to create and maintain lock-in and to build barriers against the 'fork' strategy.
What are you talking about? systemd (PID 1) just does socket activation; it has no high-level functionality itself. Yes, there are related programs like systemd-timesyncd, but they are completely separate processes, and they can be managed separately or even omitted altogether.
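As a concrete illustration of how little PID 1 itself does here, socket activation is just a pair of unit files. A minimal hypothetical example (port 7777 and the unit names are invented; the directives are the stock ones from systemd.socket/systemd.service), in which systemd owns the listening socket and only spawns an ordinary separate process per connection:

```ini
# echo.socket — systemd (PID 1) holds the listening socket itself
[Unit]
Description=Per-connection echo socket (example)

[Socket]
ListenStream=7777
Accept=yes

[Install]
WantedBy=sockets.target

# echo@.service (a second file) — spawned as a separate process per connection
[Unit]
Description=Per-connection echo service (example)

[Service]
ExecStart=/bin/cat
StandardInput=socket
```

With Accept=yes, an instance of echo@.service is started for each incoming connection, and /bin/cat simply echoes the socket back — none of this logic lives in PID 1 beyond accept() and fork/exec.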
Now, traditional BSD inetd had discard, chargen, echo, and similar protocol support built in, which means it was actually more integrated than systemd.
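Those protocols were compiled into the inetd binary itself, enabled with the special `internal` keyword in inetd.conf, so no external server program was spawned at all:

```
# /etc/inetd.conf excerpt — "internal" marks services implemented
# inside the inetd binary itself rather than by an external program
echo     stream  tcp  nowait  root  internal
discard  stream  tcp  nowait  root  internal
chargen  stream  tcp  nowait  root  internal
daytime  stream  tcp  nowait  root  internal
```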
I would like to argue with you, but then again, systemd exists (and is thriving) while being the most Windows-like software present by default on all popular Linux distributions.
EDIT: I guess I have to clarify this statement a little. In Windows, "logging" is the job of the event manager, which journalctl mirrors. To be more precise, they seem to focus on desktop operating systems; an example is the unit file format itself: "The syntax is inspired by XDG Desktop Entry Specification .desktop files, which are in turn inspired by Microsoft Windows .ini files." Or maybe you'd prefer to talk about firewalld and its "Zones", which are clearly inspired by desktop (more specifically, laptop) use.
There was something else that I'm forgetting, though: a term that was only ever used by Microsoft to describe something, and systemd adopted exactly the same terminology. It was very esoteric terminology, though; I'm trying to find a reference.
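For comparison, the shared INI lineage is easy to see side by side. A minimal .desktop entry and a systemd unit fragment (both hypothetical examples, with invented names) use the same [Section] / Key=Value shape as classic Windows .ini files:

```ini
# example.desktop (XDG Desktop Entry)
[Desktop Entry]
Type=Application
Name=Example
Exec=/usr/bin/example

# example.service (systemd unit, a second file) — same syntax family
[Unit]
Description=Example service

[Service]
ExecStart=/usr/bin/example
```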