It is unnecessarily complex to begin with. On top of that, the maintainers are historically not the most open to criticism and aggressively push adoption. So much so that Gnome, for example, now has very strong dependencies on systemd, which makes it very difficult to adopt Gnome on non-systemd systems unless you want to throw a bunch of patches at it. This hard coupling alone is something that I wouldn't want to rely on, ever.
THIS. Also, what problem does it solve that RC scripts can't accomplish? They are much more readable and less complex. What is the benefit of all that added complexity? And even more to the point, what's the business case for it in a professional setting? I've been wondering that for a long time.
Barging in as a Linux guy interested in learning more about the BSDs, so please bear with me.
Something I love with systemd is how I can get very useful stats about a running process, e.g. uptime, cumulative disk & network I/O, current & peak memory usage, etc.
Also the process management (e.g. restart rules & dependency chain) is pretty nice as well.
Is that doable with RC (or other BSD-specific tooling) as well?
It's up to you to, say, check in your init script whether another service needs to start before yours.
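On FreeBSD, for instance, that ordering is declared with rcorder(8) keywords in the script header. A minimal sketch, with the service name and its dependency made up:

    #!/bin/sh
    # rcorder(8) reads these header lines to compute the start order:
    # PROVIDE: myservice
    # REQUIRE: NETWORKING postgresql
    # KEYWORD: shutdown

    . /etc/rc.subr

    name="myservice"
    rcvar="myservice_enable"
    command="/usr/local/bin/myservice"   # hypothetical daemon

    load_rc_config $name
    run_rc_command "$1"

That REQUIRE line is the whole dependency story: rcorder sorts the scripts so that anything listed there starts first.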
In terms of uptime or IO and stuff, those metrics are already available, be that via SNMP or other means. Say you start nginx under systemd: which network and disk usage does it report? Just the main process or all its forks? Same problem in RC.
But that is part of the point. Why in the ever-loving existence should an INIT system provide stats like disk usage? That is NOT what an init system is for.
If you need memory usage or IO usage or uptime, there are so many other tools already integrated into the system that the init system doesn't need to bother.
Init systems should only care about starting, stopping and restarting services. Period. The moment they do more than that, they failed at their core job.
This might come across stronger than intended, but it still holds true.
The BSDs are about "keep it simple, keep it single-purpose" to a degree I can live with. What you get, though, is outstanding documentation, and every component is easily understandable. A prime example is PF on OpenBSD/FreeBSD. That firewall config is just easy to grok, easy to read up on, and does 99.999% of what you ever need out of a firewall.
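To give a flavour of it, a minimal, hypothetical pf.conf (the interface name and ports are just examples):

    # hypothetical /etc/pf.conf
    ext_if = "em0"                   # example external interface

    set skip on lo                   # don't filter loopback
    block in all                     # default-deny inbound
    pass out on $ext_if              # allow outbound, stateful by default
    pass in on $ext_if proto tcp to port { 22 80 443 }   # ssh + web

That is close to a complete, working ruleset, which is kind of the point.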
> which network and disk usage does it report? Just the main process or all its forks? Same problem in RC.
Well, the main process and its whole hierarchy; that's what you would expect of an init system monitoring its services, right? And what's nice with systemd is that I can get that from a simple `systemctl status my-service`. Of course I could deploy a whole observability stack, but it's better if I can avoid it.
But there is no need to be defensive: if RC can do that, that's nice; if it can't, well, too bad.
> there are so many other tools already integrated into the system that the init system doesn't need to bother.
That's what I'd love to hear about: what are the equivalents in the BSD world?
Best practice would be to pack the service into a jail and then use `rctl` to monitor its resource usage. You could also monitor the jail's VNET interface for network stats.
Or you just grab the PID and get the stats through that. A bit more manual, but composable.
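Roughly along these lines, assuming a hypothetical jail named "web" and an example PID (rctl needs kern.racct.enable=1 set in /boot/loader.conf):

    # resource usage of the jail (CPU time, memory, open files, ...)
    rctl -u jail:web

    # rctl can also enforce limits, e.g. cap the jail at 1 GB of RAM
    rctl -a jail:web:memoryuse:deny=1g

    # or go by PID: memory and runtime of a single process
    ps -o pid,rss,vsz,time,etime -p 1234

    # per-process resource usage as tracked by the kernel
    procstat -r 1234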
Spin up a VM, be it locally or in the cloud, and throw OpenBSD or FreeBSD on it. If you are into mail servers, static HTTP etc., then OpenBSD might be your jam. Or try FreeBSD and Jails. Jails are absolutely fantastic.
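If you want a taste of jails, a minimal /etc/jail.conf looks something like this (the path, hostname and address are made up):

    # hypothetical /etc/jail.conf
    web {
        path = "/usr/local/jails/web";
        host.hostname = "web.example.org";
        ip4.addr = "192.0.2.10";
        exec.start = "/bin/sh /etc/rc";
        exec.stop = "/bin/sh /etc/rc.shutdown";
        mount.devfs;
    }

Populate the path with a base system (e.g. by extracting base.txz there), enable jails in rc.conf, and something like `service jail start web` brings it up.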
Ditch the LLMs (not insinuating that you use them, but just in case), try to use the Handbooks and the man pages.
If you ever feel that you have so many interdependent services that you need something more complex than RC, then you might have an actual architectural problem, to be honest.
>If you ever feel that you have so many interdependent services that you need something more complex than RC, then you might have an actual architectural problem, to be honest.
Does it autodiscover the dependency chain or shit like that? If you've got 500+ services that need to be orchestrated, you honestly have a very different problem.
I love the simplicity of RC scripts. Easy to understand, easy to debug, it just fucking works.
Simplicity is king, because it's understandable. A behemoth like systemd feels like it requires a PhD.
Systemd also runs 100% against the Unix/Linux philosophy of composability and single purpose.
If you need to make sure that the network stack starts after the USB stack, and that starts after the PCIe stack, and that starts after the …, then systemd is considerably easier than SysV init.
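The ordering bit really is just a couple of unit directives; a hypothetical unit, with all names made up:

    # /etc/systemd/system/my-app.service (hypothetical)
    [Unit]
    Description=Example app that needs the network and a database
    Requires=postgresql.service
    After=network-online.target postgresql.service
    Wants=network-online.target

    [Service]
    ExecStart=/usr/local/bin/my-app
    Restart=on-failure

    [Install]
    WantedBy=multi-user.target

The same file also carries the restart policy mentioned upthread.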
You’re handwaving away something that is pretty important. You can say that having 500 services is its own problem but it’s also a reality, even on desktop operating systems.
Count how many services you have that are not already defined, and ordered, by the system itself. That is, exclude core system services; the system itself should already make sure that USB starts after PCIe and all that.
So go and count all the services that are not part of the base install and tell me how many you have.
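On FreeBSD a rough count is a one-liner, assuming the stock service(8) flags (third-party rc scripts live under /usr/local/etc/rc.d):

    # everything enabled for this boot, in start order
    service -e

    # rough count of enabled services that did not come with the base system
    service -e | grep -c '^/usr/local/etc/rc.d/'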
Addendum to my other reply: it comes down to the "not invented here" problem which always invites weirdly complex solutions to problems that don't exist.
Linux is "just" the kernel and every distro invites new solutions to perceived core problems whereas the BSDs have a whole base system that comes from one source, reducing the chance of a systemd popping up there. Both approaches have their ups and downs.
With both AoC and Project Euler I like seeing how fast I can get my solution to run with SBCL. Finding all palindromic primes below a million in less than a second is pretty neat.
SBCL is an absolute beast. I think it's only surpassed by LispWorks, but SBCL is a miracle machine. Even without type annotations it usually performs well enough.
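For a sense of what's being timed, a quick-and-dirty sketch in Common Lisp (plain trial division, no type declarations):

    (defun palindrome-p (n)
      (let ((s (write-to-string n)))
        (string= s (reverse s))))

    (defun prime-p (n)
      (and (> n 1)
           (loop for d from 2 to (isqrt n)
                 never (zerop (mod n d)))))

    ;; count all palindromic primes below one million
    (time (loop for n from 2 below 1000000
                when (and (palindrome-p n) (prime-p n))
                  count n))

Run under SBCL, `time` reports real time, run time and bytes consed for the whole loop.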
That is the opposite of good. If only big companies can afford bespoke hardware, they literally become a monopoly. At-home LLMs and other uses become either prohibitively expensive in purchase and running costs, or you have to deal with a low-quality model. Just admit that it's environmentally fucked and not open in any sense. This is not something that the community at large should strive for.
Problem is that the "good" developers have built up a lot of experience prior to LLMs and therefore can distinguish good from bad code and know the limitations. But if everyone is supposed to use and embrace AI, how will new developers build up this intuition?
This intuition comes with experience, LLMs or not. Because it's not intuition, it's being able to understand the problem and the solution as code better than the LLMs do. The problem is: will businesses justify 5+ years of training so that inexperienced developers can reach that point? Few companies will be able to afford it.
And if someone only learns coding from LLMs do you truly believe that they understand the code they are working with? I highly doubt that.
And the experience comes from actually stepping through a debugger and from doing research, not from whatever hallucination some LLM cooked up. They can't even properly construct some CLI arguments and just make up flags that aren't even in the manpage.
Companies are short-sighted, then. I'd rather build up a good engineer and spend time on it than just sit them down in front of an LLM.
You could send a good number of small probes and basically have them act as the big antenna dish. As long as you cover the bases, you can have layers of "big antenna dishes" arranged like onion layers.
On the other hand, though, updates on OpenBSD are the most painless updates I have ever done. I am more concerned about its use of UFS instead of something more robust for drives.
I'm grossly generalizing here, but OpenBSD boxes seem to be commonly used for the sorts of things that don't write a lot of data to local drives, except maybe logfiles. You can obviously use it for fileservers and such, but I don't recall ever seeing that in the wild. So in that situation, UFS is fine.
(IMO it's fine for heavier-write cases, too. It's just especially alright for the common deployment case where it's practically read-only anyway.)
I've used it as a mail server, a web server, and a database (postgres) server. It's also my main desktop OS. Did/does fine, but I never really stressed it. I would certainly welcome a more capable filesystem option, as well as something like logical volumes, but I can't say that ufs has ever failed me.
You'll definitely want to have it on a UPS to avoid the potentially long, and sometimes manual, fsck runs after a power failure. And of course, backups for anything important.
> updates on OpenBSD are the most painless updates I have ever done
I see we have a post-syspatch (6.1 - 2017), post-sysupgrade (6.6 - 2019) OpenBSD user in our midst. ;D
You are positively a newbie in the OpenBSD world!
Some of us are old enough to remember when OpenBSD updates were a complete pain in the ass, involving downloading shit to /usr/src and compiling it yourself!
According to Wikipedia, Debian has had apt since 1998.
My point is that OpenBSD didn't have binary updates until well into the 2000s, as mentioned above: initially in 2017 with syspatch, and then finally full coverage in 2019 when sysupgrade came along.
As you can see in some old OpenBSD mailing list posts[1], there was a high degree of resistance to the very idea of binary updates, with people even being called trolls when they brought up the subject[2] or being told they "don't understand the philosophy of the system"[3].
I just felt it was an important point of clarification on your original post. Yes, I agree, OpenBSD updates are painless ... now, today. But until quite recently they were far from painless.
I mean, I definitely love that OpenBSD is the "reference" implementation of PF, but its network performance lags a bit, and UFS is quite outdated and prone to errors on outages. ZFS has the upper hand there, as does FreeBSD's network performance.
And adjusting from the reference PF to the minor differences in FreeBSD's version is not too bad.