Hacker News

These threads are never productive. Running rm -rf / is going to offer multiple interesting ways to make your life miserable, eg: a mounted FUSE filesystem, an NFS folder pointed somewhere important, Samba mounts from your network automatically connected from your desktop, etc. I wouldn't be surprised if you could nuke firmware off a device by deleting the appropriate file in /sys/.

Systemd hate is en vogue these days, so it is an easy and common target. Why no invective towards the kernel, which actually implements EFI-as-a-filesystem?



Sometimes people run rm -rf / just for fun before reformatting a system, just to see what happens. Given that its purpose in life is to delete files, it stands to reason that running this command on a system which contains no important data is OK. A default configuration which makes this command destroy hardware is not reasonable.


rm -rf / probably doesn't do what you think it does. The coreutils version of rm includes --preserve-root[1], which is the default[2].

It also supports --one-file-system, which would prevent this and a host of other problems as well. That said, I don't really see a problem with Lennart's response. It's basically, "we should take steps to make this hard to do, but root is capable of doing anything, so don't expect it to be foolproof."

1: https://www.gnu.org/software/coreutils/manual/html_node/rm-i...

2: https://www.gnu.org/software/coreutils/manual/html_node/Trea...
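For reference, coreutils rm refuses to recurse on "/" before touching anything, so the default can be observed safely (a sketch; assumes GNU coreutils — other rm implementations may lack these flags and will simply error out on them):

```shell
# --preserve-root is the coreutils default; spelling it out means a
# non-coreutils rm fails on the unknown option instead of deleting anything.
rm -rf --preserve-root / 2>/dev/null || echo "rm refused to operate on /"

# --one-file-system never crosses mount points, so it would stop at /sys,
# /boot/efi, NFS mounts, etc. (the path below is a made-up scratch directory):
rm -rf --one-file-system /tmp/scratch-dir
```

Neither flag is portable across rm implementations, so neither should be relied on as the only safeguard.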


Is there some response besides the one linked here? Because all I see here is basically saying, "We're not going to change anything, this is not a problem, remount it yourself if you don't like it."


He says "The ability to hose a system is certainly reason enough to make sure it's well protected and only writable to root." That looks like agreement that something needs to be done, to me. I took the followup comment to be a useful tip on how to mitigate the problem until then.


Isn't it already only writeable by root? The request is to make it not even writeable by root without taking some additional action to make it writeable first. If I understand correctly, that bit you quote just describes the current situation.
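The "remount it yourself" mitigation amounts to keeping efivarfs read-only except during deliberate changes. A rough sketch (requires root; assumes the kernel mounts efivarfs at the usual path):

```shell
# Make the EFI variable filesystem read-only so stray deletions can't reach it:
mount -o remount,ro /sys/firmware/efi/efivars

# When a change is actually intended, open a brief read-write window:
mount -o remount,rw /sys/firmware/efi/efivars
# ... make the intended change here (e.g. via efibootmgr) ...
mount -o remount,ro /sys/firmware/efi/efivars
```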


I take the "and" in his reply to imply he agrees something additional should be done. Whether his ultimate response for how to do that is adequate is unknown, since he didn't really elaborate on exactly what he thinks should be done. It could be he meant it to signify agreement with the suggested course of action in addition to agreement that there's a problem to be fixed.

In any case, the only thing clear to me from his statements is that he agreed that there was a problem in need of attention, which makes the response here somewhat baffling to me (although not as baffling as it should be. It's fairly easy to see how a lot of the animosity comes from feelings about systemd in general and Lennart in particular, especially since some people state as much, as if that has any bearing on his response in this instance).


There are a variety of ways root can hose a system and many are recoverable without replacing hardware. Hard bricking a motherboard is a whole different league.


It's funny you mentioned that rm's purpose in life is to delete files. I'm reminded of a common saying in the unix world.

Everything is a file.


This is a good example of why you shouldn't take that saying too far.


The issue here is actually bricking the device. It's not your standard "do we let people hang themselves or limit the user's capabilities" debate.

A filesystem is a perfectly reasonable way to implement access to the EFI vars.


> Systemd hate is en vogue these days

To be fair, Lennart Poettering could probably end war and famine forever in a single day, and some people would still find something wrong with that.

On the other hand a certain amount of skepticism and criticism is very much in order. (I am saddened, though, by the way online discussions so easily deteriorate into name-calling and bitter ranting.)

Just to be clear, I was highly skeptical of systemd initially; now I have two systems running Debian Jessie, and quite honestly I haven't noticed much of a difference one way or the other. So at least now I am more confident that we are not all going to die because of systemd. On the other hand, I wish there had been more initial coordination / portability work to make sure stuff like GNOME keeps working smoothly on *BSD as well.


On the other hand, I was happy and welcoming of systemd, then I started to experience way too many breakages due to the attitude of the devs.

One time I had to spend 2 days getting to a remote server room and back because systemd suddenly decided that a device listed in fstab but not present at boot should halt the boot with an error, and at the same time the emergency shell was broken, stuck in a loop asking for credentials. On a Debian server, in production.

Ok systemd works most of the time but after experiencing a few of these breakages I've become wary of anything systemd. Whatever it brings to the table is not worth the wasted time and headaches it causes.
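For what it's worth, that particular failure mode can be avoided per entry in /etc/fstab, since systemd treats every entry as a hard boot requirement unless told otherwise (a sketch; the device and mount point are invented):

```
# "nofail": do not fail the boot if this device is absent;
# "x-systemd.device-timeout": wait at most 10s for it instead of the default 90s.
/dev/sdb1  /mnt/data  ext4  defaults,nofail,x-systemd.device-timeout=10s  0  2
```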


Oh my, that is bad indeed.

Apart from the two-item sample of personal experience I am kind of torn about systemd - some of the problems it attempts to address are real, and addressing these in general seems like a good idea. The way it does so, however, sometimes (I am being deliberately vague here) brings along some problems of its own.[1]

And the further systemd adoption and systemd's mission creep go along, the harder it becomes to backtrack and replace it in case somebody comes up with a better solution.

[1] Like I said, I have not experienced any of these problems myself, but I have read a couple of reports from people that had problems with systemd that were definitely not just aesthetic.


> I have not experienced any of these problems myself

That's the problem any time[1] a system is designed around ideological purity.

Someone invents something clever that works most of the time for most people (with varying definitions of "most"), so they try to apply it everywhere. As no idea can account for everything[2], sooner or later the inflexibility of the theory meets the variation of reality and some type of drama happens.

Usually it's better to avoid anything that exhibits that kind of inflexibility and hubris. Systems that build in ways of handling the unexpected, or of patching around their own weaknesses, are necessary; otherwise drama is inevitable.

[1] Not just computer systems - this is true most of the time humans build systems. We see the same problems in religion, politics, and social constructs. It's such a common behavior, I suspect there may be an evolutionary basis for it (using a single common rule for many situations requires less energy).

[2] Even the physicists are still working on that problem.


Agreed. Which is why I said, "On the other hand a certain amount of skepticism and criticism is very much in order."

The problems systemd tried to address are real, and some of the ideas behind it are appealing. But the way it attempts to replace several important pieces of the system at once, and the way it is being forced on people in a "eat it or leave it"-style feels uncomfortable.

Part of what made Unix what it is today is the idea to build a system that might be very far from perfect but that is easy to improve in small, incremental steps so one quickly gets a feeling for what works well and what does not.


Frankly, I feel that way too many sub-projects within systemd are at this point "I can rewrite this faster than I can get patches past the maintainers". Often with the "hilarious" outcome that a problem that was solved a decade ago crops up again in the new implementation within systemd.


Out of curiosity, what do you consider to be real problems that systemd is trying to address? (Honest question).


For one thing, a method of service management that does not just fire off commands but monitors services and restarts them if they fail. Also, doing this with an idea of the dependencies between services.

Event-based service management is interesting, too, e.g. shutting down a network service when the machine is disconnected from the network and restarting it when a connection becomes available again (think NTP or DHCP clients).

Once you have an idea of how services depend on each other, you get the ability to start services in parallel for free (whether that is so useful or even a good idea is another question).

Given the fact that systemd is hardly the first attempt to solve these issues (think SMF on Solaris, launchd on OS X, or the couple of attempts on GNU/Linux), I think a lot of people have felt the itch to improve on the classical SysV init.
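These features correspond directly to unit-file directives; a minimal sketch (the service name and binary path are invented):

```
[Unit]
Description=Example daemon
# Dependency handling: order this unit after the network is actually up.
Wants=network-online.target
After=network-online.target

[Service]
ExecStart=/usr/local/bin/exampled
# Supervision: restart the process if it exits with an error.
Restart=on-failure
RestartSec=5s

[Install]
WantedBy=multi-user.target
```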


Because everything is a file in unix-derivative land. That is commonly accepted practice, at least, but optimizing for the common case here is the right thing to do. How often do people need to manage EFI things through the filesystem, and how much of an inconvenience is it to have the initial mount be read-only? The answer is obvious to me, because I know people run `rm -rf` often enough that having a safeguard in place is the right thing to do.


A file maybe, a filesystem no.

You can't 'ls' your network connections.

And for tricky APIs that do not match the filesystem view we have the fcntls.


> You can't 'ls' your network connections.

You sure can in Plan 9. Unfortunately (?) Plan 9 is more Unix than Unix itself.


With fuse you certainly can.


Yeah. Never coded a driver, did you?

It is as smart as saying that with a heavy enough hammer you can make a cube fit in a triangle. It is possible, but it defeats the purpose of a consistent API.


Yeah, but you can normally recover from that without a soldering iron. Badly implemented EFI is out there.


This is not what actually happened when this was reported on arch linux forums: https://bbs.archlinux.org/viewtopic.php?id=207549



