fork() is also used for daemonization and for privilege separation, two tasks where posix_spawn() cannot be used. I suppose daemonization can be seen as something of the past, but privilege separation is not. On Linux, privileges are attached to a thread, so it should be possible to spawn a new thread instead of a new process. However, a privileged thread sharing the same address space as an unprivileged one doesn't seem like a good idea.
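For reference, the classic fork()-based privilege separation pattern looks roughly like this (a minimal sketch, not any particular program's code; the uid/gid values and the socketpair protocol are placeholders):

    /* Sketch of fork()-based privilege separation: a root process forks,
     * the child irreversibly drops to an unprivileged uid/gid before
     * touching untrusted input, and the two halves talk over a socketpair. */
    #include <sys/types.h>
    #include <sys/socket.h>
    #include <unistd.h>
    #include <stdlib.h>
    #include <grp.h>

    #define UNPRIV_UID 65534   /* e.g. "nobody"; placeholder */
    #define UNPRIV_GID 65534

    int main(void) {
        int sv[2];
        if (socketpair(AF_UNIX, SOCK_STREAM, 0, sv) < 0)
            exit(1);

        pid_t pid = fork();
        if (pid == 0) {
            /* Child: drop privileges, then do the risky parsing work.
             * Order matters: supplementary groups, then gid, then uid. */
            close(sv[0]);
            if (setgroups(0, NULL) < 0 ||
                setgid(UNPRIV_GID) < 0 ||
                setuid(UNPRIV_UID) < 0)
                _exit(1);
            /* ... read requests from sv[1], handle untrusted input ... */
            _exit(0);
        }
        /* Parent: keeps its privileges and performs only the few
         * operations that need them, on the child's behalf via sv[0]. */
        close(sv[1]);
        /* ... */
        return 0;
    }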

The paper also mentions the use case of multiprocess servers, which rely heavily on fork(), but dismisses it on the grounds that they could be implemented with threads. With threads, though, a crash in a worker brings down the whole application, whereas a crashed worker process can simply be restarted (see the sketch below).
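That restart-on-crash property is cheap to get with fork(). A minimal sketch of a pre-forked worker pool, where worker_main() stands in for the real request loop:

    /* The master fork()s N workers and replaces any worker that dies,
     * so one crash never takes down the whole server. */
    #include <sys/types.h>
    #include <sys/wait.h>
    #include <unistd.h>

    #define NWORKERS 4

    static void worker_main(void) {
        for (;;) {
            /* accept() and handle requests here (placeholder) */
            pause();
        }
    }

    static pid_t spawn_worker(void) {
        pid_t pid = fork();
        if (pid == 0) {            /* child */
            worker_main();
            _exit(0);
        }
        return pid;                /* parent: worker pid, or -1 on error */
    }

    int main(void) {
        for (int i = 0; i < NWORKERS; i++)
            spawn_worker();

        for (;;) {
            /* Reap any worker that exited or crashed, then replace it. */
            pid_t dead = wait(NULL);
            if (dead > 0)
                spawn_worker();
        }
    }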

A concrete example of removing fork() from an actual program would help. For example, how is nginx implemented on Windows?



I can’t answer for Nginx, but normally on Windows if you want “worker processes” you just start N of them and have them read work from a shared-memory queue. That is, workers live longer than the tasks they perform. If one crashes, a new one is spawned. This does seem like a more sensible way of doing things than forking, tbh. It isolates work in processes but doesn’t pay for process creation per request.
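A minimal sketch of that supervision loop in Win32 terms ("worker.exe" is a placeholder, and the shared-memory queue itself is left out):

    /* Start N worker processes and respawn any that exits. A process
     * handle becomes signaled when the process terminates, so a crash
     * wakes the supervisor. */
    #include <windows.h>
    #include <stdio.h>

    #define NWORKERS 4

    static HANDLE spawn_worker(void) {
        STARTUPINFO si;
        PROCESS_INFORMATION pi;
        char cmdline[] = "worker.exe";   /* CreateProcess may modify this */

        ZeroMemory(&si, sizeof si);
        si.cb = sizeof si;
        if (!CreateProcess(NULL, cmdline, NULL, NULL, FALSE,
                           0, NULL, NULL, &si, &pi)) {
            fprintf(stderr, "CreateProcess failed: %lu\n", GetLastError());
            return NULL;
        }
        CloseHandle(pi.hThread);
        return pi.hProcess;
    }

    int main(void) {
        HANDLE workers[NWORKERS];
        for (int i = 0; i < NWORKERS; i++)
            if (!(workers[i] = spawn_worker()))
                return 1;

        for (;;) {
            /* Wait for any worker to exit, then refill that slot. */
            DWORD r = WaitForMultipleObjects(NWORKERS, workers,
                                             FALSE, INFINITE);
            if (r == WAIT_FAILED)
                return 1;
            int i = (int)(r - WAIT_OBJECT_0);
            CloseHandle(workers[i]);
            if (!(workers[i] = spawn_worker()))
                return 1;
        }
    }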


Is recovery of a shared memory queue after one of the workers crashes even possible, in general? (what if the worker crashed before releasing a lock?)


I’m not sure how this is usually done, but I’d avoid locks at nearly any cost and try to use a lock-free SPMC queue such as https://github.com/tudinfse/FFQ
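The core idea, stripped down (a sketch with C11 atomics, deliberately simplified to a fixed buffer with no wraparound, so it only shows the claim/publish ordering, not a production queue like FFQ):

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stddef.h>

    #define QCAP 1024

    typedef struct {
        int slots[QCAP];
        atomic_size_t tail;  /* next free slot; single producer only */
        atomic_size_t head;  /* next slot to consume; claimed via CAS */
    } spmc_t;

    /* Producer: write the slot first, then publish it with a release
     * store so consumers see the data before they see the new tail. */
    static bool spmc_push(spmc_t *q, int v) {
        size_t t = atomic_load_explicit(&q->tail, memory_order_relaxed);
        if (t == QCAP) return false;   /* sketch: buffer exhausted */
        q->slots[t] = v;
        atomic_store_explicit(&q->tail, t + 1, memory_order_release);
        return true;
    }

    /* Any consumer: claim an index with CAS, then read it. No lock is
     * ever held, so a consumer crashing mid-pop cannot wedge the queue. */
    static bool spmc_pop(spmc_t *q, int *out) {
        size_t h = atomic_load_explicit(&q->head, memory_order_relaxed);
        for (;;) {
            size_t t = atomic_load_explicit(&q->tail, memory_order_acquire);
            if (h >= t) return false;  /* empty */
            if (atomic_compare_exchange_weak_explicit(
                    &q->head, &h, h + 1,
                    memory_order_acq_rel, memory_order_relaxed)) {
                *out = q->slots[h];
                return true;
            }
            /* CAS failure reloaded h with the current head; retry. */
        }
    }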


I may be strange, but that's the way I've always used fork() as well. It's one of the reasons why named pipes exist (or at least that's what I've always thought).


"If one crashes, a new one is spawned."

I suppose that makes sense on an OS on which crashing is expected behaviour, though some people would want to know what bug caused the crash and whether that bug has security implications.


Crashing is expected behaviour on Linux as well; you can enable coredumps or check an application's log if you want to know why.
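Enabling coredumps from inside a process is a one-liner (sketch; the shell equivalent is ulimit -c unlimited, and where the dump actually lands depends on the kernel.core_pattern setting):

    #include <sys/resource.h>

    /* Raise the core-file size limit so a crash leaves a dump behind. */
    int enable_coredumps(void) {
        struct rlimit rl = { RLIM_INFINITY, RLIM_INFINITY };
        return setrlimit(RLIMIT_CORE, &rl);
    }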


The Linux kernel doesn't crash much, unless you have dodgy drivers or dodgy hardware. Whether your userland programs crash depends on what you're running. I don't expect to see sshd crashing, for example. It's true that almost any program can exit suddenly if the system runs out of memory, and to an ordinary user that looks like a crash, though it's really a very different thing.


If you weren't talking about userland crashes, then your crack about "an OS on which crashing is expected behavior" makes no sense.


The comment was not meant to be taken all that seriously, of course, but an OS is more than just the kernel, and I do tend to disapprove of brushing a crash under the carpet.

System runs out of memory, various processes get terminated, and the easiest way to get it back into a good state is a restart: not that worrying, but do you have a memory leak? Some process segfaults with 54584554454d4f53 in the PC: should be investigated, not glossed over.


There is, of course, absolutely no difference in how errors are handled in this case vs. with forking. That said, a process that handles one million requests is much more likely to crash than a process that handles one request (regardless of OS).


posix_spawn() attributes can do a lot of this, and a helper program can do much if not all of the rest.
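For instance, here's roughly what the attribute mechanism buys you: a sketch that spawns a child with a reset signal mask, its own process group, and stdout redirected to a file, all without fork()+exec() (the command and file name are just examples):

    #include <spawn.h>
    #include <signal.h>
    #include <stdio.h>
    #include <fcntl.h>
    #include <sys/wait.h>

    extern char **environ;

    int main(void) {
        posix_spawnattr_t attr;
        posix_spawn_file_actions_t fa;
        sigset_t empty;
        pid_t pid;
        char *argv[] = { "ls", "-l", NULL };

        posix_spawnattr_init(&attr);
        sigemptyset(&empty);
        posix_spawnattr_setsigmask(&attr, &empty);   /* clean signal mask */
        posix_spawnattr_setpgroup(&attr, 0);         /* new process group */
        posix_spawnattr_setflags(&attr,
            POSIX_SPAWN_SETSIGMASK | POSIX_SPAWN_SETPGROUP);

        /* Redirect the child's stdout (fd 1) without touching our own. */
        posix_spawn_file_actions_init(&fa);
        posix_spawn_file_actions_addopen(&fa, 1, "out.txt",
                                         O_WRONLY | O_CREAT | O_TRUNC, 0644);

        if (posix_spawnp(&pid, "ls", &fa, &attr, argv, environ) != 0) {
            perror("posix_spawnp");
            return 1;
        }
        waitpid(pid, NULL, 0);

        posix_spawn_file_actions_destroy(&fa);
        posix_spawnattr_destroy(&attr);
        return 0;
    }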

Removing fork() will take a long, long time. Every popular use case needs an alternative that doesn't suck.

But then again, fork() is kinda awful[0].

[0] https://gist.github.com/nicowilliams/a8a07b0fc75df05f684c23c...



