In the systems programming languages of old, fork is just easier than threading.
Fork model:
Step 1: Write a program to accept a single connection to a single TCP socket, then handle the request.
Step 2: Judiciously place a fork() call at the time of the new connection coming into the socket.
Step 3: Add an "if" statement to wait for another request if you happen to be the parent process after the fork.
You're done!
You just wrote a program capable of handling thousands of concurrent requests, with none of the concurrency nightmares that keep sensible programmers up at night. Going from the simplest case to the finished version was a two-line code change.
If the new task can work in isolation, then yes, fork seems ideal. If the tasks need to interact, then threads seem (to me) more useful.
I've written web services backed by a database (as most web applications are), and I've often wished that the potentially many processes could instead be threads in a single process, so they could simply share an array of objects without the overhead of a database server.