Hacker News

> But this has actually proved to be very useful as it provided a standard medium of communication between programs that is both human readable and computer understandable. And ahead of its time since it automatically takes advantage of multiprocessor systems, without having to rewrite the individual components to be multi-threaded.

Except it is nearly unusable for network applications, because the error-handling model is broken (exit status? stderr? signals? good luck figuring out which process errored out in a long pipe chain) and it is almost impossible to get the parsing, escaping, interpolation, and command-line arguments right. People very quickly discovered that CGI Perl with system/backticks was an insecure and fragile way to write web applications, and moved to the AOLServer model of a single process that loads libraries.
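To illustrate the escaping problem: interpolating untrusted input into a command string (what Perl's backticks and one-argument system() effectively do) lets shell metacharacters in the input execute as code. A minimal sketch, with a hypothetical tainted value standing in for, say, a CGI query parameter:

```shell
#!/usr/bin/env bash
# Hypothetical untrusted input containing a shell metacharacter payload:
input='x; echo INJECTED'

# Naive string interpolation: the ';' splits the command and the
# attacker's payload runs as its own command.
bash -c "echo $input"              # prints "x", then "INJECTED"

# Passing the value as a single positional argument keeps it inert:
# it arrives as data ($1), never re-parsed by the shell.
bash -c 'echo "$1"' -- "$input"    # prints the literal string
```

The safe variant is the same discipline as exec-style APIs (execve, Perl's list-form system): arguments travel as an array, never re-parsed as shell syntax.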



It's true that error handling with (shell) pipes is not possible in a clean way in general. In shell, the best you can do is probably "set -o pipefail", but that's a bash/zsh/ksh feature rather than plain POSIX sh. Concurrency with IO on both sides is really hard to get right even in theory.
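A small sketch of what pipefail buys you, plus bash's PIPESTATUS array, which at least answers the "which process errored out" question from the parent comment:

```shell
#!/usr/bin/env bash
# By default a pipeline's exit status is that of the LAST command,
# so a failure earlier in the chain is silently swallowed:
false | cat
echo "default exit status: $?"     # 0 -- cat succeeded

# With pipefail, the pipeline fails if any stage fails:
set -o pipefail
false | cat
echo "pipefail exit status: $?"    # 1

# PIPESTATUS records every stage's status, identifying the culprit:
true | false | true
echo "per-stage: ${PIPESTATUS[@]}" # 0 1 0 -- stage 2 failed
```

Even this only reports statuses after the fact; it doesn't stop downstream stages from consuming partial output from a stage that later fails, which is the deeper problem with pipelines as an error-handling model.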

Text representation is a good idea regardless of whether you pipe or not.



