When you're not writing shell, just use the tools the language gives you.
For that matter, I think a shell script is often cleaner than a Python script for devops work, but I don't think the composability of Unix tools is that much of an advantage compared to the sheer number of Python libraries out there.
When I use shell it's often exactly because I want to construct pipelines of processes and FIFOs and do all the other things that shell does very well and has done well for decades.
I'm likely to be using Python programs and other programs in those shell scripts. The beauty of shell is that it makes it so easy to compose programs written in different languages.
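A toy sketch of that composition (the fruit data and the inline Python filter are invented for illustration): a C utility, a Python one-liner, and another C utility cooperate over plain text with no glue code.

```shell
# grep (written in C), an inline Python filter, and sort (C again)
# compose over stdin/stdout; each stage neither knows nor cares
# what language its neighbors are written in.
printf 'apple 3\nbanana 1\napple 2\n' \
  | grep '^apple' \
  | python3 -c '
import sys
for line in sys.stdin:
    name, n = line.split()
    print(name, int(n) * 10)' \
  | sort -k2 -n
# prints: apple 20
#         apple 30
```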
Everything in a computer is a stream of bytes... My shell scripts often use tools like jq and jshon to deal with JSON structures, etc. File hierarchies can also be very pleasant data structures.
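For example (a minimal sketch, assuming jq is installed; the JSON document is made up):

```shell
# jq slots into a pipeline like any other filter: JSON in, values out.
echo '{"name": "web", "replicas": 3}' | jq -r '.name'       # prints: web
echo '{"name": "web", "replicas": 3}' | jq '.replicas + 1'  # prints: 4
```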
The kinds of scripts I write would be awkward to have as compiled JVM programs, I think. Shell is just way more ergonomic for me for many tasks.
Data can be meaningfully separated from control and structure in many cases, and failure to do that is a major (perhaps the major) source of security bugs.
You could then also criticize for example HTTP or even TCP for making you turn everything into "bytes".
Shell doesn't enforce any particular interpretation of data. Pipelines simply connect one program's output to another's input. Interpretation is up to the programs.
>You could then also criticize for example HTTP or even TCP for making you turn everything into "bytes".
If these were the only standard protocols that existed, and people were trying to tell me this was great because it's easy to compose different network applications, that criticism would be completely valid.
>Interpretation is up to the programs.
But because there are no standards beyond "stream of bytes", the chance that two independently written programs working with non-stream-like data can communicate directly is extremely low.
Lots of programs can communicate with JSON, XML, standard formats like that. If some legacy program outputs a non-standardized kind of output, that's a problem to be solved, not an inherent failure of shell scripting. There is no overarching successful solution to the problem of different programs using different data representations, but I don't blame shell for that; I work happily with shell scripts, as do many, many others. The same problem shows up the minute you want to use a Ruby module from Python, and rewriting everything in every language is not an economically viable solution.
> Lots of programs can communicate with JSON, XML, standard formats like that. If some legacy program outputs a non-standardized kind of output, that's a problem to be solved, not an inherent failure of shell scripting.
But the shell language itself is one of these legacy, non-standardized formats: arcane escaping rules, multiple incompatible implementations, surprising ways things get interpreted as code (e.g. the recent bash CGI bug), and so on.
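A small sketch of one of those escaping surprises: unquoted variable expansion undergoes word splitting, so the same variable can become one argument or two depending on quoting alone.

```shell
f='my file.txt'
# Unquoted: the shell splits on whitespace, yielding two arguments.
printf '<%s>\n' $f      # prints: <my> then <file.txt>
# Quoted: one argument, as code in most other languages would assume.
printf '<%s>\n' "$f"    # prints: <my file.txt>
```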
I'm curious, because these simple things that are delightfully easy in bash often turn out to be surprisingly tedious in other languages.
Of course, some things are tedious in bash too. But a basic principle of shell scripting is that you call other programs to do the stuff you don't want to do in shell.
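That principle in miniature (a toy example): the shell sets up the plumbing and delegates the arithmetic to awk rather than doing it in shell.

```shell
# Sum a column of numbers: shell does the piping, awk does the math.
printf '1\n2\n3\n' | awk '{ s += $1 } END { print s }'   # prints: 6
```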
I agree it is tedious, but to be honest, reading and writing to stdin/out isn't something that would commonly need to be done in a robust system. If the world were perfect you would use library functions.
I definitely think there is scope for a language that works well as an interactive shell and as a general-purpose language. They have somewhat conflicting constraints, but I'm sure we can do better than Bash. Have you seen how [ is implemented?
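For anyone who hasn't looked: `[` really is a command name, not syntax. In bash it's a builtin, and most systems also ship it as an actual binary (often `/usr/bin/[`); the closing `]` is just a required final argument.

```shell
type [                     # in bash, reports: "[ is a shell builtin"
# [ takes arguments like any other command and exits 0 or 1:
[ 1 -lt 2 ] && echo yes    # prints: yes
```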
Now you have to clean up 'a'. And decide whether /tmp or /var/tmp or a dir on some other filesystem has enough space to hold all of 'a' until 'bar' is finished. Is it a security problem that other processes could snoop the contents of 'a' or even tamper with it?
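One hedged way to contain those hazards when a temp file really is unavoidable: mktemp creates the file with mode 0600 so other processes can't snoop it, and an EXIT trap removes it even if the script dies early. Here `seq` and `sort` stand in for the programs in the example above.

```shell
# mktemp creates a private (mode 0600) file and prints its path;
# the EXIT trap cleans it up even on early exit.
a=$(mktemp) || exit 1
trap 'rm -f "$a"' EXIT
seq 3 > "$a"        # stand-in for the first program writing 'a'
sort -rn < "$a"     # stand-in for 'bar' consuming it
# (When the data needn't land on disk, `first | bar` avoids
# the temp file, the cleanup, and the snooping question entirely.)
```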