Hacker News

I think I will be fine, thanks. I'll stick to my shell scripts, so far they've outlived any other devops fad.


I worry when I can't tell if a comment like this is based on fact or just trying to be funny.

Because I've seen my share of nasty "legacy" automation, but, surprisingly, I still think a good set of well-thought-out shell scripts written by someone who understands what's being automated still beats modern tools, even when the person doing the automation is the same.

I don't quite know why this is, but there's something timeless about shell scripts. I've also seen shell script automation survive for a long time unattended and with zero issues. Not so with some of the modern tools that are supposed to be all unicorns and rainbows.


My take is that a shell script doesn't have any unspoken assumptions or magic performed behind the scenes by the tool.

It all has to be in the script, built up strictly from well-understood and long-stable basic bricks (and in the few places where it isn't, it's even worse with devops tools).

Any issue, any question can be answered by reading the damn shell script and you're never dependent on a cookbook/recipe/playbook/component that you got off of some github repo that you need 5% of to do X.


An un-researched opinion from someone who replaced chef/ansible/puppet with his own shell-script config-management system:

I don't have to rewrite my shell-scripts every 6 months when a new version comes out. New updates usually only happen when security issues arise.

Shell-scripts tend to be simple. There's not a lot of magic hand-holding going on, which means not a lot of complexity to break things.

It keeps you from getting too abstract. You're writing pretty close and specific to what you want it to do, not "how it should be".

They are typically standalone. It's really easy to have 1 script that solves one problem, and another script that solves another. You don't need a giant code-infrastructure to keep things going.

I think config-mgmt tools can be extremely useful if you're running a widely-ranged environment. But, you probably shouldn't be running a widely-ranged environment. If you keep things simple, and run as homogeneous as possible, you probably don't need all the added complexity.
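To make the "simple, standalone, no magic" point concrete, here's a minimal sketch of the idempotent style these homegrown scripts tend to use. The file path and setting are hypothetical, purely for illustration:

    #!/bin/sh
    # Minimal sketch of one idempotent "config management" step.
    # The path and setting are made up for the example.
    set -eu

    conf=/tmp/demo-app.conf
    want="listen_port=8080"

    # Only rewrite the file when it's missing or differs from the
    # desired state, so re-running the script converges to the same
    # result instead of churning.
    if [ ! -f "$conf" ] || [ "$(cat "$conf")" != "$want" ]; then
        printf '%s\n' "$want" > "$conf"
        echo "updated $conf"
    else
        echo "no change"
    fi

Run it twice: the first run reports "updated", the second "no change" -- the same convergence property config-mgmt tools advertise, in a dozen readable lines.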


> I think config-mgmt tools can be extremely useful if you're running a widely-ranged environment. But, you probably shouldn't be running a widely-ranged environment. If you keep things simple, and run as homogeneous as possible, you probably don't need all the added complexity.

This applies to small environments. If the environment is large the situation almost reverses.

Deploying automation throughout a large homogeneous environment is where config-management tools really shine. They make it easy to ensure homogeneity is maintained (even if that just means ensuring all machines have the same set of shell scripts) and allow grouping for staggered updates.

If the environment is widely-ranged and large, the utopia starts to break down. Their configuration explodes in complexity and (if you're not careful) you end up with mostly the same amount of work as if they were managed as small independent environments. With the added risk that there is now a single place from where you can break everything at once.

And this happens... usually from wrong assumptions about what's common between all machines in the environment. In homogeneous environments almost everything is common, but in widely-ranged environments you sometimes add some configuration that wasn't there before, assume it applies to the whole set, and all hell breaks loose. If you're lucky, this happens suddenly; if you're not, breakage spreads slowly and you'll spend quite a lot of time scratching your head over why.


Well, it depends ;)

I don't think large/small is a good deciding factor. You can be large and homogeneous, or small and diverse. I think similar/dissimilar is a better decider for config-mgmt vs shell-scripts.

I'd argue that config-mgmt usually does a better job if your setup is large and complex. No need to write a script that checks if it needs to install a .deb, .rpm, or whatever, if your config-mgmt tools have already done that work.

Also, if you build your shell-scripts right, they can ensure that your system is kept the same.


Why not use Python instead? Shell script is so... chaotic.


I pine for the days of yore when the Unix Philosophy was strong and pure, and every program did one thing well, and only one.

Like the way the shell would fork off an "expr" sub-process to parse a mathematical expression to add two numbers, then write the result to a pipe via stdout, then terminate the process, clean up all its resources, and switch context back to the shell, which then read the serialized sum back in from the other end of the pipe, and went about its business, regardless of the fact that the CPU running the shell already had its own built-in "add" instruction in hardware.
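The joke is historically literal, by the way: the original Bourne shell really did fork `expr(1)` for arithmetic, while any modern POSIX shell does it in-process with arithmetic expansion. Both styles side by side:

    # Bourne-era style: fork an external expr(1) process to add 2 and 3
    sum_old=$(expr 2 + 3)

    # Modern POSIX style: arithmetic expansion, evaluated inside the shell
    sum_new=$((2 + 3))

    echo "$sum_old $sum_new"   # prints: 5 5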


> I pine for the days of yore when the Unix Philosophy was strong and pure, and every program did one thing well, and only one.

Unless you are over fifty years old, you never experienced this.

Rob Pike said it best: "Those days are dead and gone and the eulogy was delivered by Perl."

Perl being a thing in 1995...


Now "expr" has finally been rewritten as a jQuery plug-in.

[1] https://github.com/baphomet-berlin/jQuery-basic-arithmetic-p...


This is the first coherent refutation of the "do one thing well" ethos I have ever read. Thanks for putting into words what I haven't been able to express myself.


Shell/batch scripting can often be useful in the devops world, where you have no guarantee that any additional tools (python, ruby, perl, powershell, whatever) will be available.

Shell scripts are guaranteed to be runnable on all machines.

Unfortunately the shell "language" sucks, but still...


So you're running 'sh' scripts, right? None of that new-fangled bash stuff...

Oh, and be very careful of the commands your script invokes!

Shell scripts have no guarantee of portability (often less than Python, which has a rich standard library available on all platforms).
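To illustrate the portability point, here are two common bashisms next to their POSIX-portable equivalents -- the bash forms will fail under a strict `sh` like dash:

    #!/bin/sh
    # Two common bashisms and their POSIX-portable equivalents.

    x=foo

    # bash only:  [[ $x == foo ]]
    # POSIX sh:
    [ "$x" = foo ] && echo "match"   # prints: match

    # bash only:  arr=(a b c); echo "${#arr[@]}"
    # POSIX sh:   positional parameters are the only "array" you get
    set -- a b c
    echo "$#"                        # prints: 3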


Shell languages are great at doing interactive programming. Few non-shell languages can match the convenience, flexibility, and expressiveness in that domain. Of course, in any other domain (scripts) they are awful.


> Unfortunately the shell "language" sucks, but still...

Why do you say that? Genuinely curious.


Python + Fabric is a lovely devops/sysadmin toolbox.


Still no Python 3 support though


How do you write "foo | bar" in Python?


    from subprocess import Popen, PIPE
    p1 = Popen(["foo"], stdout=PIPE)
    p2 = Popen(["bar"], stdin=p1.stdout, stdout=PIPE)
    p1.stdout.close()  # Allow p1 to receive a SIGPIPE if p2 exits.
    output, _ = p2.communicate()
https://docs.python.org/3.5/library/subprocess.html#replacin...


Thanks. Maybe I'll make a public Gist that's a kind of "foo | bar" cookbook for different languages...


What you actually do in Python is use https://github.com/kennethreitz/envoy


That seems to recommend just calling the shell to do pipelines? Makes sense... Or maybe it parses that syntax itself?


I believe it parses it itself.


Looks like it's not maintained.


bar(foo)

When you're not writing shell, just use the tools the language gives you.

For that matter, I think a shell script is cleaner than a Python script for devops; but I don't think the composability of Unix tools is that much of an advantage compared to the amount of Python libraries out there.


When I use shell it's often exactly because I want to construct pipelines of processes and FIFOs and do all the other things that shell does very well and has done well for decades.

I'm likely to be using Python programs and other programs in those shell scripts. The beauty of shell is that it makes it so easy to compose programs written in different languages.


Shell does things well provided all the intermediate states are naturally expressible as streams of bytes. Otherwise not so much.

I think the advantages of using a single language for everything outweigh the disadvantages - see e.g. http://www.teamten.com/lawrence/writings/java-for-everything... (though actually my single language is Scala)


Everything in a computer is a stream of bytes... My shell scripts often use tools like jq and jshon to deal with JSON structures, etc. File hierarchies can also be very pleasant data structures.

The kinds of scripts I write would be awkward to have as compiled JVM programs, I think. Shell is just way more ergonomic for me for many tasks.
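A small example of the jq workflow mentioned above, assuming jq is installed (the hostname and port in the JSON are made up): the bytes on the pipe happen to be JSON, and jq is the program that interprets them as structure.

    # Structured data over a plain byte pipe, via jq:
    printf '{"host": "web-1", "port": 8080}' | jq -r .port   # prints: 8080

So "everything is a stream of bytes" doesn't mean everything is unstructured; it means the structure lives in the programs at each end of the pipe.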


> Everything in a computer is a stream of bytes

Data can be meaningfully separated from control and structure in many cases, and failure to do that is a major (perhaps the major) source of security bugs.


Everything in a computer can be interpreted as a stream of bytes. For most things an object is a better interpretation.


You could then also criticize for example HTTP or even TCP for making you turn everything into "bytes".

Shell doesn't enforce any particular interpretation of data. Pipelines simply connect one program's output to another's input. Interpretation is up to the programs.


>You could then also criticize for example HTTP or even TCP for making you turn everything into "bytes".

If these were the only standard protocols that existed, and people were trying to tell me this was great because it's easy to compose different network applications, that criticism would be completely valid.

>Interpretation is up to the programs.

But because there are no standards beyond "stream of bytes", the chance that two independently written programs working with non-stream-like data can communicate directly is extremely low.


Lots of programs can communicate with JSON, XML, standard formats like that. If some legacy program outputs a non-standardized kind of output, that's a problem to be solved, not an inherent failure of shell scripting. There is no overarching successful solution to the problem of different programs using different data representations, but I don't blame this on shell; I work happily with shell scripts as do many many others. The same problem shows up the minute you want to use a Ruby module from Python, and rewriting everything in every language is not an economically viable solution.


> Lots of programs can communicate with JSON, XML, standard formats like that. If some legacy program outputs a non-standardized kind of output, that's a problem to be solved, not an inherent failure of shell scripting.

But the shell language itself is one of these legacy non-standardized formats. Arcane escaping rules, multiple incompatible implementations, surprising ways things get interpreted as code (e.g. the recent bash CGI bug),...
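One of those arcane escaping rules in action: an unquoted expansion is word-split (and glob-expanded) before the command ever sees it, which is a perennial source of exactly the surprises described above.

    #!/bin/sh
    # Unquoted vs. quoted expansion of a filename containing a space.
    dir=$(mktemp -d)
    cd "$dir"
    touch "my file.txt"

    f="my file.txt"

    # Unquoted: $f becomes two arguments, "my" and "file.txt",
    # neither of which exists as a file here.
    ls $f >/dev/null 2>&1 || echo "unquoted: not found"

    # Quoted: a single argument, the actual filename.
    ls "$f" >/dev/null 2>&1 && echo "quoted: found"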


How often do you actually need to do that? 99% of `foo | bar` commands could easily be `foo > a && bar < a`, which is pretty trivial to do in Python.


Well, how do you write that in Python?

I'm curious because these simple things that are delightfully easy in bash often turn out to be surprisingly tedious in other languages.

Of course, some things are tedious in bash too. But a basic principle of shell scripting is that you call other programs to do the stuff you don't want to do in shell.


Like this: http://stackoverflow.com/a/1996540

I agree it is tedious, but to be honest, reading and writing to stdin/out isn't something that would commonly need to be done in a robust system. If the world were perfect you would use library functions.

I definitely think there is scope for a language that works well as an interactive shell, and as a general purpose language. They have somewhat conflicting constraints but I'm sure we can do better than Bash. Have you seen how [ is implemented?


No stdout or stdin for robust systems? I disagree.

Yeah, a nicer shell-like language would be cool. I've been thinking about it for a while.

Bash is quirky but it gets a lot of stuff right and once you understand it it can be extremely ergonomic and productive.

And not depending on language run times other than shell can be really glorious in some situations, too...


Now you have to clean up 'a'. And decide whether /tmp or /var/tmp or a dir on some other filesystem has enough space to hold all of 'a' until 'bar' is finished. Is it a security problem that other processes could snoop the contents of 'a' or even tamper with it?
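For what it's worth, the temp-file variant can be done carefully in shell itself; this sketch (with `printf`/`sort` as stand-ins for `foo` and `bar`) addresses the cleanup and snooping concerns, since `mktemp` creates the file with mode 0600 at an unpredictable name and the trap removes it even on interruption. It doesn't answer the disk-space question, which is inherent to materializing the stream.

    #!/bin/sh
    set -eu

    # Private (0600), unpredictably named temp file; removed on any exit.
    tmp=$(mktemp) || exit 1
    trap 'rm -f "$tmp"' EXIT INT TERM

    # Stand-ins for `foo > "$tmp" && bar < "$tmp"`:
    printf 'b\na\n' > "$tmp"
    sort < "$tmp"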


Try python "sh"

https://amoffat.github.io/sh

It's pretty elegant.

    import sh
    sh.bar(sh.foo())


'Cos they run without needing Python.


It's much easier to not need shell scripts, than to not need Python scripts.


Pffft, shell scripts. In Lisp I can emulate shell scripts and all of the technologies mentioned in the article with three to five macros.



