
I'm always a bit shocked at how casually people wget and execute shell scripts as part of their install process.

This is the equivalent of giving an author of a website remote code execution (RCE) on your computer.

I get the idea that you can download the script first and carefully read it, but I think that 99% of people won't.





I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

Between you and me, there are a bunch of other hops. Blindly trusting dependencies is one part of why npm is burning down at the moment.

Why trust unsigned files hosted on a single source of truth? It isn't the 90s anymore.


What’s your alternative?

A mirrored package manager, where signature and executable are always grabbed from different sources.

Like apt, dnf, and others.


Pretty sure my apt sources have the signing key and the packages pointing to the same place.

If you have more than a single source, then apt will already be checking this for you.

The default is more than a single source.


All of mine point to like somethingsomething.ubuntu.com

If it points to mirror.ubuntu.com, the mirroring happens at the host end rather than inside apt. But since apt does resolve that to a list, it'll be fetching from multiple places at once.
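For reference, apt also ships an explicit mirror transport (apt-transport-mirror) that takes a list of mirrors and falls back between them; a sketch, with hypothetical file contents:

  # /etc/apt/sources.list entry using the mirror method
  deb mirror+file:/etc/apt/mirrors.txt bookworm main

  # /etc/apt/mirrors.txt, one mirror URI per line
  https://deb.debian.org/debian/
  https://mirror.example.org/debian/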

> I’m always a bit shocked how seriously people take concerns over the install script for a binary executable they’re already intending to trust.

The issue is provenance. Where is the script getting the binary from? Who built that binary? How do we know that binary wasn't tampered with? I'll lay odds the install script isn't doing any kind of GPG/PGP signature check. It's probably not even doing a checksum check.

I'm prepared to trust an executable built by certain organisations and persons, provided I can trace a chain of trust from what I get back to them.
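To make that concrete, here's roughly what a careful fetch-and-verify looks like done by hand; the URLs, filenames, and key are placeholders, since few projects actually publish all of these:

  curl --silent --show-error --location --fail --remote-name https://example.com/tool.tar.gz
  curl --silent --show-error --location --fail --remote-name https://example.com/tool.tar.gz.sha256
  curl --silent --show-error --location --fail --remote-name https://example.com/tool.tar.gz.asc

  sha256sum --check tool.tar.gz.sha256        # digest matches what was published
  gpg --verify tool.tar.gz.asc tool.tar.gz    # signature chains back to a key I trust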


The thing that gets installed, if it is an executable, usually also has permissions to do scary things. Why is the installation process so scrutinized?

I think there's a fundamental psychological reason for this - people want to feel like some ritual has been performed that makes at least some level of superficial sense, after which they don't have to worry.

You see this in all the obvious examples of physical security.

In the case of software, it's the installation that's the ritual, I guess. Complete trust must be conferred on the software itself by definition, so people just feel better knowing for near certain that the software installed is indeed 'the software itself'.


It would raise the same kind of alert for me if someone used wget to download a binary executable instead of a shell script.

The issue is not the specific form in which code is executed on your machine, but rather whom you allow to run code on your computer.

I don't trust arbitrary websites from the Internet, especially when they are not cryptographically protected against malicious tampering.

However, I do trust, for instance, the Debian maintainers, as I believe they have thoroughly vetted and tested the executables they distribute, cryptographically signed, to millions of users worldwide.


Even assuming it’s not malicious, the script can mess up your environment configuration.

So can a random deb, or npm package, or pip wheel. You're either OK with executing unverified code or you're not; piping wget into bash doesn't change that.

Maybe they can with postinstall scripts, but they usually don't.

For the most part, installing packaged software simply extracts an archive to the filesystem, and you can uninstall using the standard method (apt remove, uv tool remove, ...).

Scripts are way less standardized. In this case it's not an argument about security, but about convenience and not messing up your system.
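If you do want to see exactly what a deb would do before installing it, you can unpack it without running anything; the package name here is just an example:

  dpkg-deb --extract some-package.deb ./contents   # the files it would lay down
  dpkg-deb --control some-package.deb ./control    # maintainer scripts land here
  cat ./control/postinst                           # the bit that actually runs code, if present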


I'm so thankful for NixOS for making it hard for me to give in to that temptation. You always think "oh, just this once", but with NixOS I either have to do it right or not bother.

NixOS gives you a place to configure things in a reproducible way, but it doesn't require you to do it.

It sort of does, actually, at least if you don't have nix-ld enabled. A lot of programs simply won't start if they're not statically linked, so much of the time, if you download a third-party script or try to install something via `curl somesite.blah | sh`, it actually will not work. Moreover, it's also likely that it won't be properly linked into your PATH unless you do it the right way.

  $ ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
  Could not start dynamically linked executable: ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp
  NixOS cannot run dynamically linked executables intended for generic linux environments out of the box.
  For more information, see: https://nix.dev/permalink/stub-ld

You have to go out of your way to make something like that run in an FHS env. By that point, you've had enough time to think, even with ADHD.
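(If you genuinely need to run a generic Linux binary, one common escape hatch is an FHS-style wrapper such as steam-run from nixpkgs; reusing the path from the example above:

  nix-shell -p steam-run --run 'steam-run ./Downloads/tmp/xpack-riscv-none-elf-gcc-15.2.0-1/bin/riscv-none-elf-cpp --version'

and by then you've definitely had time to think.)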


Equally, I don't like how many instructions and scripts everywhere use shorthands.

Sometimes you see curl -sSLfO. Please use the long form. It makes life easier for everybody: it's easier to verify and to look up. Finding --silent in curl's docs is easier than reading through every occurrence of -s.

   curl --silent --show-error --location --fail --remote-name https://example.com/script.sh
Obligatory xkcd: https://xkcd.com/1168/

> Finding --silent in curl's docs is easier than reading through every occurrence of -s.

Dumb trick: Search prefixed with 2 spaces.

  man curl
  /  -s
Yields exactly one hit on my machine. In the general case, you may have to try one and two spaces.

For a small flight of fancy, imagine if each program had a --for-docs argument that caused it to simply spit out the canonical long-form equivalent of whatever else it had been called with.

Or, a separate program that can convert from short to long form:

  $ for-docs "ls -lrth /mnt/data"
  ls -l --reverse -t --human-readable -- /mnt/data

(I'd add an option to sort the options alphabetically, too.)
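As a toy sketch of the idea (everything here is hypothetical, and a real tool would need each command's full option table), expanding a few ls flags might look like:

  #!/usr/bin/env bash
  # Toy "for-docs": expands bundled short options via a hardcoded table.
  # Only knows a few ls flags; a real tool needs per-command option data.
  declare -A LONG=([r]=--reverse [h]=--human-readable [a]=--all)

  expand() {
    local out=("$1") arg c i
    shift
    for arg in "$@"; do
      if [[ $arg == -[!-]* ]]; then        # a short-option bundle like -lrth
        for ((i = 1; i < ${#arg}; i++)); do
          c=${arg:i:1}
          out+=("${LONG[$c]:--$c}")        # no long form known: keep -l, -t as-is
        done
      else
        out+=("$arg")
      fi
    done
    printf '%s\n' "${out[*]}"
  }

  expand ls -lrth /mnt/data   # prints: ls -l --reverse -t --human-readable /mnt/data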


While I'd appreciate that facility too, it seems... even-more-fanciful, as one tool would need to somehow incorporate all the logic and quirks of all supported commands, including ones which could be very destructive if anything went wrong.

Kind of like positing a master `dry-run` command as opposed to different commands implementing `--dry-run` arguments.


Aren't there tools for which the short flags are standardized (e.g. POSIX) but the long flags aren't?

agreed. i get it if you're great at cli usage or have your own scripts, but if you're publishing for general use, it should be long form. that includes even utility scripts for a small team.

also, writing it out long-form, you might catch some things you do out of habit rather than what's necessary for the job.


Another possible advantage is that I invariably have to check the man page to find the appropriate long-form option and sometimes spot an option that I didn't know about.

Absolutely agree.

The shorthands are for typing at a console; the long-form versions should be used in scripts.


If you don't trust the software, don't install it.

Trusting software would be foolish. Most software has access to the file system and the net. For practical reasons, I have neither the energy nor the time to verify whether the next update of libsecure came with a trojan or stole my env, and neither do you. I just acknowledge this fact, take the risk, and install it.


