Why is it a good model to allow some software engineer "at a distance" to dictate that some downstream user must drop every other priority and upgrade?
I agree that attitudes towards security are generally very poor, but breaking working infrastructure sounds like a crazy practice. As with any sensible system, a good/robust design should allow staged upgrades / hot reloading for anything but a very tiny core of critical functionality. Erlang/BEAM is a great example; it just requires software engineering to adopt a different mindset.
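Just to make that mindset concrete, here's a rough sketch in plain Python (Erlang/BEAM makes this first-class; importlib.reload is a crude stand-in, and the "handler" module with its process() function is hypothetical):

    import importlib
    import handler  # hypothetical module exposing process(request)

    def serve_forever(requests):
        # The loop itself is the "tiny core" that never gets swapped out.
        for req in requests:
            if req == "RELOAD":
                # Pull in new handler code without restarting the process.
                importlib.reload(handler)
                continue
            handler.process(req)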
> Why is it a good model to allow some software engineer "at a distance" to dictate that some downstream user must drop every other priority and upgrade?
It's not a good model, and that's why it's only forced in commercial software or in particularly obnoxious projects, like earlier versions of Ubuntu Snap. Everywhere else it's the user's choice: package managers have lock files; automated updates can be disabled; docker images can be referenced by SHA; etc.
That's not to say that infrastructure doesn't break - there are plenty of horrible setups out there... but if you discover you "must drop every other priority and upgrade", then maybe spend some time making your infra more stable? Commit that lockfile (or start saving dev docker containers, if you can't), stop auto-deploying the latest changes, and make sure you keep previous builds around, instead of blaming the software ecosystem and upstream authors.
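The "reference by SHA / keep previous builds around" idea applies to any artifact, not just docker images: record the digest of the build you actually validated and refuse to run anything else. A minimal sketch in Python; the file name and digest below are made up for illustration:

    import hashlib

    def sha256_of(path):
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    # Digest of the build you validated (placeholder value, not a real hash).
    PINNED = "3a7bd3e2...example digest..."

    # Refuse to deploy anything that doesn't match it.
    if sha256_of("builds/app-2024-05-01.tar.gz") != PINNED:
        raise SystemExit("artifact does not match pinned digest; refusing to deploy")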
> I agree that attitudes towards security are generally very poor, but breaking working infrastructure sounds like a crazy practice.
Yes, breaking infrastructure is bad. But letting already broken infrastructure continue can be worse.
The point is that we want a better way to detect when breaking changes happen, so that security fixes can be applied without breaking anything while optional upgrades for other features stay on our own schedule. There doesn't seem to be a great solution yet: it's either "it never breaks, but you're possibly vulnerable to security issues that can't be easily patched", or "things can break at any time due to updates, so we have to manually verify this doesn't happen".
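The closest thing to a middle ground today is the compatible-release pin: patch-level (usually security) updates flow in automatically, minor/major bumps stay a deliberate choice. A small illustration using the real Python "packaging" library (the version numbers are made up) - it narrows the window, but it still depends on upstream actually following semver:

    from packaging.specifiers import SpecifierSet
    from packaging.version import Version

    allowed = SpecifierSet("~=1.4.2")   # compatible release: >=1.4.2, ==1.4.*

    print(Version("1.4.9") in allowed)  # True  - patch/security fix, accept it
    print(Version("1.5.0") in allowed)  # False - potentially breaking, opt in manually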
Yeah, though 'jcelerier brought up a case where "insecure" behavior is a feature, and the more "secure" design is directly incompatible with it. These cases of breakage can't easily be solved through better coding, and they're not random mistakes - there's a fundamental incompatibility that needs to be resolved.
> Yeah, though 'jcelerier brought up a case where "insecure" behavior is a feature, and the more "secure" design is directly incompatible with it
I don't think it's fundamentally incompatible with a secure design, though; you just need to reify the authority to do those things so you can explicitly grant it to specific programs as appropriate.
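Roughly what "reifying the authority" looks like, as a capability-style sketch (the class and program names are made up, not any real API): holding a reference to the capability object is the permission, so one program is explicitly granted keystroke access and another simply can't reach it.

    class KeystrokeSource:
        """Capability object: holding a reference to this IS the permission."""
        def read_key(self):
            return "x"  # a real version would hook the input device

    class GlobalHotkeyApp:
        # Declares up front that it needs keystroke access.
        def __init__(self, keys):
            self._keys = keys
        def run_once(self):
            print("saw key:", self._keys.read_key())

    class Calculator:
        # Never handed the capability, so it cannot capture keystrokes -
        # no runtime permission check needed.
        def run_once(self):
            print("2 + 2 =", 2 + 2)

    keys = KeystrokeSource()          # created by the trusted launcher
    GlobalHotkeyApp(keys).run_once()  # explicitly granted
    Calculator().run_once()           # not granted, but not neutered either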
That seems a little melodramatic, particularly since your only other options are that every untrusted program can access every authority, like capturing every keystroke, or that all programs are effectively neutered.
"World ending" is not the only valid security metric. Lots of viruses, worms and security vulnerabilities have. This is increasingly untenable as more of people's lives are in data.