
Regarding priorities, a manager from my past used a system that I liked: each person had a set of tickets assigned to them, but there was always an unambiguous priority order. At any given time, a worker would work on the highest-priority ticket of theirs that could currently make progress. If anyone wanted to shift that worker to another ticket, the priorities needed to be adjusted, and the worker would be notified (if they didn't adjust them themselves). This made it clear what people were expected to be working on, and made it easy to see what people were currently working on.
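
A minimal Java sketch of that selection rule (all names hypothetical):

    import java.util.Comparator;
    import java.util.List;
    import java.util.Optional;

    // A ticket with a priority (lower number = higher priority) and a flag
    // for whether it is currently blocked (waiting on someone else, etc.).
    record Ticket(String id, int priority, boolean blocked) {}

    class NextTicket {
        // Pick the highest-priority ticket that can make progress right now.
        static Optional<Ticket> pick(List<Ticket> assigned) {
            return assigned.stream()
                    .filter(t -> !t.blocked())
                    .min(Comparator.comparingInt(Ticket::priority));
        }
    }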

I am not an expert in SAML, but my understanding is that the cert is typically included in the SP metadata. It seems to me that including the SP cert in the AuthnRequest would defeat the purpose of signing the request. Is that supported in the standard?
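
For reference, this is roughly where the signing cert lives in SP metadata (entity ID and values made up for illustration):

    <md:EntityDescriptor xmlns:md="urn:oasis:names:tc:SAML:2.0:metadata"
                         entityID="https://sp.example.com/metadata">
      <md:SPSSODescriptor AuthnRequestsSigned="true"
                          protocolSupportEnumeration="urn:oasis:names:tc:SAML:2.0:protocol">
        <md:KeyDescriptor use="signing">
          <ds:KeyInfo xmlns:ds="http://www.w3.org/2000/09/xmldsig#">
            <ds:X509Data>
              <!-- base64-encoded certificate goes here -->
              <ds:X509Certificate>MIIC...</ds:X509Certificate>
            </ds:X509Data>
          </ds:KeyInfo>
        </md:KeyDescriptor>
      </md:SPSSODescriptor>
    </md:EntityDescriptor>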

For the house, hire a professional inspector who will use an X-ray fluorescence meter and dust test strips. You want a professional because they will be more thorough and will check things that you would not know to check.

To test your children's exposure, you can have their blood tested. They may very well be exposed from sources other than your house.


This appears similar to Java's String Templates preview feature, which was withdrawn:

https://openjdk.org/jeps/465

This is probably the best overview of why it was withdrawn:

https://mail.openjdk.org/pipermail/amber-spec-experts/2024-A...


I used to use uMatrix and would often disable CSS, but enable necessary JS. This allowed most sites to work properly while displaying them in a plain HTML look that I prefer. I think what you are describing is aimed at a similar end result, but would require less faff.


Basically, yes. Also, disabling some CSS breaks sites whose usability depends on layout. This tries to preserve layout.


I've had the misfortune of working on "enterprise" software where avoidance of "hard-coding" went so far that the core system did almost nothing other than exist as a way to plug in mutable functionality. All in the name of not having to rebuild or redeploy, and of being able to tell the customer that no code changes would be required - even though the "configuration" was practically code, and required many of the things that code requires: qualified configuration developers, test suites, and detailed deployment plans. Truly madness. In the era of software as a service via the internet, software is more "soft" than it has ever been. You can change it and deploy it to production any time you please.


This is a typical use case for views and stored procedures in RDBMSs. They can be used to provide a stable API for a given client even as the underlying tables change. To your point, however, these still do not solve the problem.
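
A minimal sketch of the idea (hypothetical schema, assuming an in-memory H2 database on the classpath): the view presents a stable contract while the underlying table is free to change.

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    class StableViewSketch {
        public static void main(String[] args) throws Exception {
            try (Connection c = DriverManager.getConnection("jdbc:h2:mem:demo");
                 Statement s = c.createStatement()) {
                // The underlying table has been renamed and restructured...
                s.execute("CREATE TABLE users_v2 (" +
                          "given_name VARCHAR(100), family_name VARCHAR(100))");
                // ...but clients still see the original column names via the view.
                s.execute("CREATE VIEW users AS " +
                          "SELECT given_name AS first_name, family_name AS last_name " +
                          "FROM users_v2");
            }
        }
    }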


Alternatively, we could use a model in which functions are immutable, and therefore make such breaking changes impossible. This is the approach taken by Unison:

https://www.unison-lang.org/docs/the-big-idea/


I've read about Unison a lot, and while the "remote execution" idea sounds extremely cool, all of the other properties seem very dubious. For example, re "no breaking changes":

- Imagine that in my project, I have a "FOO" function, which is called by many others

- I decide to change the type of one parameter of the FOO function. This would be a breaking change in a regular language, but in Unison nothing breaks - I push the new definition, and every caller keeps using the old version.

- New callers come along, and they use the new version. So far so good.

- Some time later, I discover a critical business-logic bug in the FOO function! So I fix it, and I have to update all the callers to use the latest version... except for half of them I cannot, because the parameter types do not match. It seems I cannot ship the fix to the customer until I spend a bunch of time rewriting the existing code to accommodate the argument type change.

As long as there are functions, there will always be some kinds of changes to them that require fixing up the callers. How this is enforced varies: in strictly typed languages, code may fail to compile; in dynamic languages, you may see runtime failures; and in Unison, things will work until you try to edit the caller, at which point it will fail to compile (the Unison docs call that operation, converting from text to the internal representation, a "typecheck").
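
A minimal Java sketch of the strictly-typed version of this scenario (names hypothetical): the parameter-type change surfaces at every call site at compile time, rather than later when the caller happens to be edited.

    class Before {
        static int foo(int x) { return x * 2; }    // original signature
        static int caller() { return foo(21); }    // compiles fine
    }

    class After {
        static int foo(String x) { return x.length(); } // parameter type changed
        // static int caller() { return foo(21); }
        // ^ no longer compiles: incompatible types: int cannot be converted to String
    }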

I am not convinced that "postpone failure until you edit the caller" is the best approach here. When I refactor, I normally want any problems to surface right away, while I still have the context for the change.


This also has issues with security fixes, unfortunately.


How so? You can always deprecate the old functions and encourage people to use the new ones; it just doesn't automatically force everyone to do so immediately by breaking their compilation.
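
In Java terms, a minimal sketch of that non-breaking path (names hypothetical): the old function stays callable, so existing code keeps compiling, while new code is nudged toward the replacement.

    class Api {
        /** @deprecated use {@link #fooV2(String)} instead */
        @Deprecated
        static int foo(int x) { return x * 2; }   // old callers keep working

        static int fooV2(String x) { return x.length(); } // encouraged replacement
    }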


But that's the crux of the issue: some people think that if software is insecure, it is better for it to be actually unusably broken. See, for instance, how many valid use cases for software that "spies" on global key input are broken by the X11 -> Wayland transition for the sake of better security: things like AutoHotkey, global recording in apps like OBS Studio, global key displays, Yakuake-style terminals...


Why is it a good model to allow some software engineer "at a distance" to dictate that some downstream user must drop every other priority and upgrade?

I agree that attitudes towards security are generally very poor, but breaking working infrastructure sounds like a crazy practice. A good, robust design, like any sensible system, should allow staged upgrades and hot reloading for anything but a very tiny core of critical functionality. Erlang/BEAM is a great example; it just requires software engineering to adopt a different mindset.


> Why is it a good model to allow some software engineer "at a distance" to dictate that some downstream user must drop every other priority and upgrade?

It's not a good model, and that's why this is only forced in commercial software or in particularly obnoxious projects, like earlier versions of Ubuntu Snap. Every other case is the user's choice - package managers have lock files; automated updates can be disabled; Docker images can be referenced by SHA; etc.

That's not to say that infrastructure does not break - there are plenty of horrible setups out there... but if you discover you "must drop every other priority and upgrade", then maybe spend some time making your infra more stable: commit that lockfile (or start saving dev Docker containers if you can't), stop auto-deploying the latest changes, and make sure you keep previous builds around, instead of blaming the software ecosystem and upstream authors.


> I agree that attitudes towards security are generally very poor, but breaking working infrastructure sounds like a crazy practice.

Yes, breaking infrastructure is bad. But letting already broken infrastructure continue can be worse.

The point is that we want a better way to detect when breaking changes happen, so that security fixes can be applied without breaking anything while still permitting optional upgrades on our own schedule for other features. There doesn't seem to be a great solution yet: it's either "it never breaks, but you're possibly vulnerable to security issues that can't be easily patched" or "things can break at any time due to updates, so we have to manually verify this doesn't happen".


Yeah, though 'jcelerier brought up a case where "insecure" behavior is a feature, and the more "secure" design is directly incompatible with it. These cases of breakage can't easily be solved through better coding, and are not random mistakes - there's a fundamental incompatibility that needs to be resolved.


> Yeah, though 'jcelerier brought up a case where "insecure" behavior is a feature, and the more "secure" design is directly incompatible with it

I don't think it's fundamentally incompatible with a secure design, though; you just need to reify the authority to do those things, so that you can explicitly grant it to specific programs as appropriate.
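
A minimal Java sketch of what "reifying the authority" could look like (all names hypothetical): global key capture is not ambient, so a program can only use it if it has explicitly been handed a capability object.

    // The capability to receive global keystrokes. Only the display server
    // (or OS) constructs and hands out instances of the authority below.
    interface KeyCapture {
        void onKeystroke(char key);
    }

    final class KeyCaptureAuthority {
        void register(KeyCapture listener) { /* deliver global keystrokes */ }
    }

    class ScreenRecorder {
        private final KeyCaptureAuthority keys; // null if the user denied it

        ScreenRecorder(KeyCaptureAuthority keys) { this.keys = keys; }

        void start() {
            if (keys != null) {
                keys.register(k -> { /* overlay the pressed key on the video */ });
            }
            // Recording proceeds either way; the key overlay is simply unavailable.
        }
    }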


this is how you end up with the living hell that are Android and iOS


That seems a little melodramatic, particularly since the only other options are that every untrusted program can access every authority, like capturing every keystroke, or that all programs are effectively neutered.


> every untrusted program can access every authority like capturing every keystroke,

The most-used desktop OS on the planet has allowed this since forever, and the world hasn't ended.


"World ending" is not the only valid security metric. Lots of viruses, worms and security vulnerabilities have. This is increasingly untenable as more of people's lives are in data.


I recommend reading "The world in which IPv6 was a good design" by apenwarr:

https://apenwarr.ca/log/20170810

It has been submitted many times here:

https://hn.algolia.com/?dateRange=all&page=0&prefix=true&que...

You will find at the end that it has received an update of sorts here:

https://tailscale.com/blog/two-internets-both-flakey


Thanks for this. Yes, the key mistake was not tackling mobility before the iPhone.


I was curious how the times were obtained. It uses https://github.com/nathan-osman/go-sunrise , which links to this calculation method: https://en.wikipedia.org/wiki/Sunrise_equation#Complete_calc...
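
The core of that equation is simple enough to sketch (my own illustration, not the library's code): the hour angle w0 at sunrise satisfies cos(w0) = -tan(phi) * tan(delta), where phi is the latitude and delta is the solar declination, and the day length is then 2 * w0 converted to hours.

    class SunriseSketch {
        // Day length in hours for a given latitude and solar declination,
        // ignoring refraction and the sun's angular radius.
        static double dayLengthHours(double latitudeDeg, double declinationDeg) {
            double phi = Math.toRadians(latitudeDeg);
            double delta = Math.toRadians(declinationDeg);
            double cosW0 = -Math.tan(phi) * Math.tan(delta);
            if (cosW0 >= 1.0) return 0.0;    // polar night: sun never rises
            if (cosW0 <= -1.0) return 24.0;  // midnight sun: sun never sets
            double w0 = Math.acos(cosW0);    // hour angle at sunrise, in radians
            return 2.0 * Math.toDegrees(w0) / 15.0; // the sky turns 15 deg/hour
        }

        public static void main(String[] args) {
            // ~51.5 deg N (London) near the June solstice (declination ~ +23.44 deg)
            System.out.printf("Day length: %.1f h%n", dayLengthHours(51.5, 23.44));
        }
    }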


Great link, thanks!

