Maybe there's just not enough interest? After all, there is good public transportation (especially rail), biking is increasingly popular, and some people simply love the driving experience.
I just opened the Password app for the first time to look at the generator. It seems like the pattern is [a-zA-Z0-9]{6}-[a-zA-Z0-9]{6}-[a-zA-Z0-9]{6}, with exactly one uppercase char and one digit. I don't want to do the maths, but that looks like a lot of removed entropy.
Fully random: 62^18 in that format, or about 107 bits of entropy.
Their approach: ~71 bits per the article (I counted ~73 bits but I’m not using their exact algorithm)
I’d say it’s not too bad. With a good password hashing algorithm you’re looking at nearly 2^100 operations to brute-force their passwords, which isn’t going to be feasible anytime soon. (Even with a crappy hash algorithm it’s still going to be over 2^80 operations.)
And, in this case, that entropy trade-off means the passwords are easier to remember and type in, making it more likely that humans will actually use those passwords.
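A minimal back-of-the-envelope sketch of those figures (assuming a 62-character alphabet, 18 random characters, and a guessed per-hash cost; the actual generator algorithm and hash parameters are not modeled here):

    import math

    # Fully random [a-zA-Z0-9]{18}: entropy in bits.
    charset_size = 62          # [a-zA-Z0-9]
    length = 18                # three groups of six, separators excluded
    print(f"fully random: {length * math.log2(charset_size):.1f} bits")   # ~107.2

    # Brute-force work: number of guesses times the cost of one hash evaluation.
    # The ~2^25 ops per hash is an assumption, not a measured cost of any algorithm.
    entropy_bits = 71          # figure quoted from the article
    hash_cost_bits = 25        # assumed cost of one password hash evaluation
    print(f"work factor: ~2^{entropy_bits + hash_cost_bits} operations")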
Nit: 160 bits of entropy would require 8 bits per character, which is highly unrealistic. 6.0-6.3 bits per character is more feasible given what most websites will accept, which lands you at around 120-126 bits of entropy for a fully random password.
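For reference, a quick sketch of that bits-per-character arithmetic (the 20-character length is implied by 160/8; the 78-symbol alphabet is just an assumed "62 alphanumerics plus 16 common symbols" set):

    import math

    length = 20  # implied by 160 bits / 8 bits per character
    for name, size in [("a-zA-Z0-9", 62), ("a-zA-Z0-9 + 16 symbols", 78)]:
        bits = math.log2(size)
        print(f"{name}: {bits:.2f} bits/char -> {length * bits:.0f} bits total")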
Reasons are actually stated: they explained it is not feasible in the general case.
Once the devs have made clear they don’t plan to make exceptions (as other software does), users should accept it and move on instead of continuing to harass volunteers.
... except VLC implements time-based seeking, which has pretty much the same requirements and is consequently not possible with literally all files either. But both are possible with 99.99% of the video files you will come across.
VLC developers are of course free to reject any feature request, but if they do it by bullshitting their users (and that includes tacking on additional requirements no user actually needs, like perfect support for all formats under the sun), then they will be rightfully called out for it. Throwing a tantrum and citing CoC violations afterwards is not going to improve things.
It's their project, so ultimately they get to choose to run it into the ground, but this kind of behavior is not something I want to support as a user, so I will stay away from VLC, which includes not making helpful bug reports and not donating.
Not on production-critical systems where human lives are at stake. Last Friday is a pretty good example of what comes with ungoverned ‘autoupdate’.
So let's imagine it has to be updated manually. A new threat appears, and since a manual update takes a while, bad actors can exploit it in the meantime, causing similar or even worse disruption, since with malicious intent the impact could be far more severe.
"Immediate across the fleet" and "Entirely manual process" are not the only two options. HN rules say we must assume good faith, but there are obviously options in between, and all of them stop the issue that happened on Friday.
Your argument is that the 0.01% of cases should dictate the actions of the other 99.99%?
I would pick automated testing and staggered fleet deploys. There's no reason this should take more than 1-2 hours in any enterprise, which is a perfectly acceptable window of risk.
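As a rough illustration of what a staggered rollout might look like (a minimal sketch; the ring sizes, soak times, and the deploy/health-check helpers are all hypothetical, not any vendor's actual pipeline):

    import time

    def deploy_to(hosts):
        """Hypothetical: push the update to this subset of the fleet."""
        ...

    def healthy_fraction(hosts):
        """Hypothetical: fraction of hosts still reporting healthy telemetry."""
        return 1.0  # placeholder

    def staged_rollout(fleet, ring_fractions=(0.01, 0.10, 0.50, 1.0),
                       soak_minutes=20, min_healthy=0.99):
        deployed = 0
        for frac in ring_fractions:
            ring = fleet[deployed:int(len(fleet) * frac)]
            deploy_to(ring)
            time.sleep(soak_minutes * 60)          # let telemetry accumulate
            if healthy_fraction(ring) < min_healthy:
                raise RuntimeError("halting rollout: ring failed health check")
            deployed = int(len(fleet) * frac)

Four rings with a ~20-minute soak each stays inside the 1-2 hour window mentioned above, while the first two rings limit the blast radius to a small fraction of the fleet.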
I'm not fully sure what you mean by 0.01% of cases? Where did you get those percentages?
Businesses are under a constant barrage of cyberattacks aimed at stealing data, encrypting it, and then extorting the victim or selling the data. Ransomware payouts exceeded $1 billion last year. And that doesn't include all the damage done besides the payouts.
Edit: Supposedly the global cost of cybercrime is expected to exceed $20 trillion by 2027.
How often do you think RCE vulnerabilities are dropping on enterprise machines that already have security controls in place (firewalls, password policy, software install policy, etc.)?
I understand cybercrime is real; however, I highly doubt that the number of real-time RCE exploits leaked into the wild and exploited within 2 hours is > 0.01% of the updates pushed by CrowdStrike.
This would require a deep dive into the importance of that specific update compared with all the other updates they push, at what frequency, and for what reasons. The two leading causes of ransomware are social engineering and unpatched software, which something like CrowdStrike should be able to protect against.
If there's a new pattern of social engineering/phishing attack, responding to it and identifying those specific patterns might be a question of hours. And if a mass phishing campaign is going on, every minute of delay means more companies and machines get compromised.
If you need automatic updates, then you need to do a risk analysis of what would happen if that system fails.
A typical solution would be to have two machines, one with the automatic updates and a second one without automatic updates that jumps in in case the first one breaks down.
>A typical solution would be to have two machines, one with the automatic updates and a second one without automatic updates that jumps in in case the first one breaks down.
Great, now the other one is still vulnerable and hackers can still steal information from it.
The proper solution is a hardened machine build for critical systems: no internet access, USB disabled, email attachments blocked, etc.
However, that isn't popular, and most orgs would rather accept a day of downtime from this type of outage than the hassle and cost of doing it right.
If it’s all about thinking, then being restricted by the vocabulary of your language(s) might be a limitation. As a bilingual, I was often asked by friends in primary school what language I was thinking in. My answer was that I don’t think in words, I think in images. I later read Edward de Bono's Lateral Thinking. I might be out of context here, but I thought someone might be interested in the book.