
Maybe the IT departments at the affected orgs can take solace in the fact that so many other orgs had issues that the heat is off, but in my opinion this was still a failure of IT itself. There's no reason that update should have been pushed automatically to the entire fleet. If CrowdStrike's software doesn't give you a way to roll out updates to a portion of your network before the entire fleet, it shouldn't be used.


The update bypassed the controls orgs had in place to defer/schedule updates, AFAIK.


I've had trouble nailing down whether that's the case from searching around online. And if it's true, that's absolutely on CrowdStrike. And that behavior should disqualify it from being used on critical systems. I imagine this incident will cause a lot of teams to consider just what can happen automatically on their systems.


It’s definitely the case. See CrowdStrike’s preliminary post-incident review here: https://www.crowdstrike.com/falcon-content-update-remediatio...

It comes down to the nature of “content updates” vs. a full product update. Though you may be right; perhaps they provide controls for those updates too. I’ve never used their software, but it doesn’t sound like it.


It's on CrowdStrike, but it's also on IT for even allowing installation of critical software like this that has a bypass at all. Updates shouldn't even be allowed to bypass IT's safe rollout procedures, at least not without IT signing off on it anyway.


[flagged]


You're living in a different reality. I can't fathom how anybody could legitimately make that claim.

Even if you're defining "critical system" as "critical to humans" and not "critical to the business", then sure, you can say "Airlines aren't critical" and for most passengers, yeah, you're probably right. Most industries aren't critical, so businesses grinding to a halt doesn't matter much for consumers.

But 911 systems were affected, and those are certainly critical to humans. If 911 doesn't work, ambulances and fire trucks can't be dispatched, and people die.

EDIT: Computers attached to hospital beds, including in trauma surgery rooms, were affected. I'm really curious what you think defines a critical system.


One interesting thing I saw is, per a snippet that claimed to be part of CrowdStrike's ToS, it shouldn't have been installed on any of those machines where human life depended upon it (along with nuclear facilities and a few other exceptions). Is there going to be any fallout from people installing it on systems the software wasn't designed for? Did CrowdStrike perhaps know it was being installed on these systems but ignore it, since they were getting paid and it wasn't them violating the agreement?


If a user does something the manufacturer told them specifically not to do, I have a hard time blaming the manufacturer for it. Within an approved use? Absolutely, blame the manufacturer.

but if you shoot yourself in the foot, don't blame the bowyer just because they sold the bow to you.


Supposedly, CrowdStrike sales would pressure companies to have the software installed on every system in their network.


The 911 system itself is critical, sure. I never said it wasn't. When the computer systems supporting 911 went down due to CrowdStrike, those functions were replaced with available backups that were planned for situations like this, e.g. using analog phones and taking notes by hand (just like they used to).

If the system survives (albeit with diminished capacity) loss of a component, then that component is not critical for the system. That's basically the definition of "critical".

Source: https://www.usatoday.com/story/news/nation/2024/07/19/crowds...


How did 911 services go down then? Whatever system caused that, should be by definition critical, imho.


According to the CrowdStrike ToS, sure...


If that's the case, that doesn't change GP's point: if Crowdstrike can bypass your org's controls on rolling out updates to its software, it shouldn't be used.


Didn't they say in their incident report that they have a batched rollout strategy for software updates, but this was a config update and the update path for configs does not have such a mechanism in place?


Ya, so hopefully it's obvious to them that every rollout needs some kind of batching. I get that all devices within one org might need to have the same config, but in that case batch it out to different orgs over 2-3 days.

Maybe the more critical infrastructure and health care orgs are at the end of that rollout plan so they are at lower risk. It's not ideal if one sandwich shop in Idaho can't run their reports that day, but that's far better than shutting down the hospital next door. CrowdStrike could even compensate those one-system shops that are on the front line when something goes down.

Again, better to pay a sandwich shop a few thousand dollars for their lost day of sales than get sued by the people in the hospital who couldn't get their meds, x-rays, etc in time.
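Roughly, all I mean is something like the sketch below (a toy example in Python; the ring names, orgs, thresholds, and the push_update/crash_rate functions are all made up, since CrowdStrike's actual content pipeline isn't public):

    import time

    # Hypothetical rollout rings, ordered from least to most critical,
    # so a bad content update gets caught before it reaches hospitals
    # or 911 dispatch.
    ROLLOUT_RINGS = [
        {"name": "canary",   "orgs": ["vendor-internal-lab"],       "soak_hours": 4},
        {"name": "general",  "orgs": ["sandwich-shop", "retailer"], "soak_hours": 24},
        {"name": "critical", "orgs": ["hospital", "911-dispatch"],  "soak_hours": 24},
    ]

    def push_update(update_id: str, org: str) -> None:
        # Stand-in for the vendor's real delivery mechanism.
        print(f"pushing {update_id} to {org}")

    def crash_rate(org: str) -> float:
        # Stand-in for telemetry (boot loops, BSOD reports) coming back from the org.
        return 0.0

    def staged_rollout(update_id: str) -> None:
        for ring in ROLLOUT_RINGS:
            for org in ring["orgs"]:
                push_update(update_id, org)
            # Let the ring soak; seconds here for demo, hours in practice.
            time.sleep(ring["soak_hours"])
            if any(crash_rate(org) > 0.001 for org in ring["orgs"]):
                print(f"halting {update_id} at ring {ring['name']}")
                return  # never reaches the critical ring
        print(f"{update_id} fully rolled out")

    staged_rollout("content-update-123")

Even a crude gate like this would have stopped the bad channel file at the first ring instead of letting it hit every org at once.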


Generally no one gates content updates, as they happen multiple times a day.


Management decides to use CrowdStrike, not IT, and IT has no way to roll out updates in a controlled fashion.

So not really a failure of IT, at least not for this reason.


In big companies, it's the management of the IT team that makes that decision.

I know, not really the DailyWTF material that the majority of HNers are led to believe.


My comment assumes that the IT department (including its executive) gets to make these sorts of decisions - why wouldn't they?


In many mature orgs, corporate IT rolls up to the CIO and security rolls up to the CISO.

The CISO and security ops will demand to be completely independent from corp IT, for legit reasons, as the security team needs to treat IT as potential insider threat actors with elevated privileges.

They will also demand the ability to push out updates everywhere at any time in response to real-time threats, and per the previous point they will not coordinate these changes with IT or even announce them.

There has always been an implicit conflict between security and usability because of the inherent nature of security deny policies. But security also inherently conflicts with conservative change management policies, such as IT slow-rolling changes through lower environments on fixed schedules and operating with transparency.


> The CISO and security ops will demand to be completely independent from corp IT, for legit reasons, as the security team needs to treat IT as potential insider threat actors with elevated privileges.

I always wondered: why should security ops not be treated as a potential insider threat actor? In fact, if they were compromised, it would be even worse.

Do we need two different security ops that monitor each other? :)


In most clustered systems, you need at least 3 observers, so that a clear majority of them can decide that one observer is not working as expected.

So I guess 5 security OPS teams in different regions of the world, and they can all call a vote if one of the teams is now 'bad' :)
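The arithmetic behind the "at least 3" is just strict-majority quorum: to out-vote f misbehaving members you need at least 2f+1 voters. A toy sketch (everything here is made up):

    def quorum_size(total_voters: int) -> int:
        # Strict majority: more than half of all voters must agree.
        return total_voters // 2 + 1

    def is_declared_bad(votes_against: int, total_voters: int) -> bool:
        # A team is only treated as compromised if a strict majority says so,
        # so a single rogue (or broken) team can't eject a healthy one.
        return votes_against >= quorum_size(total_voters)

    # With 5 regional security ops teams, 3 must agree before one is ejected.
    print(quorum_size(5))          # 3
    print(is_declared_bad(2, 5))   # False: no majority
    print(is_declared_bad(3, 5))   # True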


Generally, act vs. monitor is the segregation of duties I have seen work best: platform/IT ops and engineering (act) vs. security ops (monitor).

For many high-privilege operations there is further segregation of duties on the act side; these can be broken down into plan, authorise, configure, activate, validate, or some rollup of these. Another control is dual control on the act side, since conspiracy is generally quite hard to pull off, especially if it's just for pocket change. Different, of course, if billions of fungible cash are at stake.

People often overcomplicate - simple do/check is often enough.
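E.g. the dual-control check really can be that small; a sketch with hypothetical names, just to illustrate the invariant that the person executing a change cannot be its only approver:

    def can_execute(requester: str, approvers: set[str]) -> bool:
        # Dual control: at least one approver distinct from the requester,
        # so no single person can both plan and activate a sensitive change.
        independent = approvers - {requester}
        return len(independent) >= 1

    print(can_execute("alice", {"alice"}))          # False: self-approval only
    print(can_execute("alice", {"alice", "bob"}))   # True: bob provides the check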


Isn’t that why some organizations have a red team and a blue team?


Security should not, in general, have anything but awooga-awooga red lights and sirens break-glass write/change/delete/shutdown access to prod infrastructure or systems, or indeed anything that could compromise them. I’d argue that access to read or copy sensitive data is almost, but not quite as dangerous without extensive controls and internal monitoring too…

IMO there are no legit reasons except politics, empire building, NIH and toxic relationships for such a crazy state of affairs.


Major purchases tend to be pushed up the ladder. It’s not uncommon for a CEO or non technical director etc to decide what IT systems to use.


In my experience the decisions on any non-trivial IT system rollout are made by entirely unqualified, non-technical execs who are usually swayed by marketing, such as Clownstrike's Super Bowl advertisement.

Technical people will make a recommendation, knowing it's going to be ignored and that the decision's already been made.


IT doesn't steer the ship in banks (and bank-like orgs). IT gets a mandate from the real decision makers that they have to choose something that does x, y, z - see "Regulations which strongly suggest particular software purchases" in the article for examples of x, y, z.

So sure, IT gets to "decide" - between CrowdStrike, SentinelOne, or Palo Alto (and maybe a couple of others). But they don't really have much choice: they can't use an OSS solution, or roll their own, or anything else. They have to pick one of a small number of existing solutions.


Sure, if there's a security exec who made the decision, it may be their fault. I was thinking more of the rank and file, but that's just my bias.


When a ransomware attack is happening, organizations will engage cybersecurity vendors, start PoCs with a bunch of them, and list the pros and cons of each vendor before they negotiate and end up selecting the winner.



