Maybe the IT departments at the affected orgs take solace in the fact that so many other orgs had issues that the heat is off - but in my opinion this was still a failure of IT itself. There's no reason that update should have been pushed automatically to the entire fleet. If Crowdstrike's software doesn't give you a way to roll out updates to a portion of your network before the entire fleet, it shouldn't be used.
I've had trouble nailing down if that's the case from searching around online. And if that's true - that's absolutely on Crowdstrike. And that behavior should disqualify it from being used on critical systems. I imagine this incident will cause a lot of teams to consider just what can happen automatically on their systems.
The distinction is between “content updates” and a full product update. You may be right that they provide controls for those updates too - I've never used their software - but it doesn't sound like it.
It's on CrowdStrike, but it's also on IT for even allowing installation of critical software like this that has a bypass at all. Updates shouldn't even be allowed to bypass IT's safe rollout procedures, at least not without IT signing off on it anyway.
You're living in a different reality. I can't fathom how anybody could legitimately make that claim.
Even if you're defining "critical system" as "critical to humans" and not "critical to the business", then sure, you can say "Airlines aren't critical" and for most passengers, yeah, you're probably right. Most industries aren't critical, so businesses grinding to a halt doesn't matter for the consumers.
But 911 systems were affected, and those are certainly critical to humans. If 911 doesn't work, ambulances and fire trucks can't be dispatched, and people die.
EDIT: Computers attached to hospital beds, including trauma surgery rooms, were affected. I'm really curious what you think defines a critical system.
One interesting thing I saw is, per a snippet that claimed to be part of CrowdStrike's ToS, it shouldn't have been installed on any of those machines where human life depended upon it (along with no nuclear facilities and a few other exceptions). Is there going to be any fallout from people installing it on systems the software wasn't designed for? Did CrowdStrike perhaps know it was being installed on these systems but ignored it since they were getting paid and it wasn't them violating the agreement?
If a user does something the manufacturer told them specifically not to do, I have a hard time blaming the manufacturer for it. Within an approved use? Absolutely, blame the manufacturer.
but if you shoot yourself in the foot, don't blame the bowyer just because they sold the bow to you.
The 911 system itself is critical sure. I never said it wasn't. When the computer systems supporting 911 went down due to crowdstrike, those functions were replaced with available backups, that were planned for situations like this, e.g. using analog phones and taking notes by hand (just like they used to do it).
If the system survives the loss of a component (albeit with diminished capacity), then that component is not critical for the system. That's basically the definition of "critical".
If that's the case, that doesn't change GP's point: if Crowdstrike can bypass your org's controls on rolling out updates to its software, it shouldn't be used.
Didn't they say in their incident report that they have a batched rollout strategy for software updates, but this was a config update, and the update path for configs does not have such a mechanism in place?
Ya, so hopefully it's obvious to them that every rollout needs some kind of batching. I get that all devices within one org might need to have the same config, but in that case batch it out to different orgs over 2-3 days.
Maybe the more critical infrastructure and health care orgs are at the end of that rollout plan so they are at lower risk. It's not ideal if one sandwich shop in Idaho can't run their reports that day, but that's far better than shutting down the hospital next door. CrowdStrike could even compensate those one system shops that are on the front line when something goes down.
Again, better to pay a sandwich shop a few thousand dollars for their lost day of sales than get sued by the people in the hospital who couldn't get their meds, x-rays, etc in time.
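The batching idea sketched above is simple enough to write down. Here is a minimal sketch (all names hypothetical - this is not CrowdStrike's actual rollout machinery): group tenants into waves ordered least- to most-critical, push to one wave, then check health before touching the next.

```python
import time

def staged_rollout(orgs, push, healthy, waves=4, soak_seconds=0):
    """Push an update in waves, halting if any wave reports failures.

    orgs:    list of org identifiers, ordered least- to most-critical
    push:    callable(org) that applies the update to one org
    healthy: callable(org) -> bool, post-update health check
    """
    wave_size = max(1, len(orgs) // waves)
    for i in range(0, len(orgs), wave_size):
        wave = orgs[i:i + wave_size]
        for org in wave:
            push(org)
        time.sleep(soak_seconds)  # let the wave "soak" before judging it
        failures = [org for org in wave if not healthy(org)]
        if failures:
            # Halt the rollout: critical orgs later in the list are spared.
            return {"completed": orgs[:i + wave_size], "failed": failures}
    return {"completed": orgs, "failed": []}
```

With hospitals and 911 centers placed at the end of the list, a bad config fizzles out on the sandwich shops long before it reaches anything critical.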
In many mature orgs, corporate IT rolls up to the CIO and security rolls up to the CISO.
The CISO and security ops will demand to be completely independent from corp IT, for legit reasons, as the security team needs to treat IT as potential insider threat actors with elevated privileges.
They will also demand the ability to push out updates everywhere at any time in response to real-time threats, and per the previous point they will not coordinate or even announce these changes with IT.
There has always been an implicit conflict between security and usability, because of the inherent nature of security deny policies. But they also inherently conflict with conservative change management policies, such as IT slow-rolling changes through lower environments on fixed schedules and operating with transparency.
> The CISO and security ops will demand to be completely independent from corp IT, for legit reasons, as the security team needs to treat IT as potential insider threat actors with elevated privileges.
I always wondered: why should security ops not be a potential insider threat actor? In fact, if they were compromised, it would be even worse.
Do we need two different security ops that monitor each other? :)
In most clustered systems, you need at least 3 observers, so that a clear majority of systems can decide that one observer is not working as expected.
So I guess 5 security OPS teams in different regions of the world, and they can all call a vote if one of the teams is now 'bad' :)
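There's a real mechanism behind the joke: quorum voting, where an odd number of observers lets a strict majority be decisive. A toy sketch (team names are made up):

```python
def quorum_verdict(votes):
    """Return the verdict a strict majority agrees on, or None if split.

    votes: dict of observer name -> bool ("is the watched team bad?")
    """
    needed = len(votes) // 2 + 1   # strict majority: 3 of 5, 2 of 3
    yes = sum(1 for v in votes.values() if v)
    if yes >= needed:
        return True                # majority says bad
    if len(votes) - yes >= needed:
        return False               # majority says fine
    return None                    # no quorum either way

# Five regional teams vote on whether a sixth team has gone rogue:
votes = {"us": True, "apac": True, "latam": True, "mea": False, "sa": False}
```

An even number of voters can deadlock (two against two returns `None`), which is exactly why clustered systems prefer odd-sized observer sets.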
Generally, act vs monitor is the segregation of duties that I have seen work best between platform or IT ops and engineering (act) vs security ops (monitor).
For many high-privilege operations there is more segregation of duties on the act side of things - these can be broken down into plan, authorise, configure, activate, validate, or some rollups of these. Another is dual control on the act side, since conspiracy is generally quite hard to pull off, especially if it's just for pocket change. Different, of course, if $$billions of fungible cash are at stake.
People often overcomplicate - simple do/check is often enough.
Security should not, in general, have anything but awooga-awooga red lights and sirens break-glass write/change/delete/shutdown access to prod infrastructure or systems, or indeed anything that could compromise them. I’d argue that access to read or copy sensitive data is almost, but not quite as dangerous without extensive controls and internal monitoring too…
IMO there are no legit reasons except politics, empire building, NIH and toxic relationships for such a crazy state of affairs.
In my experience the decisions on any non-trivial IT system rollout are made by entirely unqualified, non-technical execs who are usually swayed by marketing, such as Clownstrike's Super Bowl advertisement.
Technical people will make a recommendation, knowing it's going to be ignored and that the decision's already been made.
IT doesn't steer the ship in banks (and bank-like orgs). IT gets a mandate from the real decision makers that they have to choose something that does x, y, z - see "Regulations which strongly suggest particular software purchases" in the article for examples of x, y, z.
So sure, IT gets to "decide" - between CrowdStrike, SentinelOne, or Palo Alto (and maybe a couple others). But they don't really have much choice; they can't use an OSS solution, or roll their own, or anything else. They have to pick one of a small number of existing solutions.
When a ransomware attack is happening, organizations will engage cybersecurity vendors, start PoCs with a bunch of them, and list the pros and cons of each vendor before they negotiate and end up selecting the winner.
Was anyone else surprised how little disruption they personally experienced? I had braced for impact that weekend. But all my flights were perfectly on time, all my banking worked, providers worked, and sites & resources were available.
I don’t know if I somehow just have little exposure to Windows in my life or if there’s an untold resiliency story for the global internet in the face of such a massive outage.
All I can say is THANK YOU to all the unsung heroes who answered the call and worked their butts off. Infrastructure doesn’t work without you. We see you & we thank you!
I was unaffected on my work laptop. One of my coworkers is a long-timer and said when the company first got laptops there was a huge "OMG leave your laptops on overnight" push to make sure updates were applied. I always at least sleep, if not shut down, after work, so I guess I missed out.
I know at least one person who "survived" while her coworker's laptops were down.
My first question was "do you shut your machine off at the end of the day?" She did, and that's probably what saved her - and probably why about half of her office was affected and the other half was not.
IIRC only 5% of Windows machines were affected. So, it is very probable that most people just saw the news but felt no real impact themselves. Some had minor and maybe memorable impact, like an Indian airline issuing handwritten boarding passes.
Crowdstrike took out less than 1% of the global Windows installation base.
But they took out a far larger fraction of installation base in regulated industries. The very industries who are tightly regulated because they are supposed to keep the wheels of the society turning.
Supply chain risks are everywhere, and in regulated industries they are highly concentrated.
I wish that were the case for me! My in-laws had their flight out of JFK delayed by 2-3 days, as did my daughter who was supposed to fly as an unaccompanied minor.
I was flying back to the US from Mexico on United the day after the meltdown. Reading the news, I was obviously quite concerned about how it was going to go (I was traveling with my 10 and 6 year old kids). Amazingly, everything went off without a hitch; not even the slightest delay.
I asked the guy at the luggage counter, and he said the day before was pretty crazy, but they had everything straightened out by the next day.
I had a 7am flight on Delta from LGA to MSP. Seeing all of the blue screens in the airport was pretty surreal and our flight was delayed four hours.
But yeah, other than that, the only issue we ran into was that the Jimmy John’s we stopped at for lunch outside of MSP was slammed because Delta had ordered hundreds of sandwiches for their staff.
I’ve definitely experienced much worse travel disruptions due to normal weather (though obviously we got real lucky compared to some Delta customers).
This is a good writeup, but to be fair it's not just a matter of banking regulations. Basically all big companies are under similar obligations regarding endpoint protection.
Should endpoint protection require kernel level access? At what point does it stop becoming protection and start becoming a liability? Obligatory who watches/protects the watchmen/protector...
Yes, it needs kernel access, given the userspace APIs available in Windows. Period. Not a single person who knows how the tool works and the threats it protects against has said otherwise. Userspace can't disable or tamper with kernel space, but an admin/root process in userspace can disable or tamper with any protection that lives in userspace.
It isn't a status quo; it is the design of the Windows operating system, as well as Linux. macOS does its own thing, but it is somewhat effective because you need to go into recovery before you can disable sysexts as root. Imagine needing to go to the Windows recovery environment to disable drivers; that won't fly. Apple can do that because they control the hardware and software - you rarely need to mess with sysexts as part of troubleshooting as a result.
Unlike normal software development, anti-malware software has to be resilient against all kinds of tampering. The price for having an OS that isn't heavily locked down and tamper-resistant via hardware-enabled checks is having to rely on kernel-mode code to enforce tamper resistance. Evasion is another issue: you can already hook API calls from user space (some EDRs do this), but evading that as a privileged user is trivial. It boils down to how the x86/x64 CPU enforces privilege rings; by design, things that are integrated with the OS and require OS-level privileges and system-wide access must run in the same ring as the OS (ring 0/kernel mode).
There are many ways to tackle this, but I haven't heard of any (even from Microsoft's blogs/proposals after the incident) that won't reduce the capabilities and tamper/evasion resiliency of this security software. If x64 had a "secure world" concept like ARM, for example, that would be different, but it doesn't.
With the current model kernel level access is required. Real security products have to be able to operate above userland. Ideally in the future there can be a layer in between userland and kernel for this sort of thing. Maybe we use some of those extra protection rings?
You could, and in fact this is what Microsoft wanted to do. The EU said that they couldn't.
And the reason why not is simple. Anything that Microsoft thinks is a good thing to add to the API, they'll add for themselves. When the new API is released, their software is released with it. This gives them a competitive advantage over competitors who have to wait for Microsoft to have the idea that they want, and then scramble to implement it after Microsoft does.
The EU is suspicious of this for the simple reason that Microsoft has a several decade history of doing exactly that. Repeatedly. My favorite example being the release of Windows 95 with Microsoft Word available at the same time, and with WordPerfect unable to run. By the time WordPerfect had figured out how to port their software to Windows 95, they were no longer the market leader.
> Windows 95 with Microsoft Word available at the same time, and with WordPerfect unable to run
That is somewhat revisionist history. WordPerfect admitted at the time that they saw OS/2 as the future and were focused on that. Only in hindsight did they realize OS/2 was going nowhere (too bad - it was better than 95) and had to rush to get a WordPerfect for 95 out. Worse for them, they wrote each release of WordPerfect in platform-specific code (mostly assembly), so it wasn't a case of porting to 95; it was a case of starting over mostly from scratch.
Yes WordPerfect lost to Word with 95 - but it was bad decisions on WordPerfect's part. They had opportunity to get WordPerfect on 95 much faster. I don't know if it would have been fast enough, but they didn't even try until it was too late.
The use of platform specific code was a performance necessity at the time, everyone did it. Part of the promise of Windows 95 was that it could run your Windows 3.1 programs. They bent over backwards for a ton of programs, but not WordPerfect. Microsoft also had an early access program to Windows 95. WordPerfect applied for it - and was denied access. After that the OS/2 bet was their only real hope.
The truth is that Microsoft had a long and documented history of using one monopoly to leverage into another. Over and over again they lost antitrust lawsuits, but internally regarded them as speeding tickets on the way to greater monopoly power. This history showed up in court. The internal documentation on the WordPerfect case showed up in the Netscape case, and is part of why Microsoft lost.
It wasn't until the EU started fining Microsoft hundreds of millions of dollars for noncompliance in 2006 that Microsoft's attitude started to change. Now I see them as just normal big guys with a worse-than-average history. But back in the 90s and early 2000s? They EARNED the title of "evil empire".
There is another point to consider here. The state of anti-virus solutions before Microsoft released Defender was horrible (probably still is).
It was full of ad infested solutions, which would crash your computer from time to time.
Defender at least was reasonably performant and tended to be stable.
You could say that since they had access to the kernel source they were better informed, but I guess if there were an API, the provided documentation would solve that issue (not necessarily - not everyone bothers to read the docs).
But then you get back to how to enforce equal and open access for everyone (the EU did try to make Microsoft open the Word file format, but it turned out to be so complicated, and documented only in legacy code, that Microsoft had trouble producing useful docs).
Yes. Defender was legitimately better than the alternatives. In fact no AV at all was better - which is something that I learned from Google's Project Zero.
This is why tech conglomerates are anti-competitive and need to be broken up. There is no reason a leading operating system company should be allowed to also be a word processing, video conferencing, and music-selling company. They will leverage their control of the operating system business to gain unearned competitive advantage in the unrelated markets.
> There is no reason a leading operating system company should be allowed to also be a word processing, video conferencing, and music-selling company.
If I write a new OS how will you force the "word processing, video conferencing, and music-selling" companies to write code for it? If they don't write the above my OS is worthless, but if my OS fails in the market anyway they just wasted a lot of money. This is why OS companies tend to have the other things, their OS cannot exist in a vacuum and the only way to ensure they have those needed tools is to write them themselves.
That only works if you are big enough. If you are BeOS trying to get your new better OS going you don't have the power to make any deals. For that matter Microsoft wasn't big enough, WordPerfect was going after IBM's OS/2.
The case brought to light an Oct. 3, 1994 memo from then-Microsoft CEO Bill Gates, who indicated that Microsoft should withhold namespace extension APIs in Windows 95 from its competitors, WordPerfect and IBM, in order to gain market advantage for Microsoft Word.
In other words, your revisionist history is wrong. Microsoft really was big enough. We know that because WordPerfect asked for early access to Windows 95. It was Microsoft who turned them down. (And no, I don't believe Gates's testimony about security. I think that Gates was bamboozling the judge, and the judge bought it.)
(I had misremembered which court case brought that memo to light. But regardless, it was obvious to the whole industry at the time. Incidentally this memo came while Microsoft was under a consent decree signed on July 25, 1994 with the Justice Department to not try to maintain their monopoly by tying specific products to Windows. Technically, they didn't here, but they were walking the line. They crossed the line with IE though, and that later resulted in the Netscape loss.)
As for BeOS, the question was how a LEADING operating system company was supposed to cope with getting software for the next version of their OS. No matter how many good things we can say about BeOS, they never got to the point of being a leading operating system company.
The way I see it, Microsoft sells some antivirus software, and also gets to decide who is allowed or not to compete with their antivirus software, by providing or denying access to the API. Obviously unfair.
I think anti-virus should be part of the core OS. This does kill all third-party vendors - good riddance to most of them; sorry if there is one that isn't evil (I'm not aware of any).
Once the AV vendors exist, killing them, especially by Microsoft, is clearly anticompetitive.
If you could prevail on a government to decide that, maybe it could work.
One thing I see, is that AV has a component of maintaining a DB of signatures of bad things. This does not seem at all the job of the core os. Would the Debian team maintain such a DB?
It happens all the time that big companies take something in-house and kill a market. The car radio market is all but dead now that manufacturers ship decent radios.
Interesting! I guess there's no way to fix this with further regulation either, since it would be some work to prove MS had access to the API contracts before they released them.
The ultimate lesson then is to stop using MS stuff.
I think it's kind of ridiculous to then blame the regulators for the fact that Microsoft decided not to go ahead with a more competitor-friendly design.
The fact that Microsoft abandoned it as soon as a regulator pointed out how anti-competitive the design of the API was makes you wonder what Microsoft's true intention was. To me that implies the anti-competitive design was its main feature and to Microsoft it would've been pointless to continue without it.
Maybe. Not working at MS I can't say what their reasons were.
But another way of looking at this would be that perhaps they wanted to be the beta testers of the API themselves because opening it up would have been a maintenance liability for the company. Microsoft tends to be pretty good about backwards compatibility in ways that Apple is not.
We also don't know that these APIs were cancelled, they may make it into future versions of windows.
Indeed, it's executed via a JIT on something like a VM. However, it can still make your system quite dysfunctional if, e.g., all filesystem or network calls are blocked.
The version of the CrowdStrike sensor that caused kernel panics on RHEL/Rocky was using eBPF. It living in eBPF doesn't mean it can't cause system instability.
And as mentioned elsewhere, an eBPF module behaving badly but in valid ways can still make your system pretty unusable.
Unless the OS is locked down to the point that even its owner cannot do that. Actually, this is something I like about Operational Technology: you run into a lot of doodads where the elevation process requires turning a physical key, and the device's main functionality is disabled while it is in service mode. Of course, the doodad has to be engineered to operate reliably, perpetually, for years, and you can't really expect that from a desktop computer.
I have said for 20 years now that Microsoft Word should have a check on startup: if the current user is an administrator, it should put up a message that administrators are not allowed to use a word processor - log in as someone else. This one change would solve a lot of problems.
Even on home machines where no user has a password, having to do something special to get into administrator mode will stop several attacks just because people will slow down and ask.
That's pretty much what Microsoft tried with the UAC prompts, and that was fairly universally disliked. Not that I disagree with you, running as admin by default is a terrible practice, but it's a tough sell to the general public
Administrators can and should be able to do anything and everything, that is literally an administrator's job description.
Also, if you want to stop everyone from using administrator accounts, the simplest way is to not have the Windows installer/OOBE setup make an administrator account first.
Windows has a built-in Administrator account already, not unlike root in Linux; there is no reason (other than tradition and absolute convenience) the Windows installer/OOBE setup needs to make an administrator account for the user installing/setting up.
Would that actually have a positive effect? Running malicious software in the only user's context can already cause maximum damage: https://xkcd.com/1200/
This would just result in more UAC prompts and thus annoyed users who get taught to click on "Allow" whenever a dialog pops up.
> At what point does it stop becoming protection and start becoming a liability?
If such outages were more frequent, then it could definitely become a liability. But such risks have to be balanced against the risk of being compromised and leaking customer data and other confidential trade secrets, and the risk posed by the latter one is far higher, not to say it's also more common.
How else would you monitor a Windows box? The EU won't allow Microsoft to lock down their kernel and provide a macOS-type solution with APIs for trusted publishers.
Basically all B2B companies are under some sort of obligation to have endpoint protection.
All of these requirements essentially become transitive across a company's entire supply chain.
* Big bank needs to comply with X, so do all of their vendors.
* Vendor wants to sell to big bank, so they comply with X. They also need all of their vendors to comply with X.
* So on and so on.
----
Ultimately, there are a lot more options than CrowdStrike, but this is a case of "nobody gets fired for buying IBM". Even if CrowdStrike isn't the "best", it's good enough. Because its use is so widespread, an issue with it often affects dozens and dozens of other companies when you're affected. One of the great things about this effect is that everyone "goes down at the same time", so people don't tend to point fingers at you. In fact, they might not have any clue you're down, because some other, more critical system is down internally and preventing them from accessing you.
I remember a similar situation happening a few years back. A big outage hit large parts of the internet. A pretty major part of our app got taken offline with this outage. This was a known risk and something that we accepted. We expected some backlash and inquiries if this situation should ever happen. It was a calculated risk to dedicate more effort towards building customer-facing value.
I think we got one inquiry. It was basically just an FYI. This person had so many things broken on their end that "one more thing" being broken was just a drop in the bucket.
Yes, this is a good summary of the situation. As a matter of fact, I guess there were quite a lot of systems and services that went down even though they were not using Crowdstrike themselves, but some part of their cloud supply chain was. I see Salesforce and Adobe were impacted in some way, probably due to the collateral Azure disruption.
On the other hand, count me surprised at the sales prowess of Crowdstrike, I did not know how big they were.
Any explanation that doesn't boil this down to "software required by corporate policy checklist not written by technical team" is almost certainly missing something here. This is almost definitionally policy capture by a security team and the all too common consequences that attach.
The section that goes over why this wasn't federally pushed is largely accurate, mind. Not all capture is at the federal level. It's why you can get frustrated with customer support asking you a checklist of questions unrelated to the problem you called in about.
And the super frustrating thing is that these checklists are often very effective for why they exist.
This would be the third incident I'm familiar with of a file of entirely zeroes breaking something big.
Folks, as much as we wish it weren't true, null comes up all the damn time, and if you don't have tests trying to force-feed null into your system in novel and exciting ways, production will demonstrate them for you.
Never assume 'zero' (for whatever form zero takes in context) can't be an input.
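That advice is cheap to act on: force-feed all-zero bytes into your parser in a test and assert it fails loudly, rather than letting the zeroes flow downstream. A minimal sketch with a hypothetical length-prefixed record format (not any real product's format):

```python
import struct

def parse_record(data: bytes):
    """Parse a toy record: 4-byte big-endian payload length, then payload."""
    if len(data) < 4:
        raise ValueError("truncated header")
    (length,) = struct.unpack(">I", data[:4])
    payload = data[4:4 + length]
    if len(payload) != length:
        raise ValueError("truncated payload")
    if length == 0:
        # A zero-length record is syntactically "valid" bytes but
        # meaningless content: reject it explicitly instead of letting
        # an empty payload flow deeper into the system.
        raise ValueError("empty record")
    return payload

# The incident-shaped test case: a file of entirely zeroes.
zeroes = bytes(64)
```

The point isn't this particular format; it's that "a file of all zeroes" should be a standing fixture in the test suite of anything that parses files.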
> This created a minor emergency for me, because it was an other-than-minor emergency for some contractors I was working with.
> Many contractors are small businesses. Many small businesses are very thinly capitalized. Many employees of small businesses are extremely dependent on receiving compensation exactly on payday and not after it. And so, while many people in Chicago were basically unaffected on that Friday because their money kept working (on mobile apps, via Venmo/Cash App, via credit cards, etc), cash-dependent people got an enormous wrench thrown into their plans.
I never really thought about not having to worry about cashflow problems as a privilege before, but it makes sense, considering having access to the banking system to begin with is a privilege. I remember my bank's website and app were offline, but card processing was unaffected - you could still swipe your cards at retailers. For me, the disruption was a minor annoyance since I couldn't check my balance, but I imagine many people were probably panicking about making rent and buying groceries while everything was playing out.
The really admirable thing about this is that Patrick acknowledged that it was "an other-than-minor emergency" for the contractors and took steps to ensure that they were paid rapidly. In a similar situation many people would have shrugged and taken an attitude of "sorry, bank's down. I'll pay you when it comes back up."
The EU DORA regulation (Digital Operational Resilience Act for Financial Entities) has explicit provisions to avoid concentration risks. I heard a story that a bank was forced to use Google Cloud, because two other banks were already on AWS and Azure.
Did it really hit banks hard? Core banking systems don't run Windows; they typically run on mainframes on IBM z/OS. I know it hit the financial firms hard and knocked out their trading systems, but I don't know of any major bank losing their core banking system due to CrowdStrike.
Australia got hit hard because they modernized their bank systems and now most are cloud based. I am not aware of any major bank running their core systems on the cloud or on windows.
> Configuration bugs are a disturbingly large portion of engineering decisions which cause outages
I work in medical device software -- the stuff that runs on machines in hospital labs, ER's or at patient bedside.
The first "ohmigod do we need to recall this?" bug I remember was an innocuous piece of code that was inserted to debug a specific problem, but which was supposed to be disabled in the "non-debug" configuration.
Then somehow, the software update shipped with a change to the configuration file that enabled that code to run. Timing-critical debug code running on a real-time system with a hard deadline is a recipe for disaster.
Thankfully, we got out of that pretty easily before it affected more than a small handful of users, but things could have been a lot worse.
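One inexpensive guard against that class of bug is to treat the shipped configuration itself as a test fixture: parse the exact config that goes out the door and assert every debug path is off. A sketch with hypothetical key names and format, not the actual device's config:

```python
RELEASE_CONFIG = """
log_level = info
enable_timing_debug = false
watchdog_ms = 50
"""

def parse_config(text):
    """Parse simple 'key = value' lines into a dict of strings."""
    cfg = {}
    for line in text.strip().splitlines():
        key, _, value = (part.strip() for part in line.partition("="))
        cfg[key] = value
    return cfg

def assert_safe_for_release(cfg):
    # Fail the build, not the device in the field, if any debug code
    # would be enabled by the configuration as shipped.
    forbidden = [k for k, v in cfg.items()
                 if k.startswith("enable_") and k.endswith("_debug")
                 and v != "false"]
    if forbidden:
        raise AssertionError(f"debug flags enabled in release config: {forbidden}")
```

Because the check runs against the real release artifact, a config change that silently re-enables debug code fails in CI instead of on a timing-critical system.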
The article specifically mentions US banks and as I personally didn't see any disruption over here - is there (anec)data on how popular CrowdStrike is in the US vs the EU?
It might be a question of what type of disruption it is. Transfers and web banking are likely to work; branch offices and ATMs might have issues. So if you try to do anything in person, or negotiate anything with workers at the bank, there could be issues.
I feel like this only impacted the larger banks. I've heard absolutely no explosion noises coming from smaller institutions. The effect of regulations and their enforcement is felt differently across the spectrum.
There is something to be said for a diverse banking industry when it comes to this kind of problem. Also, this event is a powerful argument for keeping the core systems on unusual mainframe architectures. I think building a bank core on windows would be a really bad choice, but some vendors have already done this.
You don't blame your car's manufacturer if it won't start because the monitoring dongle your insurance provider sent you in exchange for a discount drained the car's battery.
I think that's the wrong analogy. A more correct one would be "Should we blame a car company for a broken engine, that was modified after it was sold to you?".
A kernel level driver from a 3rd party is something that you willingly add to the OS, it wasn't there.
Just because Windows allows you to do it doesn't mean you should.
I mean, you can apply some dangerous mods to your car's engine, but you probably shouldn't, and if you do, it's your responsibility, not the car company's.
If you had a support contract with Microsoft for your Windows installs and CrowdStrike is breaking your system they'll tell you to go talk to CrowdStrike, yes.
Ok, I didn't realize that CrowdStrike was more of a competitor, or maybe a hacky add-on (like a NOS kit). I was under the impression that it was something more in cooperation with Microsoft (not owned by them or anything) in terms of market support.
CrowdStrike absolutely is a competitor to Microsoft. Microsoft sells licensed software in the exact same market as CrowdStrike. Microsoft even sells Microsoft Defender for MacOS and Linux. They're direct competitors.
Right, so adding the NOS is making a third party addon that changes the behavior of the product outside the original designs of the product.
And installing a third-party kernel module (driver) is...a third party addon that changes the behavior of the product outside of the original designs of the product?
Honda didn't build the engine with NOS in mind. Microsoft didn't build the NT kernel for CrowdStrike. It is a third-party modification to the system the user chose to add on after taking delivery of the product that ultimately changes the behaviors of the system.
Arguing like Microsoft is liable for CrowdStrike's bad software is like arguing Honda is responsible for that NOS kit.
If I write a buggy kernel module that instantly kernel panics my Linux system, is Linus Torvalds responsible? Or am I responsible for the software I wrote?
The analogy falls apart because Microsoft's platform is meant to integrate with third party software, that's a feature of the system. If the "feature" can take down the system it's a fault of the system.
If you zoom out: Microsoft has a system, and a feature allowed on that system, signed by a cert, can take down 8.5 million devices of your system. That is a fault of your system.
A counter example of how to architect the thing? MacOS, Linux.
Anyone can make a program that can crash MacOS or Linux especially when you convince the user to install it with very high permissions. It is really not too difficult. Heck, Linux comes with the ability to really mess up your system out of the box. Give it a try:
sudo rm -rf --no-preserve-root /
Gee, why would they possibly ship such malware on their system, something that could break the whole thing just hanging around. Would the distro developers be responsible for the damage caused if you decided to run that command?
If you zoom out: Linux has a system, and a feature allowed on that system, signed by a cert, can take down any Linux machine. That is a fault of your system.
> Microsoft's platform is meant to integrate with third party software
Sure, but Microsoft offers no warranty on any third-party software, just like Honda offers no warranty on third-party modifications made to your car. Which, yes, it's normal and fine to use non-OE equipment on your car, but if you swap OE equipment for non-OE equipment they're no longer going to warranty that equipment. It's not like every component of your car is welded together.
Going back to your original comment here, CrowdStrike was not in any way a supplier of parts to Microsoft. This is why Microsoft shouldn't be held responsible in the way auto makers are liable for parts from their suppliers. And even then, given how auto parts suppliers' contracts are often written, the final liability may well lie with the parts suppliers! It's not like Honda went under with the Takata airbag recall; Takata was negligent and didn't build to the standards and requirements its contracts required.
Microsoft isn't going to warranty Chrome having a security issue with their JS sandbox or Photoshop corrupting a file. Neither is Apple if it happens on MacOS.
> For historical reasons, that area where almost everything executes is called “userspace.”
It's an old term at this point, but I don't think the reasons for it being called "userspace" have changed or become outdated since then, so I wouldn't call them historic per se.
Things have gotten messier with virtualization, containerisation, hypervisors etc. The internet loves to produce pedants to argue the post should go into the finer points of these even when it's not relevant to the message. And so people like the author have a defensive reflex to throw in some language to bounce the pedants away.
Why is it called "userspace" when all it runs is some Docker containers hosting a web frontend's server, and no human being ever telnets into it? Where's the "user" in that story?
Where is the "user" when the machine is a Windows box stuffed behind a façade wall that displays airport directions, notifications, and ads on rotate?
The takeaway from this article seems to be: buy crowdstrike shares, because major corps are unable to make any changes, and will continue to pay licensing fees for this "service" for the foreseeable future.
This is going to crush their sales pipeline and lead to at least a few attempting a migration off. Crowdstrike is unlikely to go out of business, but this is not a good time to buy.
Safe Harbor: Don't follow random internet commentators opinions on public markets. This is just an opinion and not advice.
I disagree. Long term, the fundamentals of CRWD remain intact.
Endpoint protection is still a critical need no matter what - for every bug like CRWD's, there's always a company you can point to whose operations were shut down due to an attack.
CRWD skimped on QA and customer support, but long term there aren't many other vendors that can provide a similar service, and CRWD is large enough to pull a PANW and M&A into entirely new segments (eg. DSPM with Flow Security, Observability/Data Lake with Humio, ASPM with Bionic) along with greenfield category makers like Charlotte AI for AI Security and AI EDR.
There will be short term pain for CRWD's Windows endpoint business with churn to MDE, SentinelOne, Tanium, etc but they have enough dry powder and a diversified security portfolio that they can safely recover within a year at most.
> crush their sales pipeline
With CRWD sized companies, most of their revenue comes from multi-year contracts and renewals.
They'll probably have a decently large layoff in the sales org, but enterprise sales tends to be fairly stable due to contract sizes along with riders about liability
That depends what sort of timeline you're looking at. I wouldn't be surprised if the price fell more, but the markets are forward looking and long term they're a key player in the space.
Just spitballing, but I think the lawsuits will take years to come to any conclusion, and in the meantime CrowdStrike will continue to be paid and make a profit. And the conclusion is not really predictable.
I'm still amazed how the blame shifted from Microsoft to CrowdStrike. Yes, the CrowdStrike update caused this -- but applications fail all the time. It was Microsoft's oversight to put it on Windows' critical path.
And banks/airlines etc were hit hard because their _Windows_ didn't boot, not because of an application crash on a perfectly working Windows.
The application (Crowdstrike) was part of Windows' booting process.
Windows cannot simply "skip" failed drivers. Say the CrowdStrike driver failed as a one-time thing, and Windows skipped it instead of retrying; that leaves the endpoint unprotected, a ransomware attack happens, and we'd all be saying the opposite now.
This is a high-impact ability Windows offers to applications - and applications should take responsibility and treat it as such.
I spoke to another EDR lead I know - they said they had provisions in place to read the dump if boot crashed, check if it was due to their driver and skip it if it was (and then send telemetry after startup so that it can be fixed, probably). Crowdstrike should have done the same.
One more thing to note is that we cannot say Windows shouldn't provide this ability - that becomes an anti-trust monopoly, because MS themselves are a competitor in this space.
The difference is that if windows does the skipping then you probably don't find out until its too late, if the application does the skipping there is the opportunity to set up alerting so you can fix whatever went wrong.
Do you mean that the skip would be manually approved after telemetry is sent and folks on-call paged? Then that sounds like it could be viable and a good idea yes.
But there's always a chance that the skipping mechanism could break as well. And there must be some form of networking available to be able to send that telemetry and ask for approval.
Exactly! On skipping mechanism breaking - I mean, anything could break. Boils down to design and testing like all things.
One change - this approval and telemetry doesn't happen during the boot loading process. It's just logged and skipped.
Once bootup is done, the EDR app auto starts, checks logs for anomalies and sends telemetry over whenever network is available (it usually is, because they update malware signatures etc frequently). Someone at the company gets paged, they fix and the process continues.
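A minimal sketch of that flow, with entirely invented names (no real EDR exposes exactly this API), might look like:

```python
# Hypothetical sketch of the self-healing flow described above: after boot,
# the vendor's user-space service inspects the last crash record, disables
# its own driver if it was the culprit, and sends telemetry so someone gets
# paged. All names here are made up for illustration.

def recover_after_boot(last_crash, driver_name, disable_driver, send_telemetry):
    """Decide what to do based on the previous boot's crash record.

    `disable_driver` and `send_telemetry` are injected callbacks standing in
    for OS- and vendor-specific machinery.
    """
    if last_crash is None:
        return "healthy"                      # previous boot was clean
    if last_crash.get("faulting_module") != driver_name:
        return "crash elsewhere"              # not our bug; stay enabled
    disable_driver(driver_name)               # stop crashing the next boot
    send_telemetry({"event": "self_disabled", "crash": last_crash})
    return "self-disabled"

log = []
print(recover_after_boot(
    {"faulting_module": "edr.sys", "code": "PAGE_FAULT_IN_NONPAGED_AREA"},
    "edr.sys",
    disable_driver=lambda name: log.append(("disable", name)),
    send_telemetry=lambda evt: log.append(("page_oncall", evt["event"])),
))
```

The key design choice is that the decision is made by the vendor's own code after a successful (degraded) boot, so an alert can reach a human instead of the machine silently losing protection.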
Windows could sure handle this kind of error better, but IMHO it would be a mistake to require Microsoft to absolutely block any path Windows could be crashing due to third party software.
We'd end in a situation similar to Mac OS where there's a single gatekeeper and whole industries are subjected to the will of the platform owner.
Enterprises have chosen Windows because of that flexibility and control, while having a business partner they don't get with linux. If anything the blame should fall on them for getting hosed even as they fully had the means to avoid that situation.
I don't think "Microsoft should lock down Windows so hard" is the solution we want here. I don't want my desktop OS to be a walled garden like iOS is. I want to be able to install software on it that does anything I need to be able to do -- and yes, having that capability to run software at the lowest possible level in the OS does also mean that that software has extra responsibility to be well-behaved, as the OS can't protect the system from it. But I still would rather have that option than not have it (and also I wouldn't use CrowdStrike).
How did Microsoft put it on the Windows critical path? (Informational question—I’m not following the issue super closely, but I thought CrowdStrike was a third-party system. Crowdstrike was wrong to put so much code in the kernel. Microsoft was reportedly legally bound to provide this access and allow third-party code to run in the kernel.)
Microsoft added a feature to Windows that allows specially-signed antimalware drivers to be loaded extremely early in the boot sequence and be marked as non-optional. The idea is to give antimalware drivers the opportunity to load first, before anything else has had the chance to start.
Furthermore, if a driver is marked as optional and crashes, Windows can reboot with that optional driver disabled next time, preventing infinite crash/boot loops. Obviously that's no good if your antimalware driver gets disabled, so they can mark theirs as "required." Obviously in the CrowdStrike case, we got the worst of both worlds.
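The interaction of those two rules can be sketched with a toy model (all names invented; Windows' actual boot logic is of course far more involved):

```python
# Toy model of the optional-vs-required boot-start driver policy described
# above. Names and data shapes are invented for illustration only.

def boot(drivers, disabled):
    """Attempt one boot. Returns 'ok' or the name of the failing driver."""
    for name, (required, healthy) in drivers.items():
        if name in disabled:
            continue
        if not healthy:
            return name  # this driver crashes the boot
    return "ok"

def boot_with_retry(drivers, max_attempts=3):
    disabled = set()
    for attempt in range(1, max_attempts + 1):
        failed = boot(drivers, disabled)
        if failed == "ok":
            return f"booted after {attempt} attempt(s), disabled={sorted(disabled)}"
        required, _ = drivers[failed]
        if required:
            # A "required" antimalware driver can't be skipped: crash loop.
            return f"crash loop on required driver {failed!r}"
        disabled.add(failed)  # optional drivers get disabled for the next try
    return "gave up"

# An optional driver failing just gets disabled on the retry...
print(boot_with_retry({"disk": (True, True), "gpu_tool": (False, False)}))
# ...but a failing driver marked "required" loops forever.
print(boot_with_retry({"disk": (True, True), "edr": (True, False)}))
```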
Microsoft is not who made the decision to put this on Windows' critical path; CrowdStrike was. Nothing stops you from running whatever dodgy third-party kernel modules you like on Linux or FreeBSD and they could easily cause the same sort of problem.
In fact, CrowdStrike has taken down Linux systems in much the same way in the past year (in April I think). It's just that the impact was less widespread.
Partially agree. Linux yes, but *BSD systems have microkernel architecture, so must be more resilient to failures of one of the components. Although I have no idea whether the full system would boot either, I'm pretty sure it could partially load, give more information to user, and make it easier to fix.
To be fair, AFAIK the CrowdStrike driver was WHQL-certified. The loophole is that the driver loaded files at runtime, which made it impossible to predict every failure scenario.
Maybe this is the loophole that needs closing. You can't claim a driver is certified for Windows if the manufacturer can push arbitrary files that change its behavior. Especially if that manufacturer has sloppy development practices.
I understand that a primary goal of endpoint monitoring software is to be able to quickly react to new threats, and that the turnaround time for Windows certification is surely unacceptable in this scenario, but this functionality can never be allowed to jeopardize the stability of the system it's supposed to protect. So it's ultimately on Microsoft to fix this for their users.
Ironically, this is exactly the failure pattern that the changes in Chrome extensions to manifest v3 try to prevent. You can't provide a guarantee to the end-user of pre-vetted safety when the application is downloading and executing arbitrary code from a third-party source. That's like expecting a static code verifier to prevent all runtime errors.
It is, perhaps, a guarantee that no vendor should be expected to make.
> You can't provide a guarantee to the end-user of pre-vetted safety when the application is downloading and executing arbitrary code from a third-party source.
So a web browser can't be trusted or certified, ever. Unless JavaScript is disabled?
Correct, and I should have been more clear. By the nature of what they do, Chrome extensions operate outside the sandbox designed to make attacking the underlying operating system running the browser very hard.
Sandboxing is such a way to attempt to enforce a guarantee (modulo sandbox bugs, of course). Since extensions aren't entirely in the sandbox, vetting and signoff are supposed to provide the added assurance of security the sandbox can't provide. And those assurances are hollow when the vetted extension is running arbitrary code from a third-party source.
The article states that Microsoft HAD to allow CrowdStrike to run in kernelspace because of EU law; otherwise MS would have a monopoly on kernel-level security solutions / integrations.
They probably had to, in the same way that banks had to use CrowdStrike. Much as it's easy for banks to say "we use CrowdStrike, like everyone else" rather than implement a bespoke and accountable framework for risk assessment and mitigation for every type of endpoint use case (and argue that case to both the auditor and regulator), in this case it's easier for Microsoft to say "see, they can run in kernel space" rather than provide a bunch of API functions that achieve what's needed, convince all third-party vendors to use them, and put in place a process to convince an auditor that Microsoft security software will never use any knowledge or functionality from the OS outside this.
I guess I don't think that's the sole reason, as I think the incentives would still be in place even if Microsoft authored security software did not run anything in kernel space.
You're spilling cheap propaganda. Microsoft likely never had[0] an appropriate userland-level API in place and them blaming the EU should not be repeated by someone calling themselves a journalist.
[0] https://www.youtube.com/watch?v=EGttFWntctU - I need to state here that I do not possess the level of knowledge the author of video presents and therefore am unable to confirm findings included in the video
And we're back to Microsoft -- they are responsible for not having a proper way to handle such third-party apps, nor did they maintain a process and controls to prevent such rogue breaking updates.
Let me exaggerate a bit to show how bad that analogy is:
Let's say I've developed a laptop that bricks whenever you open a website with incorrectly formatted HTML.
Not sure how to adapt your bike analogy to this... Let's say you made a bike that's intended to be ridden outdoors, but breaks down whenever the user sits on it indoors. Yeah, no one is supposed to ride it indoors. Not sure it's the best analogy though.
UPDATE: let's say the bike breaks down completely whenever it's ridden in the rain.
Same with Linux, yes; I never said Linux is any better in this regard than Windows. At least it's free, and no warranty is given. But if Red Hat had failed the same way, I think Red Hat Inc would bear the blame just as well.
PS: I believe BSD-based systems would be more resilient because of microkernel architecture.
Isn't corporate malware by definition on the "critical path"? The article outlines the reasons why that jank runs in kernel space, and why MS is unable to "downgrade" it to userspace.
This is the comment I expected, begging to hand over your freedom to run software to a big corporation.
If you replace parts in your BMW and put in some garbage or incompatible parts, it's your fault if it doesn't run.
You'd expect to sue your mechanic if he messed up, and for him to cover the full cost. For some reason people do not expect CrowdStrike to pay for their stupidity, which is the root of the problem. Nor the management that installed CrowdStrike without due diligence.
But it wasn't some garbage part in a car, it was an app. And apps fail all the time; the OS is expected to handle that, the same way a car is expected to handle rain, for example.
The EU's rules are that Microsoft can't hoard APIs away from competitors, not that they have to give competitors a kernel driver SDK. If Microsoft says Windows Defender needs a kernel driver, then CrowdStrike gets to ship a kernel driver, too.
Microsoft, interestingly enough, is working on a project to add an eBPF[0] runtime to the NT kernel. If they were to use this for their own security products then I doubt the EU would prohibit them from transitioning third-party security products to eBPF programs. Antitrust and competition law do not care about specific technical measures competitors use to compete, just that dominant companies are not shutting competitors out of markets.
[0] Formerly "extended Berkeley Packet Filter", eBPF lets you run safety-verified code in kernel space. Notably, the verifier isn't just a signing check; it can actually ensure the code won't crash the kernel directly.
Yes and no. As others have pointed out above, it is factually correct that they were forced by the EU to give access to kernelspace. However, it is also true that the only reason for that was that _they_ were using kernelspace for the same things (instead of creating a framework and API into the features needed).
Microsoft didn't write the Falcon sensor software nor did they put it in the kernel. In fact, Microsoft has been shouting to the heavens trying to shift the blame from CrowdStrike onto the European Commission, because they want people to irrationally hate antitrust so they can turn Windows into shitty iOS and monopolize the security market (and applications market) for it.
Furthermore, Microsoft does actually have some rules regarding what you can and can't put into a signed kernel driver. Specifically, they won't sign kernel code unless they've seen and tested it first. CrowdStrike deliberately circumvented this rule by implementing their own configuration format - really, just a fancy way of loading code into the kernel that Microsoft doesn't have signing control over.
If there is blame to be had here for Microsoft, maybe it's that their kernel code signing program doesn't scrutinize third-party configuration formats hard enough. I mean, if you sign a code loader, you're really signing all possible programs, making code signing irrelevant. And configuration is more often than not, code in a trenchcoat. It's often Turing-complete, and almost certainly more complicated than the actual programming languages used to write the compiled code being signed off on.
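The "configuration is code in a trenchcoat" point can be made concrete with a toy example (entirely invented format; the sha256 below stands in for a code signature):

```python
import hashlib

# Toy illustration: the "signed" artifact is a tiny interpreter for an
# invented config format. Its hash (stand-in for a code signature) never
# changes, yet unsigned config bytes fully determine what it does --
# including whether it crashes.

INTERPRETER_SOURCE = b"""
def run(config):
    acc = 0
    for op, arg in config:
        if op == "add":
            acc += arg
        elif op == "div":
            acc //= arg   # arg == 0 crashes, like a malformed channel file
    return acc
"""

def signature(blob):
    # What the signer actually vouches for: these bytes, nothing more.
    return hashlib.sha256(blob).hexdigest()

namespace = {}
exec(INTERPRETER_SOURCE, namespace)
run = namespace["run"]

signed = signature(INTERPRETER_SOURCE)  # fixed once, at signing time
print(run([("add", 4), ("div", 2)]))    # -> 2, with a benign "config"
try:
    run([("add", 4), ("div", 0)])       # a bad "content update"
except ZeroDivisionError:
    print("crashed; signature was still valid:", signed[:8])
```

Signing the interpreter once effectively signs every behavior any future config can express, which is exactly why a signature on the loader says nothing about crash-safety.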
But at the same time I imagine Microsoft tried this and got pushback. That might be why they feel (incorrectly) like they can blame the EU for this. Every third-party security solution does absolutely unspeakable things in kernel space that no one with actual computer science training would sign off on, using configuration to wrest signing control away from Microsoft. Remember: CrowdStrike is designed to backdoor Windows systems so that their owners know if an attack has succeeded, not to make them more secure from attacks in the first place. Corporations are states[0], and states fundamentally suffer from poor legibility: they own and operate far too much stuff for a tribe[1] of humans to meaningfully control or remember.
The problem is that we have two different entities that all have the ability to stop this madness. When states run into this situation, they impose "joint and several liability", which means "I don't care how we precisely assign blame, I'm just going to say you all caused it and move on". In other words, it's Microsoft's fault and it's CrowdStrike's fault.
[0] ancaps fite me
[1] Maximally connected social graph with node degree below Dunbar's number.
> because they want people to irrationally hate antitrust
One only needs to look at what's happening with Google's privacy sandbox to know the perils of antitrust with regard to introducing new interfaces. Even though Google has offered new interfaces and APIs that they themselves intend to migrate to (and take a ~20% revenue reduction), they've attracted the scrutiny of regulators who claim that this is a way of locking out competitors in the advertising space.
> [0] ancaps fite me
This part is simply inciting a flamewar, and something that you can do without in the spirit of the website guidelines[1].
It's important to remember that every other browser dropped third-party cookie support years before Chrome did. Google dragged their feet on it until they could come up with a solution that would give Google the same level of tracking, because Google is an advertising company. So the competition authorities are telling Google - and only Google - that they can't drop third-party cookies anymore.
I've never actually heard anyone claim Privacy Sandbox[0] APIs would give third-party ad networks the same level of tracking as Google. But I imagine even if they did, the APIs would probably be a poor fit for competing ad networks, in the same way that, say, the iOS File Provider APIs are a terrible fit for Dropbox[1].
There are three different ways you can introduce a new standard or interface:
- You can go to or form a standards body with all the relevant market players and agree on a technical specification for that interface. This is preferred, and it's how the Web is usually done.
- You can take a competitor's interface people are already using and adopt that. This is how you get de-facto standards, and while they might have loads of technical problems[2], none of them give you an unfair market advantage.
- You can make your own interface and force competitors to adopt that. You get all the technical problems of a de-facto standard, but those are all problems your competition has to deal with, not you.
The difference is a matter of market advantage. Out of all the major browser vendors, only Google has dominance in online marketing. Microsoft and Apple would like to have a piece of that pie, but they all dropped third-party cookies without tying it to their own competing standards that they wanted to force other people to use.
[0] Hell of an Orwellian name
[1] For example, if you use Dropbox as your file storage, you can't pick folders. At all. On an operating system built by the company whose engineers are obsessed with bundles (directories that look and act like files instead of folders).
The driver is some kind of AV/signature-detection hook: e.g., check every open() against a list of checksums and refuse to open known viruses. The 'update' was a borked definition file which triggered a bug in that system.
It's not code execution without signing, and I think probably they do want these files to be updated hands free.
The real problem was the lack of testing, rather than the actual mechanism I think.
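A rough user-space sketch of the kind of hook described above (the real sensor does this in the kernel; every name here is invented for illustration):

```python
import hashlib
import os
import tempfile

# Hash a file before letting it be opened and refuse anything on a deny
# list -- a user-space stand-in for the kernel open() hook described above.

def guarded_open(path, deny_list):
    """Return the file's bytes, or raise if its hash is known-bad."""
    with open(path, "rb") as f:
        data = f.read()
    if hashlib.sha256(data).hexdigest() in deny_list:
        raise PermissionError(f"blocked: {path} matches a known-bad hash")
    return data

# Demo: a "malicious" file whose hash is on the (frequently updated) list.
with tempfile.TemporaryDirectory() as d:
    bad_path = os.path.join(d, "evil.bin")
    with open(bad_path, "wb") as f:
        f.write(b"pretend-malware")
    deny = {hashlib.sha256(b"pretend-malware").hexdigest()}
    try:
        guarded_open(bad_path, deny)
    except PermissionError as e:
        print(e)  # the open is refused
```

The deny list here plays the role of the definition files: it has to be pushed out fast and often, which is exactly the part that bypassed the slow certification path.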
This is the nugget of the issue. The code-signing process, in this case, was abused to verify something that fundamentally cannot give the guarantee "doesn't crash your OS", because it is allowed to run arbitrary code in the form of novel commands in what is essentially a DSL. So if code-signing is supposed to be a guarantee from MS that "this code can't crash your system," it should never have been signed... but then MS would have been on the hook for blocking a competitor.
To get a driver signed by Microsoft, the developer of the driver is required to provide a full cert pass log from the Windows Hardware Lab Kit to dev center [0]. Do you have any article that says the CrowdStrike driver has been tested by Microsoft?
To avoid going through the full cert process, the sensor itself was certified but it loaded code from an uncertified module, so that it could be quickly updated to catch new threats. It's a tough corner to be in: to function properly it needs to update very quickly, but the cert process takes a while to complete, so they went with this workaround of a signed module loading uncertified code.
...you want Microsoft to forbid you from running certain kinds of programs on your own machine, even if you really, really insist on it, do I understand you correctly?
More like: "...you want Microsoft to forbid you from running certain kinds of programs (with gaping security holes / processes) on your own machine" YES
You're moving the goal post waaaay far down. How about just following best practices? How about not allowing runtime code injection? Turns out security holes often have much in common, and with ways to mitigate them. Stop 100% of security holes? nah. Stop 99.9% of security holes? Yes and what an improvement.
This is a valid opinion and I don't know why you were downvoted (well, other than the Hacker News bubble mindset, or mindless-set).
How is Microsoft not to blame, it's their product? We wouldn't blame a Toyota supplier for a failure in a car, but we somehow segment that in the software world?
Toyota chose the supplier, worked with them on the specs and designs, and put it in their OE car delivered to the customer. It has Toyota's name on it, it was bought at a Toyota dealership, is a part of Toyota's warranty.
Crowdstrike is entirely optional software that doesn't come from Microsoft. Microsoft doesn't market it. Microsoft had no hand in making it. Microsoft doesn't sell it. Microsoft had no hand in a user installing Crowdstrike.
No. My point is that Microsoft allows the damn thing to be run in kernel space. Mac and Linux don't have this problem due to how THEY architected the system. Yes, I think that puts Microsoft at blame.
Microsoft should have no say to decide what software I am allowed to run on my computer.
> Mac, linux don't have this problem due to how THEY architected the system.
You're joking right? You're arguing kernel panics can't happen on Linux? FFS, the CrowdStrike sensor caused kernel panics on multiple Linux distros in the last few months! Linux is not immune to kernel panics for buggy kernel modules.
The first point is pretty philosophical so I'm not gonna go far into that. At the end of the day you bought a product from a company, some of those products have a way to load programs on and some are locked down (a microwave). "should" is pretty biased whether I agree with that conclusion or not.
Two: Here I'm not arguing about what's possible but rather what happened in the real world. 8.5 M machines down, my org runs Macs, we knew about it from the news...
No smarty pants, I'm arguing that you can't load a program on a microwave's microprocessor. Should I be able to do that?
"And yes, in the real-world, third-party software can and does cause Macs to crash." Thanks for adding so much to the conversation (eyes rolled).
In the absolute sense 8.5M machines is a lot. Airlines down is a lot. Hospitals down is a lot. Hey we guarantee we won't wreck 99.4% of our machines out there! is not a good guarantee.
Yes, you are arguing for that microwave when you argue Microsoft should approve the software you're allowed to run on a Windows box and be liable for its performance. Should Microsoft also have to approve what browsers you're allowed to run, should they approve what chat applications you're allowed to use?
And sure, why shouldn't you be able to modify the software on hardware you own? It's your microwave. If you modify the software on it and that causes it to burn up don't go to the manufacturer when it burns your house down. But that's true if you open it up and rewire it as well. Which, sure, feel free to open it up. It is your microwave.
Are you arguing you shouldn't be able to modify the things you own?
> Thanks for adding so much to the conversation
I mean it seriously seems like you're arguing MacOS and Linux are immune to third party software crashing the system. Do you agree or disagree that third party software can cause MacOS and Linux instability, especially when the user chooses to run it at root level permissions?
> we guarantee we won't wreck
Microsoft didn't wreck these machines. CrowdStrike wrecked these machines. Every Windows machine that did not have CrowdStrike installed was unaffected by this, which is 99.4% of Windows machines.
> what happened in the real world
And yes, look at those bug reports, those are crashes happening in the real world not something theoretical. Kernel panics happen!
I'm not making an analogy with the microwave (you're saying food is software and the microwave is hardware); I'm literally talking about the software that runs on a microwave.
I'm aware of the point you're trying to make with the microwave. I'm making another analogy; one you're not getting. And either way, yes, I think you should be able to change the software on the microwave. It is your microwave. Do whatever you want with it. Why should Samsung or GE have the right to say what you can or cannot do with the things you own?
If we want to talk microwaves, Microsoft is the microwave manufacturer. Users installing CrowdStrike are people sticking a giant ball of foil and paper towels in the microwave and turning it on for an hour. You're arguing Microsoft is liable for the things people stick in their microwaves, and that Microsoft should put in place guards to prevent people from putting whatever they want in their own microwaves. That Microsoft should control the things people put in their microwaves. Only Microsoft tested and Microsoft approved foods in Microsoft microwaves. And the microwave needs to ensure only the proper cook time applies to the properly signed food products to make sure it doesn't get burnt. Sorry, Microsoft hasn't fully validated Red Gold potatoes, it can only cook Russet potatoes.
That is the same logic as Microsoft is liable for the third-party software people install on Windows machines and that Microsoft shouldn't have allowed the third-party software to run.
Why should Microsoft be able to say what antivirus software I choose to install or not? Why should Microsoft be able to say what browser I install? If I install some software that breaks my Windows machine, is that the fault of Microsoft or the fault of the software maker? If I stick foil in the microwave, is the ensuing fire GE's fault?
> Another way is if it has recently joined a botnet orchestrated from a geopolitical adversary of the United States after one of your junior programmers decided to install warez because the six figure annual salary was too little to fund their video game habit.
Fictional statements like this make me reluctant to read further, and inclined to ignore the source of such "news" in the future.
It's obviously fictional, but let's call it contemporary drama based on a true story. I thought the point was well made. The author already noted this was a handwaving segment.
> money is core societal infrastructure, like the power grid and transportation systems are. It would be really bad if hackers working for a foreign government could just turn off money.
Sure, it would be inconvenient in the short term. But I think the current design is holding us back.
I suspect that most of us would have more to gain than to lose if we managed to shut off money-as-we-know-it and keep it off for long enough to iterate on alternatives. Any design that even tried to step beyond "well that's how we've always done it" would likely land somewhere better than what we're doing. Much has changed since Alexander Hamilton.
In the early '90s, Russia essentially voided almost all of the Soviet money that remained in the monetary system (most of which was bank deposits; they simply vanished with zero compensation), allowing only a rather small upper limit on the amount of old Soviet roubles one person was allowed to exchange for new Russian roubles.
Believe it or not, that really did not help the lower and lower-middle classes with their growing financial problems; and the upper-middle and top classes mostly operated in dollars (or, less often, deutschmarks) by that time anyhow, so it didn't inconvenience them much at all.
Losing access to one currency but not others is quite a different thing; I don't think that would help anybody.
What I think would help is something that evolved in a less stable computing environment, something that had to be partition-tolerant. Such a thing would have to remain more closely coupled with the consent and merits of its participants, because it would lack a reliable connection to a far-away authority (currently used to uphold the wishes of parties extraneous to the transaction). Something like local-first software, but for money.
Probably not. A competent government could institute temporary rationing for the most essential goods, such as food. It happened throughout the whole of the 1917–1920 Russian revolution, with four or five kinds of paper money circulating, and the urban population managed through it, if only barely. That government was much less competent than the US government is today.
In the rural areas, mind you. That's one of the most appalling things about the famines of the 19th and 20th centuries: they hit the countryside harder than they hit the cities.
I agree there needs to be more competition, but that doesn't mean you need to get rid of the old way. It is better when two approaches run in parallel, each compensating for the other's shortcomings.
That would indeed be ideal: one as a backup for the other, and when both are functioning, choose the one that suits you best. I just think that it's outages that will convince us we need this... stakeholders in the status quo certainly aren't going to do it.
The uber-wealthy don't have most of their assets in currency. It's in stocks, houses, cars, boats, etc. Delete the dollars and it'll hurt them a bit, but in the end they still have a house (or several).
But all those people who were using currency to trade for housing now suddenly need to find a new way to trade for shelter.
I'm not going to try to lay down the exact parameters of what we'd come up with in money's place, but if it's going to be resilient in the face of far-away servers behaving badly, it would have to derive its legitimacy not from some shiny golden ledger of who owns which dollar, house, or car, but instead from who is behaving in a way that benefits the people around them.
So yeah, it could go as you say, but only if the wealthy are behaving in a way that justifies their outsized share while the renters are just spending from a pile of money they got through less honorable means.
I don't think that's the most likely scenario though.