You are completely right. BTW, it wasn't a software update, it was a content update, a 'channel file'.
Someone didn't do enough testing. edit: or any testing at all?
It's an automatic update of the product. The semantic distinction of "channel vs. binary" doesn't indicate anything. If your software's definition files can cause a kernel-mode driver to crash in a boot loop you have bigger problems, but the outcome is the same as if the driver itself was updated.
Indeed. It's worse, really: it means there was a bug lurking in their product that was waiting for a badly formatted file to surface it.
Given how widespread the problem is it also means they are pushing these files out without basic testing.
edit: It will be very interesting to see how CrowdStrike wriggle out of the obvious conclusion that their company no longer deserves to exist after a f*k up like this.
That's funny, because IIRC McAfee did this exact same thing back in the Windows XP days! They shipped a signature (DAT) update that flagged a core system file as malware and caused Windows machines to BSOD on boot.
That's even worse: they should be fuzz testing with bad definition files to make sure this is safe. Inevitably, definition updates will be rushed out to address zero-days, and the work to make them safe should be done ahead of time.
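A minimal mutation-fuzzing sketch of the idea, in Python; `parse_channel_file` and the `CHNL` header are hypothetical stand-ins for whatever actually consumes these updates:

    import random

    def parse_channel_file(data: bytes) -> None:
        # Hypothetical parser under test; a clean ValueError on bad input
        # is the acceptable failure mode.
        if len(data) < 8 or data[:4] != b"CHNL":
            raise ValueError("bad header")

    def mutate(data: bytes) -> bytes:
        buf = bytearray(data)
        for _ in range(random.randint(1, 8)):
            buf[random.randrange(len(buf))] = random.randrange(256)
        return bytes(buf)

    seed = b"CHNL" + bytes(64)  # a known-good file as the seed corpus
    for _ in range(100_000):
        try:
            parse_channel_file(mutate(seed))
        except ValueError:
            pass  # rejecting bad input cleanly is the expected outcome
        # Anything else -- crash, hang, memory corruption -- is a finding.

In kernel code the harness would obviously be heavier, but the invariant is the same: no mutated input may escalate past a clean parse error.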
Having spent time reverse-engineering Crowdstrike Falcon, a lot of funny things can happen if you feed it bad input.
But I suspect they don't have much motivation to make the sensor resilient to fuzzing, since the thing's a remote shell anyway, so they must assume all inputs are absolutely trusted (i.e. if a malicious packet can reach the sensor, your attackers can just politely ask it to run arbitrary commands, so you might as well assume the sensor will never see bad data).
That's a funny assumption to make when the inputs contain malware signatures, which are essentially determined by the malware itself.
I mean, how hard would it be to craft a malware that has the same signature as an important system file? Preferably one that doesn't cause immediate havoc when quarantined, just a BSOD after reboot, so it slips through QA.
Even if the signature is not completely predictable, the bad guys can try as often as they want, and there would not even be a way to detect these attempts.
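A toy model of the attack being described, assuming the signature is a plain byte pattern (real engines are more sophisticated, but plenty of signatures have historically worked this way): the attacker builds the malware out of bytes lifted from a system file, hoping the vendor's "characteristic" slice lands in the copied region.

    system_file = bytes(range(256)) * 16           # stand-in for a real system binary
    malware = system_file[512:640] + b"\xCC" * 64  # payload wrapped in copied bytes

    signature = malware[16:48]  # vendor picks a "characteristic" slice

    print(signature in malware)      # True: detects the malware
    print(signature in system_file)  # True: also flags the system file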
> malware signatures, which are essentially determined by the malware itself.
No they're not. The tool vendor decides the signature, they pick something characteristic that the malware has and other things don't, that's the whole point.
> how hard would it be to craft a malware that has the same signature as an important system file?
Completely impossible, unless you mean, like, bribe one of the employees to put the signature of a system file instead of your malware or something.
Sure, but they follow a certain process when they do it. CrowdStrike employees don't get paid to be extra creative in their job, so you could likely predict what they'll choose to include in the signature.
On top of that, you're under no pressure to get it right the first time. You can try as often as you want, and by analyzing the updated signatures you even get feedback on your attempts.
Like, "We require that your employees open only whitelisted links, and social networks cannot be put on that list, and we require a managed antivirus/firewall solution, but we're OK with that solution having a backdoor straight to a third-party organization"?
It is crazy. All these PCI DSS and SOC 2 certifications look like a comedy if they allow such things.
At a former employer of about 15K employees, two tools come to mind that allowed us to do this on every Windows host on our network[0].
It's an absolute necessity: you can manage Windows updates and a limited set of other updates via things like WSUS. Back when I was at this employer, Adobe Flash and Java plug-in attacks were our largest source of infection. The only way to reliably get those updates installed was to configure everything to run the installer if an old version was detected, and then find some other ways to get it to run.
To do this, we'd often resort to scripts/custom apps just to detect the installation correctly. Too often a machine would be vulnerable, but something would keep it from showing up on the various tools that limit checks to "Add/Remove Programs" entries or other mechanisms that might let a browser plug-in slip through, so we'd resort to various methods, all the way down to inspecting the drive directory-by-directory to find offending libraries.
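A hypothetical version of that last-resort sweep (the file names are illustrative, not the exact ones we matched on):

    import os

    SUSPECT_NAMES = {"npswf32.dll", "flashplayer.ocx", "jvm.dll"}

    def find_suspects(root):
        for dirpath, _dirnames, filenames in os.walk(root):
            for name in filenames:
                if name.lower() in SUSPECT_NAMES:
                    yield os.path.join(dirpath, name)

    for path in find_suspects("C:\\"):
        print("inspect:", path)  # feed into a version check / forced reinstall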
We used a similar capability all the way back in the NIMDA days to deploy an in-house removal tool.[1]
[0] Symantec Endpoint Protection and System Center Configuration Manager
[1] I worked at a large telecom at that time -- our IPS devices crashed our monitoring tool when the malware that immediately followed NIMDA landed. The result was a coworker and I dissecting/containing it and providing the findings to Trend Micro (our A/V vendor at the time) maybe 30 minutes before the news started breaking and several hours before they had anything that could detect it on their end.
Hilariously, my last employer was switching to CrowdStrike a few months ago when my contract ended. We previously used Trellix, which didn't have any remote-control features beyond network isolation and pulling filesystem images. During the CrowdStrike onboarding, they definitely showed us a demo of basically a virtual terminal that you could access from the Falcon portal, kind of like the GCP or AWS web-console terminals you can use if SSH isn't working.
As I understand it, this only manifests after a reboot, and if the 'content update' is tested at all, it's probably in a VM that just gets thrown away after the test and is never rebooted.
Also, this makes me think:
How hard would it be to craft a malware that has the same signature as an important system file?
Preferably one that doesn't cause immediate havoc when quarantined, just a BSOD after reboot, so it slips through QA.
I don't believe this is what's happened, but I think it is an interesting threat.
Nope, not after a reboot. Once the "channel update" is loaded into Falcon, the machine will crash with a BSOD and then it will not boot properly until you remove the defective file.
> How hard would it be to craft a malware that has the same signature as an important system file?
Very, otherwise digital signatures wouldn't be much use. There are no publicly known ways to make an input that hashes to the same value as another known input under SHA-256 any quicker than brute-force trial and error of every possibility.
This is the difficulty that Bitcoin mining is based on: the work all those GPUs were doing, the reason for the massive global energy use people complain about, is basically a global brute-force through the SHA-256 input space.
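To put a rough number on that, here's a toy partial-preimage search in Python: matching just the first 20 bits of a target SHA-256 already takes about a million attempts, and every extra bit doubles the cost, out to 2^256 for a full match.

    import hashlib

    target = hashlib.sha256(b"important system file").digest()
    N = 20  # bits to match; raise this and watch the runtime explode

    attempts = 0
    while True:
        attempts += 1
        h = hashlib.sha256(attempts.to_bytes(8, "little")).digest()
        if int.from_bytes(h, "big") >> (256 - N) == int.from_bytes(target, "big") >> (256 - N):
            break
    print(f"{N}-bit partial match after {attempts:,} attempts")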
I was talking about malware signatures, which do not necessarily use cryptographic hashes. They are probably optimized more for speed, because the engine needs to check a huge number of files as fast as possible.
Cryptographic hashes are not the fastest possible hash, but they are not slow; CPUs have hardware SHA acceleration: https://www.intel.com/content/www/us/en/developer/articles/t... - compared to the likes of a password hash, where you want to do a lot of rounds to make checking slow as a defense against brute-forcing.
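A quick, unscientific way to check the "not slow" claim (absolute numbers vary by CPU and build, but SHA-256 over 100 MB should land within the same order of magnitude as a plain CRC32):

    import hashlib, time, zlib

    data = b"\x00" * (100 * 2**20)  # 100 MB

    t = time.perf_counter()
    hashlib.sha256(data).hexdigest()
    print("sha256:", time.perf_counter() - t, "s")

    t = time.perf_counter()
    zlib.crc32(data)
    print("crc32: ", time.perf_counter() - t, "s")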
That sounds even harder; Windows Authenticode uses SHA1 or SHA256 on partial file bytes, the AV will use its own hash likely on the full file bytes, and you need a malware which matches both - so the AV will think it's legit and Windows will think it's legit.
AFAIK important system files on Windows are (or should be) cryptographically signed by Microsoft. And the presence of such signature is one of the parameters fed to the heuristics engine of the AV software.
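To make the "matches both hashes" point above concrete: Authenticode hashes the PE image with the checksum field and certificate table excluded, so the same file already has at least two relevant hashes. A sketch, with placeholder byte ranges rather than the real PE offsets:

    import hashlib

    def full_file_hash(data: bytes) -> str:
        return hashlib.sha256(data).hexdigest()

    def authenticode_style_hash(data: bytes, skip_ranges) -> str:
        # Hash everything except the given (start, end) byte ranges.
        h, pos = hashlib.sha256(), 0
        for start, end in sorted(skip_ranges):
            h.update(data[pos:start])
            pos = end
        h.update(data[pos:])
        return h.hexdigest()

    pe = bytes(1024)  # stand-in for a real PE file
    print(full_file_hash(pe))
    print(authenticode_style_hash(pe, [(64, 68), (512, 640)]))  # fake offsets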
> How hard would it be to craft a malware that has the same signature as an important system file?
If you can craft malware that is digitally signed with the same keys as Microsoft's system files, we got way bigger problems.
> How hard would it be to craft a malware that has the same signature as an important system file?
Extremely. If it were easy, that would mean basically all cryptography in common use today is broken, the entire public key infrastructure is borderline useless, and there's no point in code signing anymore.
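For intuition on "extremely": a digital signature verifies the exact signed bytes against the publisher's public key, so it simply doesn't transfer to different content. A minimal sketch with the `cryptography` package (a self-generated key standing in for Microsoft's):

    from cryptography.exceptions import InvalidSignature
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.asymmetric import padding, rsa

    key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
    sig = key.sign(b"contents of a system file", padding.PKCS1v15(), hashes.SHA256())

    try:
        key.public_key().verify(sig, b"malware pretending to be that file",
                                padding.PKCS1v15(), hashes.SHA256())
    except InvalidSignature:
        print("signature does not carry over to different bytes")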
Admittedly, I don't know exactly what's in these files. When I hear 'content' I think 'config'. This is going to be very hypothetical, so I ask for some patience, not arguments.
The 'config file' parser is so unsafe that... not only will the thing consuming it break, but it'll take down the environment around it.
Sure, this isn't completely fair. It's working in kernel space so one misstep can be dire. Again, testing.
I think it's a reasonable assumption/request that something degrade itself, not the systems around it (rough sketch below).
edit: When a distinction between 'config' and 'agent' releases is made, it's typically with the understanding that content releases move much faster/flow freely. The releases around the software itself tend to be more controlled, being what is actually executed.
In short, the risk modeling doesn't line up. The content updates get certain privileges under certain (apparently mistaken) robustness assumptions. Too much credit, or attention, is given to the agent!
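A sketch of the fail-safe behavior I mean, with hypothetical names (`load_channel_file`, the `CHNL` magic) and assuming the agent keeps a last-known-good copy of the previous content file:

    import logging, shutil

    def load_channel_file(path: str) -> bytes:
        with open(path, "rb") as f:
            data = f.read()
        if len(data) < 8 or data[:4] != b"CHNL":  # made-up validity check
            raise ValueError("malformed channel file")
        return data

    def apply_update(new_path: str, last_good_path: str) -> bytes:
        try:
            data = load_channel_file(new_path)
            shutil.copy(new_path, last_good_path)  # promote only after validation
            return data
        except ValueError:
            logging.error("bad channel file %s, reverting", new_path)
            return load_channel_file(last_good_path)  # degrade, don't panic the box

Stale protections for one update cycle beat a fleet of machines stuck in a boot loop.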
https://x.com/George_Kurtz/status/1814235001745027317
https://x.com/brody_n77/status/1814185935476863321