
We are a major CS client, with 50k windows-based endpoints or so. All down.

There exists a workaround, but CS does not make it clear whether it means running without protection or not. (The workaround does get the Windows boxes unstuck from the boot loop, but they do appear offline in the CS host management console - which of course may have many causes.)



Does CS actually offer any real protection? I always thought it was just feel-good software, and that Windows had caught up on separating permissions after XP or so. Either one is lying/scamming, but which one?


> Does CS actually offer any real protection? I always thought it was just feel-good software, and that Windows had caught up on separating permissions after XP or so. Either one is lying/scamming, but which one?

Our ZScaler rep (basically, they technically work for us) comes out with massive, impressive-looking numbers on the thousands of threats they detect and eliminate every month.

Oddly, before we had ZScaler we didn't seem to have any actual problems. Now we have it and, while we have lots of ZScaler-caused problems around performance and location, we still don't have any actual problems.

Feels very much like a tiger-repelling rock. But I'm sure the corporate hospitality is fun.


AFAIK, most of the people I know that deploy CrowdStrike (including us) just do it to check a box for audits and certifications. They don't care much about protection and will happily add exceptions in places where it causes problems (and that's a lot of places).


What a dream business.


It's not about checking the boxes themselves, but the shifting of liability that enables. Those security companies are paid well not for actually providing security, but for providing a way to say, "we're not at fault, we adhered to the best security practices, there's nothing we could've done to prevent the problem".


So in essence just a flavor of insurance?

Shouldn't that hit Crowdstrike's stock price much more than it has then? (so far I see ~11% down which is definitely a lot but it looks like they will survive).


Not quite. Insurance is a product that provides compensation in the event of loss. Deploying CrowdStrike with an eye toward enterprise risk management falls under one of either changing behaviors or modifying outcomes (or perhaps both).


They are paying with reputation and liability, no?

If the idea is "we'll hire Crowdstrike for CYA" when things like this happen, the blame is on CS and they pay with their reputation.


Pay for what exactly though? Cybersecurity incidents result in material loss, and someone somewhere needs to provide dollars for the accrued costs. Reputation can't do that, particularly when legal liability (or, hell, culpability) is involved.

EDR deployment is an outcome-modifying measure, usually required as underwritten in a cybersecurity insurance policy for it to be in force. It isn't itself insurance.


Not at all like insurance, because they don’t have to pay out at all when things go wrong.


In a way it's the new "nobody ever got fired for buying IBM".


You're so right. Pay us ludicrous sums to make your auditors feel good. Crazy.


So much of regulation is just a roundabout way of creating business for regulatory compliance.


Good for profits, but I bet there are some employees who feel a distinct lack of joy about their work.


Just adding my two cents: I work as a pentester, and practically all of my colleagues agree that engagements where Crowdstrike is deployed are the worst because it's impossible to bypass.


It definitely isn't impossible to bypass. It gets bypassed all the time, even publicly. There's like 80 different CrowdStrike bypass tricks that have been published at some point. It's hard to bypass and it takes skill, and yes it's the best EDR, but it's not the best solution - the best solution is an architecture where bypassing the EDR doesn't mean you get to own the network.

An attacker that's using a 0 day to get into a privileged section in a properly set up network is not going to be stopped by CrowdStrike.


Or you're a pentester playing 4D chess with your comment.

Or a CS salesperson playing 3D chess with your comment.

If so, well played!


By “impossible to bypass” are you meaning that it provides good security? Or that it makes pen testing harder because you need to be able to temporarily bypass it in order to do your test?


The first. AV evasion is a whole discipline in itself and it can be anything from trivial to borderline impossible. Crowdstrike definitely plays in the champions league.


[flagged]


I don't appreciate your aggressive tone. Which AV is better in your opinion? Are there a lot?


You should be asking how to get through, not what competitor is better. You totally sound like a marketing rep now.


That's not what the discussion was about. If you don't think crowdstrike qualifies as one of the best, justify your opinion.


Best not to respond to trolls.


How can one get through? I'm sure the knowledge costs gold, though.


valuable 2 cents

Are there any writeups from the pentesting side of things that we can read to learn more?


I’ll say this: I did a small lab in college for a hardware security class and I got a scary email from IT because CrowdStrike noticed there was some program using speculative execution/cache invalidation to leak data on my account - they recognized my small scale example leaking a couple of bytes. Pretty impressive to be honest.


Did you have CrowdStrike installed on your personal machine, or did they detect it over the network somehow?


We ran our code on our own accounts on the school’s system.


Those able to write and use FUD malware do not create public documentation. Crowdstrike is not impossible to bypass, but a junior security journeyman known as a pentester, working for corporate interests with no budget and absurdly limited scopes, under contract for n hours a week for 3 weeks, will never be able to pull off something like EDR evasion. However, if you wish to actually learn the basics of the common practitioners of this art, go study the OffSec evasion class. Then go read a lot of code and syscall documentation, and learn assembly.


I don't understand why you were downvoted. I'm interested in what you said. When you mentioned offsec evasion class, is this what you mean? It seems pretty advanced.

https://www.offsec.com/courses/pen-300/

What kind of code should I read? Actually, let me ask this, what kind of code should I write first before diving into this kind of evasion technique? I feel I need to write some small Windows system software like duplicating Process Explorer, to get familiar with Win32 programming and Windows system programming, but I could be wrong?

I think I do have a study path, but it's full of gaps. I work as a data engineer -- the kind I wouldn't even bother to call an engineer /s


I know quite a few offensive security pros who are way better than I will ever be at breaking into systems and evading detection, yet who can only barely program anything beyond simple Python scripts.

It’s a great goal to eventually learn everything, but knowing the correct tools and techniques and how and when to use them most effectively is a very different skillset from discovering new vulnerabilities or writing new exploit code, and you can start with any of them.

Compare for instance a physiologist, a gymnastics coach, and an Olympic gymnast. They all “know how the human body works” but in very different ways and who you’d go to for expertise depends on the context.

Similarly just start with whatever part you are most interested in. If you want to know the techniques and tools you can web search and find lots of details.

If you want to know how best to use them you should set up vulnerable machines (or find a relevant CTF) and practice. If you want to understand how they were discovered and how people find new ones you should read writeups from places like Project Zero that do that kind of research. If you’re interested in writing your own then yes you probably need to learn some system programming. If you enjoy the field you can expand your knowledge base.


EDR vendors are generally lying whenever they tell you anything other than "install us if you want to pass certification".


My contacts abroad are saying "that software US government mandated us to install on our clients and servers to do business with US companies is crashing our machines".

When did Crowdstrike get this gold standard super seal of approval? What could they be referring to?


[flagged]


> if it's not 100% secure

It's 100% broken though...

I guarantee you that the damage caused by Crowdstrike today will significantly outweigh any security benefits/savings that using their software might have had over the years.


The benefits include

1) Nice trips to golf courses before contract renewal

2) Nice meals at fancy restaurants before contract renewal

3) Someone for the CTO to blame when something goes wrong


Nah, more like your security/usability/reliability tradeoff needs to be better.


As a redteamer I guarantee you that a Windows endpoint without EDR is caviar for us...


Are there publicly known exploits which allow RCE or data extraction on a default windows installation?


* SMB encryption or signing not enforced

* NTLM/NTLMv1 enabled

* mDNS/llmnr/nbt-ns enabled

* dhcpv6 not controlled

* Privileged account doing plain LDAP (not LDAPS) binds or unencrypted FTP connections

* WPAD not controlled

* lights out interfaces not segregated from business network. Bonus points if it's a Supermicro, which discloses the password hash to unauthenticated users as a design feature.

* operational technology not segregated from information technology

* Not a Windows bug, but popular on Windows: 3rd-party services with unquoted exe and uninstall strings, or a service executable in a user-writable directory.

I remediate pentests as well as real-world intrusion events, and we ALWAYS find one of these as the culprit. An oopsie on the public website leading to an intrusion is actually an extreme rarity. It's pretty much always email > standard user > administrator.

I understand not liking EDR or AV but the alternative seems to be just not detecting when this happens. The difference between EDR clients and non-EDR clients is that the non-EDR clients got compromised 2 years ago and only found it today.


Thanks for the list. I got this job as the network administrator at a community bank 2 years ago and 9/9 of these were on/enabled/not secured. I've got it down to only 3/9 (dhcpv6, unquoted exe, operational tech not segregated from info tech). I'm asking for free advice, so feel free to ignore me, but of these three unremediated vectors, which do you see as the culprit most often?


dhcpv6 poisoning is really easy to do with metasploit and creates a MITM scenario. It's also easy to fix (dhcpv6guard at the switch, a domain firewall rule, or a 'prefer ipv4' reg key).
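For reference, the registry route in a minimal sketch (it sets the Microsoft-documented DisabledComponents value to 0x20, i.e. prefer IPv4 over IPv6; it needs a reboot, and dhcpv6guard or a firewall rule may still be the cleaner fix):

    # Sketch only: prefer IPv4 over IPv6 via the documented DisabledComponents value. Reboot required.
    New-ItemProperty -Path 'HKLM:\SYSTEM\CurrentControlSet\Services\Tcpip6\Parameters' `
      -Name DisabledComponents -PropertyType DWord -Value 0x20 -Force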

unquoted paths are used to establish persistence and are just an indicator of some other compromise. There are some very low-impact scripts on GitHub that can take care of it.
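If you'd rather not pull a script off GitHub, a rough PowerShell check gets you most of the way (illustrative only; it just lists services whose path contains a space and isn't quoted, skipping the Windows directory):

    # Rough check for unquoted service paths with spaces; the GitHub scripts do this more carefully.
    Get-CimInstance Win32_Service |
      Where-Object { $_.PathName -and $_.PathName -notmatch '^"' -and $_.PathName -match ' ' -and $_.PathName -notmatch '^C:\\Windows\\' } |
      Select-Object Name, StartMode, PathName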

Network segregation: the big thing I see in financial institutions is the cameras. Each one has its own shitty webserver, chances are the vendor is accessing the NVR with TeamViewer and just leaving the computer logged in and unlocked, and none of the involved devices will see any kind of update unless they break. Although I've never had a pentester do anything with this, I consider the segment to be haunted.


None of those things require a kernel module with remote code execution to configure properly.


I believe the question was 'in which ways is windows vulnerable by default', and I answered that.

If customers wanted to configure them properly, they could, but they don't. EDR will let them keep all the garbage they seem to love so dearly. It doesn't just check a box, it takes care of many other boxes too.


At work we have two sets of computers. One gets beamed down by our multi-national overlords, loaded with all kinds of compliance software. The other is managed by local IT and only uses Windows Defender, has some strict group policies applied, BMCs on separate VLANs, etc. Both pass audits, for whatever that's worth.


This is the key question for me: is there a way to get [most of] the security benefits of EDR without giving away the keys to the kingdom?


No. If an EDR relies on userland mechanisms to monitor, these userland mechanisms can easily be removed by the malicious process too.


> It's pretty much always email > standard user > administrator

What does this mean?


Believe it or not, most users don't run around downloading random screensavers or whatever. Instead they are receiving phishing emails, often from trusted contacts who have recently been compromised, using the same style of message they are used to receiving, which give the attacker a foothold on the computer. From there, you can use a commonly available insecure legacy protocol or other privilege escalation technique to gain administrative rights on the device.


standard user: why can't I open this pdf? It says Permission Denied

dumb admin: let me try .... boom game over man


It's the attack path.


>> always email > standard user > administrator

maybe it's the boomers who can't give up Outlook? Otherwise they could've migrated everybody to Google Workspace or some other web alternative.


You don't need exploits to remotely access and run commands on other systems, steal admin passwords, and destroy data. All the tools to do that are built into Windows. A large part of why security teams like EDR is that it gives them the data to detect abuse of built-in tools and automatically intervene.
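To make that concrete, here is the kind of built-in tooling meant here - nothing exploit-like, which is exactly why the telemetry matters. The host name is made up, and this assumes WinRM is enabled (it typically is on servers):

    # Built-in remote execution, no third-party tooling required (SRV01 is a hypothetical host).
    Invoke-Command -ComputerName SRV01 -ScriptBlock { whoami }
    # Scheduled tasks are another classic living-off-the-land path an EDR would flag.
    schtasks /create /s SRV01 /tn "Updater" /tr "cmd /c whoami" /sc once /st 23:59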


Not on a fully patched system. 0-days are relatively rare and fixed pretty quickly by Microsoft.


Remember WannaCry? The vuln it used was patched by MS two months prior to the attack. Yet it took the world by storm.


Not sure what you want from me, I simply answered the question. Yes I remember WannaCry.


How is it caviar then?


Not the same poster, but one phase of a typical attack inside a corporate network is lateral movement. You find creds on one system and want to use them to log on to a second system. Often, these creds have administrative privileges on the second system. No vulnerabilities are necessary to perform lateral movement.

Just as an example: you use a mechanism similar to psexec to execute commands on the remote system using the SMB service. If the remote system has a capable EDR, it will shut that down and report the system the connection came from to the SOC, perhaps automatically isolating it. If it doesn't, an attacker moves laterally through your entire network with ease in no time, until they have domain admin privs.


A key part of breaching a network is having a beacon running on their network and communicating out, one way or another.

Running beacons with good EDRs is difficult, and has become the most challenging aspect of most red team engagements because of that.

No EDR, everything becomes suddenly super easy.


Anyone who claims CS is nothing but a compliance checkbox has never worked as an actual analyst. Of course it's effective... no, duh, it's worth $50bn for no reason... god, some people are stupid AND loud.


Every company I’ve ever worked at has wound up having to install antivirus software to pass audits. The software only ever caused problems and never caught anything. But hey, we passed the audit so we’re good right?


The real scam is the audit.

Many moons ago, I failed a "security audit" because `/sbin/iptables --append INPUT --in-interface lo --jump ACCEPT`

"This leaves the interface completely unfiltered"

Since then, I've not trusted any security expert until I've personally witnessed their competence.


Long time ago I was working for a web hoster, and had to help customers operating web shops to pass audits required for credit card processing.

Doing so regularly involved allowing additional SSL ciphers we deemed insecure, and undoing other configurations for hardening the system. Arguing about it was pointless - either you make your system more insecure, or you don't pass the audit. Typically we ended up configuring it in a way that let us easily toggle between those two states: we reverted to the secure configuration once the customer got their certificate, and flipped it back to insecure when it was time to reapply for certification.


This tracks for me. PA-DSS was a pain with ssl and early tls... our auditor was telling us to disable just about everything (and he was right) and the gateways took forever to move to anything that wasn't outdated.

Then our dealerships would just disable the configuration anyway.

It's been better in recent years.


The dreaded exposed loopback interface... I'm an (internal) auditor, and I see huge variations in competence. Not sure what to do about it, since most technical people don't want to be in an auditor role.


The companies I had the displeasure of dealing with were basically run by mindless people with a shell script.


I agree completely. It makes me wonder if other engineering disciplines have this same competency issue.


We did this at one place I used to work. We had lots of Linux systems. We installed ClamAV but kept the service disabled. The audit checkbox said “installed”, and that fulfilled the requirement…


Yes, it offers very real protection. Crowdstrike in particular is the best in the market, speaking from experience, having worked with their competitors' products as well and responded to real-world compromises.


How did they fail to catch such a critical bug, then?

Clearly shows a lack of testing.

If initially good, the culture and products have probably rotted.

Not fit to be in the security domain, if so.


I think this is more of a failure on the software development side than the domain specific functionality side.


Hubris. Clearly they have no form of internal testing for updates because this should have been caught immediately.


"best in the market"

I think the evidence shows that no, they aren't.


Go buy the second-best in the market then. Red Team would love you to do that.


Yes, from experience, I can say that CS does offer real protection.


> "50k windows-based endpoints or so. All down."

I'm a dev rather than an infra guy, but I'm pretty sure everywhere I've worked that has a large server estate has always done rolling patch updates, i.e. over multiple days (if critical) or multiple weekends (if routine), not blasting every single machine everywhere all at once.


If this comment tree: https://news.ycombinator.com/item?id=41003390 is correct, someone at Crowdstrike looked at their documented update staging process, slammed their beer down, and said: "Fuck it, let's test it in production", and just pushed it to everyone.


Which of course begs the question: How were they able to do that? Was there no internal review? What about automated processes?

For an organization it's always the easiest, most convenient answer to blame a single scapegoat, maybe fire them... but if a single bad decision or error from an employee has this kind of impact, there's always a lack of safety nets.


Even if true, the orgs whose machines they are have the responsibility to validate patches.


This is not a patch per se; it was Crowdstrike updating their virus definitions (or whatever their internal database is called).

Such things are usually enabled to auto-update by default, because otherwise you lose a big part of the benefit (if there is any) of running an antivirus.


Surely there should be at least some staging on update files as well, to avoid the "oops, we accidentally blacklisted explorer.exe" type of thing (or, indeed, this)?


Companies have staging and test processes, but CS bypassed them and deployed to prod.


If I understand the thread correctly, CS bypassed the organization's staging system


I'm guessing there's a lesson to be learned here.


This feels like auto-update functionality for something that's running in kernel space (presumably, if it can BSOD you?), which is fucking terrifying.


Crowdstrike auto-updates. Please do not spread misinfo.


I think my company has 300k+ machines down right now :)

SLAs will be breached anyway


Windows IT admins of the world, now is your time. This is what you've trained for. Everything else has led to this moment. Now, go and save the world!!


"The world will look up and shout, "Save us!" and I'll whisper "No..." -- Rorschach


Or "log a ticket!"


Type in those Bitlocker recovery keys for as long as you can stay awake!


Or rather, go limp and demand to unionize!


Yeah, manual GUI work. Like any good MS product.


Or don't \o/


Probably go buy a mocha and cry in the corner :(


Does it require physically going to each machine to fix it? Given the huge number of machines affected, it seems to me that if that's the case, this outage could last for days.


The workaround involves booting into Safe mode or Recovery environment, so I'd guess that's a personal visit to most machines unless you've got remote access to the console (e.g. KVM)

The info is apparently behind a login here: https://supportportal.crowdstrike.com/s/article/Tech-Alert-W...
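For anyone who can't get past the login wall, the workaround circulating publicly at the time was roughly this (paraphrased, not the official wording):

    1. Boot into Safe Mode or the Windows Recovery Environment.
    2. Go to C:\Windows\System32\drivers\CrowdStrike.
    3. Delete the file(s) matching C-00000291*.sys, e.g. from an elevated PowerShell prompt:
       Remove-Item 'C:\Windows\System32\drivers\CrowdStrike\C-00000291*.sys'
    4. Reboot normally.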


That's crazy, imagine you have thousands of office PCs that all have to be fixed by hand.


It gets worse if your machines have BitLocker active: lots of typing required. And it gets even worse if the servers that store the BitLocker keys also have BitLocker active and are also held captive by Crowdstrike, lol.
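If the recovery keys are escrowed to Active Directory and the domain controllers are still reachable, they can at least be pulled in bulk rather than read off a portal one machine at a time. A sketch, assuming the RSAT ActiveDirectory module and a made-up computer name:

    # Recovery passwords live in msFVE-RecoveryInformation child objects under the computer account.
    $computer = Get-ADComputer 'PC-0421'   # hypothetical name
    Get-ADObject -Filter "objectClass -eq 'msFVE-RecoveryInformation'" `
      -SearchBase $computer.DistinguishedName -Properties 'msFVE-RecoveryPassword' |
      Select-Object Name, 'msFVE-RecoveryPassword'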


I've already seen a few posts mentioning people running into worst-case issues like that. I wonder how many organizations are going to not be able to recover some or all of their existing systems.


Presumably at some point they'll be back to a state where they can boot to a network image, but that's going to be well down the pyramid of recovery. This is basically a "rebuild the world from scratch" exercise. I imagine even the out of band management services at e.g. Azure are running Windows and thus Crowdstrike.


Wow, why the fuck is that support article behind a login page.


Our experience so far has been:

• Servers: you have to apply the workaround by hand.

• Desktops: if you reboot and get online, CrowdStrike often picks up the fix before it crashes. You might need a few reboots, but that has worked for a substantial portion of systems. Otherwise, it’ll need the workaround applied by hand.


Even once in a boot loop, it can download the fix and recover?


Why the difference between servers and desktops ?


What happens if you've got remote staff?


The Dildo of Consequences rarely comes lubed, it seems.


This made me actually laugh out loud.


Surely it's not normal practice to allow patches to be rolled out without a staging/testing area on an estate of that size?


This is insane. The company I currently work for provides dinky forms for local cities and such, where the worst thing that could happen is that somebody will have to wait a day to get their license plates, and even we aren't this stupid.

I feel like people should have to go to jail for this level of negligence.


Which makes me think--are we sure this isn't malicious?


Unfortunately, any sufficiently advanced stupidity is indistinguishable from malice.


As strange as it sounds, this just seems way too sophisticated to be malicious.


Maybe someone tried to backdoor Crowdstrike and messed up some shell code? It would fit and at this point we can't rule it out, but there is also no good reason to believe it. I prefer to assume incompetence over maliciousness.


The AI said it was ok to deploy


I blame the Copilot


True for all systems, but AV updates are exempt from such policies. When there is a 0day you want those updates landing everywhere asap.

Things like zscaler, cs, s1 are updating all the time, nearly everywhere they run.


>True for all systems, but AV updates are exempt from such policies. When there is a 0day you want those updates landing everywhere asap.

This is irrational. The risk of waiting for a few hours to test in a small environment before deploying a 0-day fix is marginal. If we assume the AV companies already spent their sweet time testing, surely most of the world can wait a few more hours on top of that.

Given this incident, it should be clear that the downsides of deploying immediately at a global scale outweigh the benefits. The damage this incident caused might even be more than all the ransomware attacks combined. How long the extra testing takes will depend on the specific organization, but I hope nobody will let CrowdStrike unilaterally impose a standard like this again.


It's incredibly bad practice, but it seems to be industry normal as we learned today.


I wonder if the move to hybrid estates (virtual + on-prem + issued laptops, etc.) is the cause. Having worked only in on-prem, highly secure businesses, I never saw patches rolled out intra-week without a testing cycle on a variety of hardware.

I consider it genuinely insane to allow direct updates from vendors like this on large estates. If you are behind a corporate firewall there is also a limit to the impact of discovered security flaws, and thus reduced urgency in their dissemination anyway.


Most IT departments would not be patching all their servers or clients at the same time when Microsoft release updates. This is a pretty well followed standard practice.

For security software updates this is not a standard practice, I'm not even sure if you can configure a canary update group in these products? It is expected any updates are pushed ASAP.

For an issue like this though Crowdstrike should be catching it with their internal testing. It feels like a problem their customers should not have to worry about.


Their announcement (see Reddit for example) says it was a “content deployment” issue which could suggest it’s the AV definitions/whatever rather than the driver itself… so even if you had gradual rollout for drivers, it might not help!


It's definitely the driver itself if it blue-screens the kernel. Quite possibly data-sensitive, of course.


https://x.com/brody_n77/status/1814185935476863321 [0]

The driver can't gracefully handle invalid content - so you're kinda both right.

[0] brody_n77 is:

   Director of OverWatch,
   CrowdStrike Inc.


I came to HN hoping to find more technical info on the issue, and with hundreds of comments yours is the first I found with something of interest, so thanks! Too bad there's no way to upvote it to the top.


Looks like a great way to bypass CrowdStrike if I'm an adversarial nation state.


Did anyone copy the original text? I'm now getting: > Hmm...this page doesn’t exist. Try searching for something else


I don’t have the exact copy, but it said it was a ‘channel file’ which was broken.


It might have been a long-present bug in the driver, yes, but today's failure was apparently caused by content/data update.


In most appreciations of risk around upgrades in environments with which I am familiar, changing config/static data etc. counts as a systemic update and is controlled in the same way.


You would lose a lot of the benefits of a system like crowdstrike if you waited to slowly roll out malware definitions and rules.


Survived this long without such convenience. Anything worth protecting lives behind a firewall anyway.


A bunch of unprotected endpoints all at once on critical systems… what could possibly go wrong?

Hope they roll out a proper fix soon…


They did, around four hours ago.


A proper fix means that a failure like this causes you a headache; it doesn't close all your branches, ground your planes, stop operations in hospitals, or take your TV off air.

You do that by ensuring a single point of failure - like virus definition updates, an unexpected bug in software which hits on Jan 29th, or leap seconds going backwards - can't affect all your machines at the same time.

Yes, it will be a pain if half your check-in desks are offline, but not as much as when they are all offline.


Except this actually was an opportunity for malicious actors https://www.theregister.com/2024/07/19/cyber_criminals_quick...


Wow that's terrible. I'm curious as to whether your contract with them allows for meaningful compensation in an event like this or is it just limited to the price of the software?


Are you rolling out CS updates as-is everywhere? Are you not testing published updates first, at least with some N-1 staging involved?


Do you need to manually fix all your windows boxes? Or is there a way to update it remotely?


Yeah, simply renaming the .sys files in safe mode does seem like it would inhibit protection.


Renaming it .old or whatever would be what a sane person does.

They recommended deleting the thing: https://news.ycombinator.com/item?id=41002199


While you're at it probably delete the rest of the software. Then cancel your contract.


Then your hardware becomes unprotected. Congratulations, you've won the Cybersecurity Award of the Year.


There are more options than "you are using Crowdstrike" and "you have no protection".


You're absolutely right, there are more options.

Let's say you're a CISO and it's your task to evaluate Cybersecurity solutions to purchase and implement.

You go out there and find that there are multiple organizations that periodically test (simulate attacks against) the EDR capabilities of these vendors and publish grades for them.

You take the top 5 to narrow down your selection and pit them against each other in a PoC consisting of attack simulations and end-to-end response (that's the Response part of EDR).

The winner gets the contract.

Unless there are tie-breakers...

PS: I've heard (and read) others say that CS was best-in-class, which suggests that they probably won PoCs and received high grades from those independent organizations.


Then sue the shit out of them.


No, the userspace program will replace it with a good version.


I don't mean this to be rude or as an attack, but do you just auto update without validation?

This appears to be a clear fault from the companies where the buck stops - those who _use_ CS and should be validating patches from them and other vendors.


I'm pretty sure Crowdstrike auto-updates, with zero option to disable it or manually roll out updates. Even worse, people running N-1 and N-2 channels also seem to have been impacted by this.


My point stands then. If you're applying kernel-grade patches to machines where you knowingly cannot disable or test them, that's just simple negligence.


I think it's probably not a kernel patch per se. I think it's something like an update to a data file that Crowdstrike considers low risk, but it turns out that the already-deployed kernel module has a bug that means it crashes when it reads this file.


Which suggests the question: What's the current state of "fuzz testing" within the Crowdstrike dev org?


Apparently, CS and ZScaler can apply updates on their own, and that's by design, with 0-day patches expected to be deployed the minute they are announced.


CS, S1, Zscaler, etc. auto-update, and they have to. That's the point of the product. If they don't get definitions, they cannot protect.


Why do they "have to"? Why can't company sysadmins at minimum configure rolling updates or have a 48-hour validation stage - either of which would have caught this? Auto-updating external kernel-level code should never, ever be acceptable.


If you have a 48 hour window on updating definitions, your machines all have 48 extra hours they are vulnerable to 0-days.


But isn't that a fairly tiny risk, compared with letting a third party meddle with your kernel modules without asking nicely? I've never been hit by a zero-day (unless Drupageddon counts).


I would say no, it's definitely not a tiny risk. I'm confused what would lead you to call getting exploited by vulnerabilities a tiny risk -- if that were actually true, then Crowdstrike wouldn't have a business!

Companies get hit by zero days all the time. I have worked for one that got ransomwared as a result of a zero day. If it had been patched earlier, maybe they wouldn't have gotten ransomwared. If they start intentionally waiting two extra days to patch, the risk obviously goes up.

Companies get hit by zero day exploits daily, more often than Crowdstrike deploys a bug like this.

It's easy to say you should have done the other thing when something bad happens. If your security vendor was not releasing definitions until 48 hours after they could have, and some huge hack happened because of that, the internet commentary would obviously say they were stupid to be waiting 48 hours.

But if you think the risk of getting exploited by a vulnerability is less than the risk of being harmed by Crowdstrike software, and you are a decision maker at your organization, then obviously your organization would not be a Crowdstrike customer! That's fine.


CS doesn't force you to auto-upgrade the sensor software – there is quite some FUD thrown around at this moment. It's a policy you can adjust and apply to different sets of hosts if needed. Additionally, you can choose if you want the latest version or a number of versions behind the latest version.

What you cannot choose, however - at least to my knowledge - is whether or not to auto-update the release channel feed and IOC/signature files. The crashes that occurred seem to have been caused by the kernel driver not properly handling invalid data in these auxiliary files, but I guess we have to wait on/hope for a post-mortem report for a detailed explanation. Obviously, only the top-paying customers will get those details...


stop the pandering. you know very well crowdstrike doesn't offer good protection to begin with!

everyone pays for legal protection. after something happens you can show you did everything (which means nothing - well, now this shows even worse than nothing) by showing you paid them.

if they tell you to disable everything, what does it change? they're still your blame shield. which is the reason you have cs.

... the only real feature anybody cares about is inventory control.


Quite a few people in this thread disagree with you though.


you mean their careers depend on them claiming ignorance.


You said Crowdstrike doesn't offer protection, but there are plenty of people in this thread who suggested it actually does, and it seems to be highly regarded in the field.

Not sure who the ignoramus is here...


facts speak louder than words. if you cared about protection you would be securing your system, not installing yet more things, especially one that requires you to open up several other attack vectors. but i will never manage to make you see it.


Writing mission-critical software in the safest programming language, deployed on the most secure and stable OS that the world depends on, would be a developer's wet dream.

But it's just that... a developer's wet dream.



