Has anyone tried the PoC against one of the anomalous-process-behavior tools (Carbon Black, AWS GuardDuty, Sysdig, etc.)? I’m curious how likely it is that someone would have noticed relatively quickly had this rolled forward, and this seems like a perfect test case for that product category.
Depends on how closely the exploit mirrors and/or masks itself within normal compression behavior, imo.
I don’t think GuardDuty would catch it, as it doesn’t look at processes the way an EDR does (CrowdStrike, Carbon Black), and I don’t think Sysdig would catch it, as it looks at containers and cloud infra. Handwaving some complexity here, as GuardDuty and Sysdig could probably catch something odd via the privileges gained and the threat actor’s follow-on activity through this exploit.
So imo that means EDRs (monitoring processes on endpoints) or software supply chain evaluations (monitoring security problems in upstream FOSS) are the tools most likely to catch the exploit itself.
Interestingly, this leads into another fairly large security theme: dev teams can dislike putting EDRs on boxes because of the hit on compute and the UX issues if a containment happens, and can dislike policy limits around FOSS use. So this exploit hits at the heart of an org-driven “vulnerability” that has a lot of logic behind staying exposed to it or fixing it, depending on where you sit. The security industry’s problem set in a nutshell.
The main thing I was thinking is that the audit hooking, and especially the runtime patching across modules (liblzma5 patching functions in the main sshd code block), seems like the kind of thing a generic behavioral profile could catch, especially one driven by the fact that sshd does not do any of that normally (rough sketch of such a check below).
And, yes, performance and reliability issues are a big problem here. When Carbon Black takes down production again, you probably end up with a bunch of exclusions, which means an actual attacker might be missed.
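To make the “sshd doesn’t normally do that” heuristic concrete, here’s a minimal sketch of one userspace way to look for cross-module runtime patching: compare a running sshd’s executable text pages against the on-disk binary. The binary path, the root-only /proc reads, and the blanket byte-for-byte comparison are all simplifying assumptions (relocations or IFUNC resolution can produce benign diffs, and the xz hook worked largely through function pointers rather than text edits); a real EDR would do this in-kernel with far more care.

```python
#!/usr/bin/env python3
# Hedged sketch: flag sshd processes whose in-memory executable text no
# longer matches the on-disk binary, one possible signal of cross-module
# runtime patching. Assumptions: sshd lives at /usr/sbin/sshd, we run as
# root, and benign diffs (e.g. text relocations) are rare enough to triage.
import os

SSHD = "/usr/sbin/sshd"  # assumed install path

def exec_mappings(pid, path):
    """Yield (start, end, file_offset) for r-xp mappings backed by `path`."""
    with open(f"/proc/{pid}/maps") as maps:
        for line in maps:
            parts = line.split()
            if len(parts) >= 6 and parts[1] == "r-xp" and parts[5] == path:
                start, end = (int(x, 16) for x in parts[0].split("-"))
                yield start, end, int(parts[2], 16)

def check(pid):
    with open(SSHD, "rb") as f:
        disk = f.read()
    with open(f"/proc/{pid}/mem", "rb", buffering=0) as mem:
        for start, end, off in exec_mappings(pid, SSHD):
            mem.seek(start)
            live = mem.read(end - start)  # may be short; compare the prefix
            if live != disk[off:off + len(live)]:
                print(f"pid {pid}: live text differs from {SSHD} at {start:#x}")

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            if f.read().strip() == "sshd":
                check(int(pid))
    except (PermissionError, FileNotFoundError, ProcessLookupError):
        pass  # not root, or the process exited mid-scan
```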
This kind of thing is quite difficult to do generally, because it is very easy to bypass, has significant performance impact, and has low reliability to boot.
I’m not saying it’s trivial, but in this case couldn’t you basically enforce W^X and watch process startup for anomalous behavior? Monkey-patching other libraries’ code should be pretty uncommon.
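For what it’s worth, the cheap version of that W^X check is just looking for mappings that are simultaneously writable and executable. This is a polling sketch over /proc, purely illustrative; a real detector would hook mmap/mprotect in the kernel (or via eBPF) rather than race the attacker, and would need an allowlist since JITs legitimately create w+x pages:

```python
#!/usr/bin/env python3
# Hedged sketch: report any mapping that is both writable and executable
# (a W^X violation). Polling /proc like this is racy and easy to evade;
# it only illustrates the shape of the signal, not a real detector.
import os

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/maps") as maps:
            for line in maps:
                perms = line.split()[1]
                if "w" in perms and "x" in perms:
                    print(f"pid {pid}: w+x mapping: {line.strip()}")
    except (PermissionError, FileNotFoundError):
        continue  # not root, or the process exited mid-scan
```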
Sysdig released a blog post on Friday: "For runtime detection, one way to go about it is to watch for the loading of the malicious library by SSHD. These shared libraries often include the version in their filename."
The post also includes the actual rule content, which I haven’t seen from other security vendors.
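For anyone who wants the gist without the blog post: the filename-based idea boils down to checking which liblzma build sshd has mapped. A minimal sketch, assuming the publicly reported backdoored xz releases (5.6.0 and 5.6.1) and a standard /proc layout; the real Sysdig/Falco rule fires at library-load time rather than polling like this:

```python
#!/usr/bin/env python3
# Hedged sketch of the filename-based detection: list the liblzma builds
# mapped into running sshd processes and flag the versions publicly
# reported as backdoored (xz 5.6.0 and 5.6.1).
import os

BACKDOORED = ("liblzma.so.5.6.0", "liblzma.so.5.6.1")

for pid in filter(str.isdigit, os.listdir("/proc")):
    try:
        with open(f"/proc/{pid}/comm") as f:
            if f.read().strip() != "sshd":
                continue
        with open(f"/proc/{pid}/maps") as maps:
            libs = {line.split()[-1] for line in maps if "liblzma" in line}
    except (PermissionError, FileNotFoundError):
        continue  # not root, or the process exited mid-scan
    for lib in sorted(libs):
        verdict = "FLAG" if lib.endswith(BACKDOORED) else "ok"
        print(f"pid {pid}: {lib} [{verdict}]")
```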
That relies on knowing what to look for, i.e. "the malicious library". The question is whether any of these solutions could have caught it without knowing about it beforehand and having a detection rule made specifically for it.
Thanks! That’s a little disappointing, since I would have thought the way it hooked those functions could’ve been caught by a generic heuristic, but perhaps that’s more common than I thought.
My experience from working in the security space is that all the tech is pretty un-sexy (with very good sales pitches), and none of it will save you from a nation-state attacker.