He is not trolling. His core point is that there is no sufficient amount of training, or expertise, or monitoring, or punishment, or trying harder the 17th time you've been caught. If you leave the decision up to enough humans, then you are by definition providing inferior security.
The real education from this story is far deeper than just Facebook. It is that Facebook employees, and Google employees, and all humans in general are susceptible to this very same "kompromat" concept, and are all susceptible to various forms of influence to greater degrees than our arrogance allows us to admit.
Human beings are attack vectors. Human beings are also too self-centered to do much about this in any meaningful sense; they laugh the very idea away too easily.
Reminds me of an apocryphal story (I can't find a reference, but it appears to be plausible): the FCC was investigating the sale of illegal TV satellite descramblers when they confiscated a unit. Upon investigation, it was found to have been manufactured by IBM! Further investigation revealed it was manufactured at a secure IBM facility used for top-secret ("need-to-know", etc.) type projects. The manager responsible had split the work up such that no single employee there knew what they were building (because they didn't need to know; they just knew enough to do their bit).
I know it's not the same, but this reminds me of that story.
On an only slightly different topic, about 15 years ago there was a pretty healthy community of people distributing the circuit boards and accompanying software to program DirecTV smart cards. These would unlock all of the channels that "Dave was already beaming at everyone's house anyway", according to the in-group parlance used to absolve oneself of such things.
A decent part of that conversation seemed to center around how it seemed highly unlikely that the whole hack was even possible without insider information leading to the development of the tool in the first place.
Fifteen years later, knowing what hacks have at least been claimed to have been pulled off through social engineering, I think the more important takeaway is that we need to stop portraying the worst case of hacking as a masked man executing some Bond-villain-style exploit, because that framing recommends a terrible heuristic. It by definition casts aside all of the incompetence that is equally capable of causing harm and, by sheer volume, far more likely to occur.
To be fair, at least at FB (I can't speak to Google or Apple or Amazon):
1. Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.
2. Privacy-related issues are escalated to the highest severity immediately (on par with data centers being down, etc.). I think the question in this whole debate is where you draw the line for this kind of issue, and what's an issue and what's a feature.
> Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.
This means they are capable of doing it and are merely punished afterwards, right? Not to mention that getting fired in exchange for viewing private data could be quite a worthwhile 'transaction' for some people in some cases.
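The distinction being drawn here is a standard one in access control: a *preventive* control blocks the access up front, while a *detective* control allows it and relies on audit plus punishment afterwards. A minimal sketch of the two models, with entirely hypothetical names (this is not any real company's API):

```python
# Preventive vs. detective access control, as discussed above.
# All names here are hypothetical illustrations.

# (employee, active task) pairs with a mission-critical justification on file
AUTHORIZED = {("alice", "ticket-123")}

def preventive_access(employee, record, ticket):
    """Deny up front: the read fails unless it is tied to an
    authorized task, so the employee never gains the capability."""
    if (employee, ticket) not in AUTHORIZED:
        raise PermissionError("no mission-critical justification on file")
    return f"record:{record}"

audit_log = []

def detective_access(employee, record, reason):
    """Allow, but log: the employee CAN read the data; the control is
    after-the-fact review of the log, with firing as the deterrent."""
    audit_log.append((employee, record, reason))
    return f"record:{record}"
```

A "fired on the spot" policy is the detective model: the capability exists, and deterrence is the only control, which is exactly the gap the parent comment is pointing at.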