> FWIW, as a Facebook engineer you have a ton of trainings on how to handle data privacy. And not only is every place where you can touch data actively logged/audited/monitored (this includes DB reads from code, admin tools, etc.), but to access any data you have to explicitly request permission for that specific data.
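For concreteness, a minimal sketch of what always-on access logging could look like, assuming a simple key-value read API (every name here is hypothetical, not Facebook's actual tooling):

```python
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("data_access_audit")

def audited_read(db, table, row_id, employee_id, justification):
    """Fetch a row, emitting an audit record before the data is returned,
    so reviewers can later reconstruct who touched what, when, and why."""
    audit_log.info(
        "read table=%s row=%s by=%s reason=%r at=%s",
        table, row_id, employee_id, justification,
        datetime.now(timezone.utc).isoformat(),
    )
    return db.get(table, row_id)  # `db.get` is an assumed interface
```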
Really? So are stories like [1] complete lies? Or does someone inside just blindly grant these "explicitly requested permissions"?
You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.
How else would you suggest to do privacy checks like these?
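For illustration, a rough sketch of the request/justify/approve/review flow described above, modeled in-process (all names are hypothetical, not real internal tooling):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from enum import Enum
from typing import Optional

class Status(Enum):
    PENDING = "pending"
    APPROVED = "approved"
    DENIED = "denied"

@dataclass
class AccessRequest:
    requester: str
    resource: str        # e.g. a specific user ID or dataset
    justification: str   # "I need it to debug issue #123"
    status: Status = Status.PENDING
    decided_by: Optional[str] = None
    created_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

def approve(req: AccessRequest, reviewer: str) -> None:
    """A human grants the request up front; the record is kept."""
    req.status = Status.APPROVED
    req.decided_by = reviewer

def async_review_queue(requests: list[AccessRequest]) -> list[AccessRequest]:
    """Everything already approved still gets double-checked later."""
    return [r for r in requests if r.status is Status.APPROVED]
```

The design point is that the approval is synchronous and human, while the second-pass review is asynchronous, so a bad approval can still be caught after the fact.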
> You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.
OK so an insider can just lie and access whatever they want. Heck, they can even tell the truth! Just find a bug that's exhibited in a particular profile and use that as an excuse to look at the profile.
> How else would you suggest to do privacy checks like these?
Whatever Google does. I don't know the details. But, for starters, my understanding is that their interns generally can't do what you just described, so fixing that would be one obvious step forward.
Facebook's data is very different from Google's. At Facebook you might have a bug that's related to how many thousands of different objects (and their specific properties) interrelate. How could you safely mock that out?
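One conceivable answer, sketched below as a toy, is structure-preserving scrubbing: copy the graph's shape and relationships but replace every value. The dict-based node shape is an assumption for illustration; the hard part, as the parent says, is doing this safely across thousands of interrelated object types:

```python
import uuid

def scrub_graph(node, seen=None):
    """Copy an object graph, preserving its shape and relationships but
    replacing every property value with a placeholder.

    Assumes each node is a dict with a unique 'id', scalar properties,
    and an 'edges' list of related nodes (a toy model, not FB's schema).
    """
    if seen is None:
        seen = {}
    if node["id"] in seen:        # preserve shared references and cycles
        return seen[node["id"]]
    fake = {"id": str(uuid.uuid4()), "edges": []}
    seen[node["id"]] = fake
    for key, value in node.items():
        if key not in ("id", "edges"):
            fake[key] = f"<fake {type(value).__name__}>"
    for child in node["edges"]:
        fake["edges"].append(scrub_graph(child, seen))
    return fake
```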
Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so, until an FB insider replied and brought down your narrative.
Is it that hard to say "ok well, I stand corrected then" instead?
> Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so
No, you are seemingly deliberately misquoting me. I said I "don't know the details", not "I have no idea". I know enough to feel fairly confident in what I've said. But if you don't believe me you're more than welcome to believe otherwise.
> until an FB insider replied and brought down your narrative. Is it that hard to say "ok well, I stand corrected then" instead?
Stand corrected about what narrative? Everything I am (and hopefully also you are) reading right here [1] [2] [3] [4] [5] quite clearly says malicious employees can access user data but will be fired if this is discovered, which is consistent with what I've said. (But don't bother replying if you expect a response; I have no interest in continuing this after your comment.)
He is not trolling. His core point is that there is no sufficient amount of training, expertise, monitoring, punishment, or trying harder the 17th time someone has been caught. If you leave the decision up to enough humans, you are by definition providing inferior security.
The real lesson from this story goes far deeper than Facebook. It is that Facebook employees, Google employees, and humans in general are susceptible to this very same "kompromat" concept, and susceptible to various forms of influence to a greater degree than our arrogance allows us to admit.
Human beings are attack vectors. Human beings are too self-centered to do much about this in any meaningful sense. They can laugh the very idea away too easily.
Reminds me of an apocryphal story (I can't find a reference, but it appears plausible): the FCC was investigating the sale of illegal TV satellite descramblers when they confiscated a unit. Upon investigation, it was found to have been manufactured by IBM! Further investigation revealed it was made at a secure IBM facility used for top-secret ("need-to-know", etc.) projects. The manager responsible had split the work up so that no single employee there knew what they were building; they didn't need to know, so they just knew enough to do their bit.
I know it's not the same, but this reminds me of that story.
On an only slightly different topic: about 15 years ago there was a pretty healthy community of people distributing circuit boards and accompanying software to program DirecTV smart cards. These would unlock all of the channels that "Dave was already beaming at everyone's house anyway", in the in-group parlance used to absolve oneself of such things.
A decent part of that conversation centered on how unlikely it seemed that the whole hack was even possible without insider information leading to the development of the tool in the first place.
Fifteen years later, knowing what hacks have at least been claimed to have been pulled off through social engineering, I think the more important takeaway is that we need to stop portraying the worst case of hacking as a masked man executing some Bond-villain-style hack, because that recommends a terrible heuristic: it casts aside all the incompetence that is just as capable of causing harm and, by sheer volume, far more likely to occur.
To be fair, at least at FB (I can't speak to Google or Apple or Amazon):
1. Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.
2. Privacy-related issues are escalated to the highest severity immediately (on par with data centers being down, etc.). I think the question in this whole debate is where you draw the line for this kind of issue, and what's an issue and what's a feature.
> Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.
This means they are capable of doing it and are merely punished afterwards, right? Not to mention that I would imagine getting fired in exchange for viewing private data could be quite a worthwhile 'transaction' for some people in some cases.
Wouldn't it be better to have a tool that automatically creates one or more profiles similar to those the dev needs for debugging purposes, but fills them with fake data? That way the bug is reproducible, but the users' data is not accessible to the dev.
There's a tool for that, and it's certainly the preferred way to debug. Between that and all the telemetry you get, in the vast majority of cases you don't need to touch anyone's data.
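Neither comment names the tool, but conceptually such a generator might resemble this sketch, which clones the shape and types of a template profile while inventing every value (purely illustrative, not the actual tool):

```python
import random
import string

def fake_profile(template: dict) -> dict:
    """Build a debug profile with the same fields and types as a real
    one, but entirely synthetic values."""
    def fake_value(v):
        if isinstance(v, bool):  # check bool before int (bool subclasses int)
            return random.choice([True, False])
        if isinstance(v, int):
            return random.randint(0, 10_000)
        if isinstance(v, str):
            return "".join(random.choices(string.ascii_lowercase, k=len(v)))
        if isinstance(v, list):
            return [fake_value(x) for x in v]
        if isinstance(v, dict):
            return {k: fake_value(x) for k, x in v.items()}
        return None  # unknown types are blanked out
    return {k: fake_value(v) for k, v in template.items()}

# Usage: reproduce a bug against a profile shaped like a real one.
repro = fake_profile({"name": "Jane Doe", "friend_count": 1234,
                      "interests": ["hiking", "jazz"]})
```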
[1] https://news.ycombinator.com/item?id=16675503