#1 - There's always a back door. I did some medical records stuff for a while. I looked myself up, just to confirm for myself how trivial it was to do. Yup, there I was. Which is why I insist that all data at rest is encrypted (a sketch of what I mean follows this list). (I have yet to win this argument.)
#2 - Our "portal" product had access logs for auditing. Plus permissions, consent trees, delegation. The usual features. Alas. We also had a "break the glass" scenario, ostensibly for emergency care, but in practice it was more like the happy path (second sketch below). And to my knowledge, during my 6 years, none of our customers ever audited their own logs.
#3 - My SO at the time worked in a hospital and went to another, disconnected hospital for care, because she knew her coworkers routinely (and illegally) looked up patient records, and she didn't want them spying on her.
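For #1, here's a minimal sketch of what "encrypt at rest" can mean at the application layer, assuming Python's `cryptography` package; the key handling and record names are invented for illustration, not taken from any real medical-records system:

```python
# Minimal sketch: application-level encryption at rest with Fernet
# (AES-128-CBC + HMAC) from the `cryptography` package. Illustrative only.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice: fetched from a KMS/HSM, never stored beside the data
cipher = Fernet(key)

def store_record(db: dict, patient_id: str, record: str) -> None:
    # Only ciphertext ever reaches the storage layer.
    db[patient_id] = cipher.encrypt(record.encode("utf-8"))

def load_record(db: dict, patient_id: str) -> str:
    return cipher.decrypt(db[patient_id]).decode("utf-8")

db = {}
store_record(db, "patient-123", "dx: hypertension")
assert load_record(db, "patient-123") == "dx: hypertension"
```

And for #2, a toy version of a "break the glass" path: access is granted anyway, and the only thing separating emergency care from routine misuse is a log line that someone has to actually read. All names are hypothetical.

```python
# Hypothetical break-glass sketch: unauthorized access still succeeds,
# it just leaves a louder audit entry behind.
import logging
from datetime import datetime, timezone

logging.basicConfig(level=logging.INFO)
audit = logging.getLogger("phi_access")
RECORDS = {"patient-123": "dx: hypertension"}  # toy data store

def access_record(user: str, patient_id: str, authorized: bool, reason: str = "") -> str:
    if not authorized:
        # The break-glass path: nothing is blocked, only logged.
        audit.warning("BREAK-GLASS %s user=%s patient=%s reason=%r",
                      datetime.now(timezone.utc).isoformat(), user, patient_id, reason)
    else:
        audit.info("access user=%s patient=%s", user, patient_id)
    return RECORDS[patient_id]

access_record("nurse42", "patient-123", authorized=False, reason="ER, patient unconscious")
```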
As an ex-employee, I feel much more confident in Facebook's processes than the company you're describing. Facebook would have no problem terminating people who do what you're describing.
Imagine you are the Egyptian government. You want to squash a social-media-fueled rebellion, led by some anonymous person. How hard is it to get one of your bright and loyal minds hired by Facebook? How much data could such a person exfiltrate before getting fired?
The 'We will log your access and fire you' line of defense prevents nothing when the person only took the job in order to move data out.
Batch scraping or drip-feeding data via a monitored internal tool? I doubt they’d get much out at all. It’s inherently a low-bandwidth and very obvious channel.
Someone in that position would be much better off building a back door into the system. But by that logic, they could also build a back door into iCloud, or scrape Gmail data from within Google.
I assume that Facebook has mechanisms to check that new hires (especially foreign nationals) are legitimate.
Doesn't matter. At the first hint of trouble you escape back to your country, protected by the government that sent you there in the first place, and the rebels are murdered thanks to the data you got out.
As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?
Another ex-FB employee here. I can't believe this is even a thing people are wondering about.
Of course the average employee can't access user data; it's an immediate firing offense.
> Another ex-FB employee here. I can't believe this is even a thing people are wondering about. Of course the average employee can't access user data; it's an immediate firing offense.
Ironically, you're undermining your own point. The fact that they would be fired afterwards in no way contradicts the notion that they could access such data, and in fact suggests they can (hence the firing policy).
Yet another ex-FB here. When I was there I think it was possible for engineers to access pretty much anything programmatically, although the vast majority never have any reason to go near the systems that would allow them to do so. During onboarding we were basically told “If you look at any data that’s not yours, assume you will be fired”.
Everything is logged, so if you might have looked at anything you shouldn’t have, it’s flagged and you’re audited; if you didn’t have permission (from a user and/or manager) and a valid business reason, then (we were told during onboarding) you’re likely to be fired and possibly sued.
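Purely as an illustration of the kind of gate being described (this is not Facebook's actual tooling): a read that refuses to return anything without a recorded business reason, so the justification exists before the access rather than being reconstructed afterwards. All names and the ticket reference are invented.

```python
# Hypothetical access gate: reads require a recorded business reason up front.
AUDIT_TRAIL: list = []

def read_user_data(engineer: str, user_id: str, business_reason: str = "") -> str:
    if not business_reason:
        raise PermissionError(f"{engineer}: access to {user_id} denied (no business reason)")
    AUDIT_TRAIL.append({"who": engineer, "target": user_id, "why": business_reason})
    return f"<profile data for {user_id}>"  # stand-in for the real fetch

print(read_user_data("eng1", "user-42", "debugging ticket #1234, with user consent"))
try:
    read_user_data("eng2", "user-42")
except PermissionError as err:
    print(err)  # eng2: access to user-42 denied (no business reason)
```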
Thank you for the response. Question: if you (assume an average Facebook engineer for this discussion) observe a bug with a particular profile (normal severity: not obviously critical, but not trivial either) that you cannot otherwise reproduce, and it is determined that addressing it would involve looking at the user's private data, then I assume that would be a valid business reason to do so. Now, is it possible to do this without explicitly (re-)obtaining the user's permission for this incident, or is it assumed the user has already agreed to this somewhere in the ToS or otherwise? And if this is possible, then what stands in the way of someone opportunistically finding bugs that provide convenient cover for looking at users' private data?
The reality is that huge amounts of personal data were harvested by third parties through app permissions - apparently with FB’s knowledge and support.
No one needs back door hacks to get into a vault when the front door is wide open.
Maybe it's irrelevant to you but I'm sure it's mighty relevant to some other users whether they are notified before employees dig into their private data to fix random bugs.
I’m afraid I don’t know the answer. I’m confident that such a thing would be quickly recognised as suspicious, so that sounds pretty far-fetched. Most of the time, it’s someone with moderation powers interacting with anything potentially sensitive; a regular engineer is going to be using test accounts, their own account, or asking someone else to look at the issue for them.
Are you genuinely asking a question you would like to know the truthful answer to, or are you just interested in confirming the strong preexisting bias on display in each of your comments on this story?
You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.
There are only a few roles (moderation) who can access the relevant tools, and while engineers may technically have programmatic access (how would you expect things to work if nobody did?), this is thoroughly logged and you'd better have an ironclad justification not to get fired on the spot.
No, I'm interested in knowing the truthful answer. It's just that I've received plenty of seemingly truthful responses (both here and elsewhere, e.g. [1]) that seem quite consistent with the notion that an average employee turned malicious would be capable of accessing user data, any punishments notwithstanding.
> You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.
(a) How do you know, and (b) so what is your explanation of stories like [1]? They're just hoaxes?
> and while engineers may technically have programmatic access (how would you expect things to work if nobody did?)
Again you are wording this in quite a vague, lawyer-y manner, which again raises my eyebrows. "May" as in "might", or as in "do"? And "engineers" as in what fraction of them? There is a lot of wiggle room between "nobody" and "all engineers". It's quite strange that I can't get a straightforward, crystal-clear denial to a non-weasel-worded claim from you who seem to be confidently contesting what I'm saying. Please don't keep muddying the waters.
Regarding your question about a dev setting up a test server and accessing live data, that hole has been closed for years. There is some data that an average employee just cannot get to. For some data a dev can access it but the pattern of access and amount of data accessed will be audited and anomalies will raise an alarm.
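Conceptually, that kind of volume audit is simple; here's a toy version, with the threshold and log format invented for illustration:

```python
# Toy pattern-of-access audit: count distinct records each employee touches
# per day and flag outliers. Threshold and row format are invented.
from collections import defaultdict

DAILY_LIMIT = 50  # hypothetical ceiling on distinct profiles per employee per day

def flag_anomalies(access_log):
    """Rows are (day, employee, record_id); returns flagged employees."""
    seen = defaultdict(set)
    for day, employee, record_id in access_log:
        seen[(day, employee)].add(record_id)
    return {emp for (day, emp), records in seen.items() if len(records) > DAILY_LIMIT}

log = [("2018-04-01", "mallory", f"user-{i}") for i in range(120)]
log.append(("2018-04-01", "alice", "user-7"))
assert flag_anomalies(log) == {"mallory"}
```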
As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure. The people building the internal controls and defenses are smarter than you, they know what needs to be protected and are rather devious about thinking up attack scenarios and possible paths of compromise, and eventually get tired of repeating the same answers. Want to know more? Too bad.
> As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure.
Where did I ask for "deep details about security policy and procedure"?
> Want to know more? Too bad.
No, but thanks.
> There is some data that an average employee just cannot get to.
"Some data" means nothing. I'm sure this is true in many, many companies, ranging from the most competent to the most incompetent.
> For some data a dev can access it but the pattern of access and amount of data accessed will be audited and anomalies will raise an alarm.
I think what you're asking for here you're never going to get. Nobody who works there currently will tell you because they'd get fired (and everyone has bills to pay). People who worked there in the past aren't going to tell you because #1) it's bad practice/bad op-sec/it's uncouth/whatever, #2) if they did it would negatively impact their future prospects and reputation. Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z"
At the end of the day, the data is there - they have it. Possession is arguably MORE than 9/10 of the law in this situation. They can access it whenever they want -- trivially, if they are rogue or have no concern for keeping their job. This is true of just about any huge company that employs a lot of people -- but they're not going to say they can. Why would they?
> Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z"
For goodness' sake, please stop these straw-man arguments. I said this above once, but it seems I have to say it again: nobody ever asked for that level of detail. People have been struggling with far more basic issues. No current or ex-employee or intern has even come along to try to say something simple like "as far as I know, the average Facebook intern simply cannot access private user data regardless of any business reasons"; indeed, we've gotten anecdotes that the opposite has actually happened. How you suddenly deduce that I'm looking for specific descriptions of what teams can access what data is just beyond me.
I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"
> I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"
That could be answered with something vague like "yes, this requires permissions from a small team of trusted individuals, which are granted only if the issue is severe/cannot otherwise get immediate attention/cannot be addressed by that team/etc., and it's never granted to most interns". No need for jumping to "X-dev-team #1 has access to X, Y, and Z".
Really? That was a pretty specific question, and you were looking for (and would accept) a vague answer? It doesn't matter anyway; again, they have no incentive to tell you that, vague or not. Nobody who knows the answer to that question is dumb enough to answer it (I would hope).
Yes, really. And I don't see why it would be dumb to answer that question, but no need to go on that tangent. If people can't respond then they can live with that being interpreted however it is.
I've read that, for a time, "view anyone's profile" was an advertised perk of being a Facebook employee (maybe just a wink-wink, nudge-nudge thing in an interview, I have no firsthand experience). I'm sure they don't do that anymore, but how much have they really tightened up the ship after having a culture like that?
Facebook has a long history of employees doing sketchy shit like peeking at the profile/timeline of the new SO of their former SO. This has been one of the top threat scenarios internally for more than a decade, and they have built significant security infrastructure to protect against this sort of problem. Yes, there is 'always a back door', but that back door has gotten smaller and much harder to find over the years. It is always a possibility, and while the system will prevent attempts to exfil large chunks of data, for smaller breaches like this the audits/alarms will probably take a day or so before you are sitting in someone's office with HR present to have a discussion regarding your user data access patterns. So compromise the security infra, you say? Yeah, there are other systems watching for that too...
HITRUST CSF is a framework for auditably proving HIPAA compliance. It prescribes controls such as encrypting data at rest. If you have a business relationship with a company that provides you PHI without explicit user consent, you must have an agreement (a BAA) with that third party which puts them under the same requirements (backed up with third-party audits).
Everything you’re describing sounds like it’s either incredibly fly by night, not in the US, or substantially out of date. If the last two aren’t true, you have a situation that is literally illegal.
I've worked in health care a couple of times now. And while the companies I've worked for have gone well beyond the minimum required for legal compliance, the scary bit really is the sorts of things you could do, if you were lazy enough, and still be legally compliant.
Yeah, HIPAA has some holes you could drive a truck through. I also hate OAuth (so much focus on access, so little focus on what gets done with that access).
Uh huh. We were the first to market with portable electronic medical records. "Fly by night." Sounds about right.
In the USA, there is no way to encrypt medical records at rest and permit data interchange. Because in the USA we do not have universal MRNs (PIDs, GUIDs, whatever). Meaning that if demographic data is encrypted, the system cannot match records across org boundaries, meaning care providers aren't 100% sure they have the correct medical history for the patient, meaning prescription errors, cutting off the wrong arm, misdiagnosis, etc.
Some enclaves like Medicare and the VA can encrypt their own data for their own usage, but that protection is moot the moment data is shared with other orgs. It's been a while since I've checked, but I doubt they do encrypt, because that's a bottom-up design decision.
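To make the matching constraint above concrete: without a universal ID, cross-org matching keys on normalized demographics, roughly like the sketch below (purely illustrative). If each org encrypts those fields under its own key, this join is no longer possible.

```python
# Illustrative only: absent a universal MRN, cross-organization matching
# falls back to normalized demographic fields.
import hashlib

def match_key(last_name: str, first_name: str, dob: str, zip_code: str) -> str:
    norm = "|".join(s.strip().lower() for s in (last_name, first_name, dob, zip_code))
    return hashlib.sha256(norm.encode("utf-8")).hexdigest()

# Two orgs derive the same key from independently entered demographics.
org_a = match_key("Doe", "Jane", "1980-02-29", "94110")
org_b = match_key("doe", " Jane ", "1980-02-29", "94110")
assert org_a == org_b
```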
Surprise: regulating and legislating doesn’t actually make bad behaviour go away. I too have had the experience of interning at a medical software company where security and patient privacy were a joke.
You might as well connect that whistle to an air compressor, if my experience is anything to go by. Very few companies have their house in order, and healthcare is definitely not an exception.