
I feel like someone should also give Zuckerberg the memo that it's only a matter of time before an insider also goes rogue and abuses data access (edit: or otherwise; see below). Facebook fundamentally seems to trust itself way too much, and it worries me that it thinks the only threats are external entities... to me, this is another silently ticking time bomb.

EDIT: And don't forget that going rogue is just one scenario. Another is just a bigger attack surface: the more insiders have broad system access, the more credentials there are that can be phished by/leaked to/stolen by outsiders. Really, it would be completely missing the point of security to have arguments about how exactly insiders' credentials might get compromised.



You'd think so, but most companies have pretty strict internal controls for this sort of thing. Access is also carefully logged, so a leaker is pretty much guaranteed to get caught, at which point they'd immediately lose their job and likely face criminal prosecution.

With so much to lose and so little to gain, internal leaks of this sort are extremely rare.


Who watches the watchers?

#1 - There's always a back door. I did some medical records stuff for a while. I looked myself up, just to confirm for myself how trivial it was to do. Yup, there I was. Which is why I insist that all data at rest be encrypted. (I have yet to win this argument.)

#2 - Our "portal" product had access logs for auditing. Plus permissions, consent trees, delegation. The usual features. Alas. We also had a "break the glass" scenario, ostensibly for emergency care, but it was more like the happy path. And to my knowledge, during my 6 years, none of our customers ever audited their own logs.

#3 - My SO at the time worked in a hospital and went to another, disconnected hospital for care, because she knew her coworkers routinely (and illegally) looked up patient records, and she didn't want them spying on her.


As an ex-employee, I feel much more confident in Facebook's processes than in those of the company you're describing. Facebook would have no problem terminating people who do what you're describing.


Imagine you are the Egyptian government. You want to squash a social-media-fueled rebellion led by some anonymous person. How hard is it to get one of your bright and loyal minds hired by Facebook? How much data could such a person exfiltrate before getting fired?

The 'We will log your access and fire you' line of defense deters no one who took the job solely for the purpose of moving data out.


Batch scraping or drip-feeding data via a monitored internal tool? I doubt they’d get much out at all. It’s inherently a low-bandwidth and very obvious channel.

Someone in that position would be much better off building a back door into the system. But then, they could just as well build a backdoor into iCloud, or scrape Gmail data from within Google.

I assume that Facebook has mechanisms to check that new hires (especially foreign nationals) are legitimate.


They don’t fire you, they arrest you. And then they find out who you work for.


Doesn't matter. At the first hint of trouble you escape back to your country, protected by the government which sent you there in the first place, and now the rebels have been murdered thanks to the data you got out.


As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?


Another ex-FB employee here. I can't believe this is even a thing people are wondering about. Of course the average employee can't access user data; it's an immediate firing offense.


> Another ex-FB employee here. I can't believe this is even a thing people are wondering about. Of course the average employee can't access user data; it's an immediate firing offense.

Ironically, you're undermining your own point. The fact that they would be fired afterwards in no way contradicts the notion that they could access such data, and in fact suggests they can (hence the firing policy).


Yet another ex-FB here. When I was there I think it was possible for engineers to access pretty much anything programmatically, although the vast majority never have any reason to go near the systems that would allow them to do so. During onboarding we were basically told “If you look at any data that’s not yours, assume you will be fired”.

Everything is logged, so if you might have looked at anything you shouldn’t have, it’s flagged and you’re audited; if you didn’t have permission (from a user and/or manager) and a valid business reason, then (we were told during onboarding) you’re likely to be fired and possibly sued.
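
A minimal sketch of what that kind of logged, permission-gated access could look like (everything here is invented for illustration; it is not Facebook's actual tooling):

    import logging

    logging.basicConfig(level=logging.INFO)
    audit = logging.getLogger("data-access-audit")

    # Hypothetical stand-ins: a datastore and a table of approved grants.
    DATASTORE = {42: {"name": "example user", "dob": "1990-01-01"}}
    GRANTS = {("alice", 42)}  # (employee, user_id) pairs a reviewer approved

    def fetch_user_record(employee, user_id, justification):
        """Every read hits the audit trail, whether it is allowed or not."""
        allowed = (employee, user_id) in GRANTS
        audit.info("%s -> user %s | reason=%r | allowed=%s",
                   employee, user_id, justification, allowed)
        if not allowed:
            raise PermissionError("no active grant for this data")
        return DATASTORE[user_id]

    fetch_user_record("alice", 42, "debugging issue #123")

The point of logging before the permission check, rather than after, is that the trail exists even for denied or abandoned attempts.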


Thank you for the response. Question: if you (assumed average Facebook engineer for this discussion) observe a bug (normal severity, not something obviously critical and not something conversely trivial) with a particular profile that you cannot otherwise reproduce, and it is determined that addressing it would involve looking at the user's private data, then I assume that would be a valid business reason to do so. Now, is it possible to do this without explicitly (re-)obtaining the user's permission for this incident, or is it assumed the user has already agreed to this somewhere in the ToS or otherwise? And if this is possible, then what stands in the way of someone opportunistically finding bugs that provide convenient covers for looking at user's private data?


FB’s internal security protocols are irrelevant.

The reality is that huge amounts of personal data were harvested by third parties through app permissions - apparently with FB’s knowledge and support.

No one needs back door hacks to get into a vault when the front door is wide open.


Maybe it's irrelevant to you but I'm sure it's mighty relevant to some other users whether they are notified before employees dig into their private data to fix random bugs.


I’m afraid I don’t know the answer. I’m confident that such a thing would be quickly recognised as suspicious, so that sounds pretty far-fetched. Most of the time, it’s someone with moderation powers interacting with anything potentially sensitive; a regular engineer is going to be using test accounts, their own account, or asking someone else to look at the issue for them.


Are you genuinely asking a question you would like to know the truthful answer to, or are you just interested in confirming the strong preexisting bias on display in each of your comments on this story?

You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.

There are only a few roles (moderation) who can access the relevant tools, and while engineers may technically have programmatic access (how would you expect things to work if nobody did?), this is thoroughly logged and you'd better have an ironclad justification not to get fired on the spot.


No, I'm interested in knowing the truthful answer. It's just that I've received plenty of seemingly truthful responses (both here and elsewhere, e.g. [1]) that seem quite consistent with the notion that an average-employee(-turned-malicious) would be capable of accessing user data, punishments and all notwithstanding.

> You asked about the "average employee" having access to user data, and the answer is unequivocally "no", with both technical and disciplinary safeguards.

(a) How do you know, and (b) so what is your explanation of stories like [1]? They're just hoaxes?

> and while engineers may technically have programmatic access (how would you expect things to work if nobody did ?)

Again you are wording this in quite a vague, lawyer-y manner, which again raises my eyebrows. "May" as in "might", or as in "do"? And "engineers" as in what fraction of them? There is a lot of wiggle room between "nobody" and "all engineers". It's quite strange that I can't get a straightforward, crystal-clear denial to a non-weasel-worded claim from you who seem to be confidently contesting what I'm saying. Please don't keep muddying the waters.

[1] https://news.ycombinator.com/item?id=16675664


Regarding your question about a dev setting up a test server and accessing live data, that hole has been closed for years. There is some data that an average employee just cannot get to. For some data a dev can access it but the pattern of access and amount of data accessed will be audited and anomalies will raise an alarm.

As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure. The people building the internal controls and defenses are smarter than you, they know what needs to be protected and are rather devious about thinking up attack scenarios and possible paths of compromise, and eventually get tired of repeating the same answers. Want to know more? Too bad.
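
For what it's worth, the volume-and-pattern auditing described above can be sketched in a few lines; the mechanism is not exotic, only the tuning is. A toy version (the threshold and window are invented):

    import time
    from collections import defaultdict, deque

    WINDOW_SECONDS = 3600
    MAX_READS_PER_WINDOW = 25      # made-up baseline for illustration

    _reads = defaultdict(deque)    # employee -> timestamps of recent reads

    def record_read(employee, now=None):
        """Track per-employee read volume and flag anomalies."""
        now = now or time.time()
        window = _reads[employee]
        window.append(now)
        while window and now - window[0] > WINDOW_SECONDS:
            window.popleft()
        if len(window) > MAX_READS_PER_WINDOW:
            print(f"ALERT: {employee} read {len(window)} records in the last hour")

    for _ in range(30):            # a burst like this trips the alarm
        record_read("mallory")

A real system would compare against each employee's own historical baseline rather than a fixed constant, but the shape is the same.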


> As for why no one is giving you a clear answer it is because there is no reason for anyone to tell some random person deep details about security policy and procedure.

Where did I ask for "deep details about security policy and procedure"?

> Want to know more? Too bad.

No, but thanks.

> There is some data that an average employee just cannot get to.

"Some data" means nothing. I'm sure this is true in many, many companies, ranging from the most competent to the most incompetent.

> For some data a dev can access it but the pattern of access and amount of data accessed will be audited and anomalies will raise an alarm.

This is yet again consistent with what I've said.


I think what you're asking for here you're never going to get. Nobody who works there currently will tell you because they'd get fired (and everyone has bills to pay). People who worked there in the past aren't going to tell you because #1) it's bad practice/bad op-sec/it's uncouth/whatever, #2) if they did it would negatively impact their future prospects and reputation. Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z"

At the end of the day, the data is there - they have it. Possession is arguably MORE than 9/10 of the law in this situation. They can access it whenever they want -- trivially if they are rogue or have no concern for keeping their job. But this is true of just about any huge company that employs a lot of people, and they're not going to say they can. Why would they?


> Nobody has any incentive to hand out definitive numbers or break it down into "X-dev-team #1 has access to X, Y, and Z"

For goodness' sake, please stop these straw-man arguments. I said this above once, but it seems I have to say it again: nobody ever asked for that level of detail. People have been struggling with far more basic issues. No current or ex-employee or intern has even come along to try to say something simple like "as far as I know, the average Facebook intern simply cannot access private user data regardless of any business reasons"; indeed, we've gotten anecdotes that the opposite has actually happened. How you suddenly deduce that I'm looking for specific descriptions of what teams can access what data is just beyond me.


I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"


> I suddenly deduced you were looking for specific descriptions a little ways up this comment tree where you asked the question: "As an ex-employee could you please also confirm whether or not the average employee is able to access user data, and what kinds of permissions (if any) this requires?"

That could be answered with something vague like "yes, this requires permissions from a small team of trusted individuals, which are granted only if the issue is severe/cannot otherwise get immediate attention/cannot be addressed by that team/etc., and it's never granted to most interns". No need for jumping to "X-dev-team #1 has access to X, Y, and Z".


Really? That was a pretty specific question, and you were looking for (and would accept) a vague answer? It doesn't matter anyway; again, they have no incentive to tell you that, vague or not. Nobody who knows the answer to that question is dumb enough to answer it (I would hope).


Yes, really. And I don't see why it would be dumb to answer that question, but no need to go on that tangent. If people can't respond then they can live with that being interpreted however it is.


I've read that, for a time, "view anyone's profile" was an advertised perk of being a Facebook employee (maybe just a wink-wink, nudge-nudge thing in an interview, I have no firsthand experience). I'm sure they don't do that anymore, but how much have they really tightened up the ship after having a culture like that?


If data at rest is unencrypted, I don't believe you. Sorry. Someone, somewhere is peeking at the naughty bits.

This is the best resource I've found for protecting such things:

Translucent Databases: Confusion, Misdirection, Randomness, Sharing, Authentication And Steganography To Defend Privacy http://a.co/eLgQACC

Maybe differential privacy stuff will supersede or complement these techniques. I'm keeping an open mind.
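
For the curious, one representative trick from the book: keep only a keyed hash of the sensitive identifier, so the application can still answer "is this the same person?" given the plaintext, while someone browsing the table at rest sees nothing useful. A minimal sketch (the key handling is simplified; a real deployment would use a managed secret):

    import hashlib, hmac, os

    SITE_KEY = os.urandom(32)   # illustration only; in practice a stable, managed secret

    def opaque_id(ssn: str) -> str:
        return hmac.new(SITE_KEY, ssn.encode(), hashlib.sha256).hexdigest()

    records = {opaque_id("123-45-6789"): {"allergy": "penicillin"}}

    # Lookup works only when the caller already holds the identifier:
    print(records[opaque_id("123-45-6789")])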


Facebook has a long history of employees doing sketchy shit like peeking at the profile/timeline of the new SO of their former SO. This has been one of the top threat scenarios internally for more than a decade, and they have built significant security infrastructure to protect against this sort of problem. Yes, there is 'always a back door', but that back door has gotten smaller and much harder to find over the years. It is always a possibility, and while the system will prevent attempts to exfil large chunks of data, for smaller breaches like this the audits/alarms will probably take a day or so before you are sitting in someone's office with HR present to have a discussion regarding your user data access patterns. So compromise the security infra, you say? Yeah, there are other systems watching for that too...

-Former custodiet of the custodes


HITRUST CSF is a framework for auditably proving HIPAA compliance. It prescribes controls such as encrypting data at rest. If you have a business relationship with a company which provides you PHI without explicit user consent, you must have an agreement (a BAA) with the third party which puts them under the same requirements (backed up with third-party audits).

Everything you’re describing sounds like it’s either incredibly fly by night, not in the US, or substantially out of date. If the last two aren’t true, you have a situation that is literally illegal.
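
Encrypting data at rest, as such frameworks prescribe, is mechanically straightforward. Here is a sketch using the third-party Python "cryptography" package (the key management is hand-waved; in practice the key would live in a KMS or HSM, never beside the data):

    import os
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    key = AESGCM.generate_key(bit_length=256)
    aead = AESGCM(key)

    def seal(record: bytes, record_id: bytes) -> bytes:
        nonce = os.urandom(12)                    # fresh nonce per record
        return nonce + aead.encrypt(nonce, record, record_id)

    def unseal(blob: bytes, record_id: bytes) -> bytes:
        return aead.decrypt(blob[:12], blob[12:], record_id)

    blob = seal(b'{"patient": "jane", "dx": "..."}', b"record-7")
    assert unseal(blob, b"record-7").startswith(b'{"patient"')

Binding the record ID in as associated data means a sealed record can't be silently swapped into another row, which is the kind of detail audits look for.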


I've worked in health care a couple of times now. And while the companies I've worked for have gone well beyond the minimum required for legal compliance, the scary bit really is the sorts of things you could, if you were lazy enough, do and still legally be compliant.


Yeah, HIPAA has some holes you could drive a truck through. I also hate OAuth (so much focus on access, so little focus on what gets done with that access).


Uh huh. We were the first to market with portable electronic medical records. "Fly by night." Sounds about right.

In the USA, there is no way to encrypt medical records at rest and permit data interchange. Because in the USA we do not have universal MRNs (PIDs, GUIDs, whatever). Meaning that if demographic data is encrypted, the system cannot match records across org boundaries, meaning care providers aren't 100% sure they have the correct medical history for the patient, meaning prescription errors, cutting off the wrong arm, misdiagnosis, etc.

Some enclaves like Medicare and VA can encrypt their own data for their own usage, but that protection is moot the moment data is shared with other orgs. It's been a while since I've checked, but I doubt they do encrypt, because that's a bottom up design decision.
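
To make the matching problem concrete: without a universal ID, cross-org linkage has to join on demographic fields, which is exactly why those fields end up in the clear. A toy illustration (all data invented):

    def match_key(rec):
        # Fragile by construction: typos, name changes, and shared
        # birthdays all break or mislead this kind of join.
        return (rec["last"].lower(), rec["first"][0].lower(), rec["dob"])

    org_a = {"first": "Jane", "last": "Doe", "dob": "1980-02-29", "rx": "warfarin"}
    org_b = {"first": "JANE", "last": "doe", "dob": "1980-02-29"}

    if match_key(org_a) == match_key(org_b):
        print("treating as the same patient; merging histories")

Encrypt those fields and the join disappears, along with the medical history, which is the trade-off being described.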


Surprise: regulating and legislating doesn’t actually make bad behaviour go away. I too have had the experience of interning at a medical software company where security and patient privacy were a joke.


That sucks, but you might consider next time talking to people and seeing if they are working to improve things and, if not, being a whistleblower.


You might as well connect that whistle to an air compressor, if my experience is anything to go by. Very few companies have their house in order, and healthcare is definitely not an exception to this.


Recent news ([1]): Facebook security boss says its corporate network is run "like a college campus"

1. http://www.zdnet.com/article/leaked-audio-facebook-security-...


most companies have pretty strict internal controls for this sort of thing

This does not ring true to me at all.


It was true at Google. It's certainly true at financial institutions. I dunno about Amazon. I'm not sure what other comparisons would be relevant here.

EDIT/NOTE: https://news.ycombinator.com/item?id=16675493


> It's certainly true at financial institutions

It's certainly not true at financial institutions. By financial institutions I mean Fortune 100 financial institutions, as well as smaller financial institutions.

If by "pretty strict internal controls" you mean they can, like Prince Potemkin, point to such things existing in some chimeric form, then yes, I suppose you are right. But in any real sense, no, there are no effective controls in the real world.

About 25 years ago I assumed it was early days for a lot of these things and they would sooner or later be closed up, but they haven't been. Things are wide open - as the recent Facebook / Cambridge Analytica revelations have shown. And in a very small and indirect way at that.

The first major book on this broad subject was Donn Parker's "Crime by Computer", published in 1976. The book opens by saying that a company's biggest enemy in terms of computer crime is its own employees. This is still true 40+ years later: the biggest enemy of the people who own companies is the people who do the work at them.


> It was true at Google.

Yes, because Google is not your average company. It takes security extremely seriously... in fact it's about as awful an example as you could give for a blanket statement about "most companies".


OK you're right that I overstated when I said "most companies." What I meant to say was most companies of the size and sophistication of Facebook that have a significant amount of private user information. Sorry for not being clear.


> What I meant to say was most companies of the size and sophistication of Facebook that have a significant amount of private user information.

Which is to say... Google and Amazon?


If you add "and notoriety" to that list of qualifiers, I agree with you!


I'm comfortable with such a qualification. Glad we can violently agree. ;-)


That's a really short list of companies, though! There are a lot of companies that each independently hold a ghastly amount of information about random people that have virtually no meaningful controls over this stuff. "No meaningful controls" is the norm.

I'd also say it's the norm among most Fortune 500 non-tech companies.


Yep. Equifax, Experian and Transunion come to mind.


Consider also every large adtech firm.


Even the very largest adtech firms don’t have messenger apps used by millions of people, social graphs of the population or control of large swaths of the internet infrastructure.

That’s not to say I disagree with you, but the data collected is (to me) orders of magnitude less sensitive.

*disclosure: I toil in the adtech mines.


These are great counterpoints to the view I expressed, and they do make me reconsider my assumptions somewhat. Thanks!


Internal abuse is a big area of effort for Facebook and Google, but things still go wrong. Here was Google's moment for that back in 2010:

https://www.wired.com/2010/09/google-spy/


> Google is not your average company. It takes security extremely seriously

While this is certainly true, you've admitted elsewhere not knowing anything specifically about either Google or Facebook's security process, so how can you compare them? You seem to just "know" Facebook doesn't take security seriously (which is of course a ludicrous thing to say).


> While this is certainly true, you've admitted elsewhere not knowing anything specifically about either Google or Facebook's security process

You already misquoted me once and I already replied to you. Why do you ignore it and do it again? Like I said: no, I never "admitted elsewhere not knowing anything specifically about either Google or Facebook's security process". You are misquoting me again just like you already did in [1], and it's quite improper that you choose to do this when I have already responded to you and called out your misrepresentation there. If you are looking for a response, see that post. If you are not, then please stop.

[1] https://news.ycombinator.com/item?id=16676704


I am most definitely not misrepresenting you.

People like me or [1] have called you out because you keep contrasting Google and Facebook's internal security processes for no good reason, making definitive assertions like "[Google] takes security very seriously" [2], suggesting that Facebook doesn't and should do "Whatever Google does" [3]. And you're doing this not based on any specific knowledge of what the internal security process looks like at either company, but on your (flawed) perception of what engineering interns might or might not be able to do.

When people like esman1 who actually have that knowledge and context, volunteer to explain to you [4] some of the safeguards in place (and he told you the truth), instead of taking the point, you won't have any of what he says and keep going at it stubbornly.

I think this is the point where reasonable people stop arguing, and anyone else who cares can check your comments in this thread and make their own opinion.

[1] https://news.ycombinator.com/item?id=16675843 [2] https://news.ycombinator.com/item?id=16675508 [3] https://news.ycombinator.com/item?id=16675707 [4] https://news.ycombinator.com/item?id=16675670


I'm not sure Google even has an internal red team that performs breaches; the last time I talked with someone there at a conference, they didn't (that was 2016). So I am not sure Google has metrics on how easy it is for an adversary to gain access.


> I'm not sure if Google even has an internal red team, last time I talked with someone there they didn't (was 18 months ago though).

2012: Google staffs up ‘Red Team’

And this was literally just a Google away: https://nakedsecurity.sophos.com/2012/08/24/google-red-team-...


Red team is an overloaded term: "Analyze software and services from a privacy perspective, ensuring they are in line with Google's stated privacy policies, practices, and the expectations of our users." Doesn't sound like adversary simulation to me.


https://careers.google.com/jobs#!t=jo&jid=/google/security-e...

The job even lists insider threat as part of their responsibility.


Yeah, still not the same as actually performing breaches themselves to see how long it takes to compromise, and if they get detected and how long it takes to remediate and evict the adversary. I should have been a bit clearer with what I meant initially.


How do you know there isn’t a team at Google doing this? It’s standard practice at companies of even middling size and Google is so large your friend might just be unaware of it.


A Google security manager told me at a conference when chatting about this in 2016. They were thinking of staffing a breach team, but did not have one then.


I thought Project Zero tries to find vulnerabilities in Google stuff too?


Project Zero is different compared to performing end-to-end breaches. A breach team might use 0-days from Project Zero to actually compromise Google's internal assets to see if their defenders can detect an adversary. FB has such a team, and they have given public presentations (one was at RuxCon 2016) on how they compromise, for instance, their domain controllers and such.


Google has a gaggle of security teams, almost all of which occasionally red team and some of which exclusively do. I'm not sure who told you otherwise but they were certainly mistaken. Source: I TL'd a security team there several years ago.


Thanks for pointing this out. I heard it from a security manager at Google at a conference in 2016. Good to hear that they do breach simulations now, besides regular pen testing and stuff.


I'm not sure if there was miscommunication or what, but Google has had teams that do this for a while now. I typically hear them referred to as orange teams.


I worked in 2014-2015 at Google on one of the (many) teams that did exactly that.


I believe it's true of Google! I do not believe it's true in general.


shrug

Without evidence we're both just guessing. Perhaps someone else will chime in with direct knowledge of how FB works.


Evidence suggests it wasn't true at the NSA five or so years back...

It's _probably_ true that things in general have gotten better since then, and it's probably true that they're better at _some_ companies like Google, Facebook, and Amazon - but I'd tend to agree that it's very unlikely to be true for "most companies".


The Snowden case is an interesting example. He went out of his way to get access to information, going so far as to transfer into a role that had more access (I don’t recall all the details but I remember that much). Every company has some category of employee whose job it is to ensure enforcement of policies, for instance, and if these people set out to subvert the system you should expect them to be able to do so. The watching watchers onion does eventually run out of skin (and it’s not even that deep most places).


The right person with the right access can do a tremendous amount of damage. 14 years ago some servers I was hired to maintain (marketing sites for a gambling site out of Costa Rica) were wiped out as part of an inside job: http://boston.conman.org/2004/09/19.1

Who watches the watcher indeed.


I believe it's true of Facebook as well.

Source: I interviewed with their security team once and got a fair idea of how their various security teams are organized.


> shrug. Without evidence we're both just guessing.

Do I understand correctly that you just admitted that your (extremely confident!) factual statement here:

> most companies have pretty strict internal controls for this sort of thing

was actually "just guessing"?


I'm guessing based on:

1) my direct knowledge of similar companies

2) the fact that no large scale leak from internal sources has happened from FB which is evidence that they have at least some internal controls or procedures to prevent one


Unfortunately, the internal tech infrastructures of many (not all) financial institutions are a mess of decades of mergers and acquisitions, resulting in a Rube Goldberg-like backend of seemingly endless unnecessary complexity and dysfunction, with, in many cases, only superficial controls around who gets access to what.


I worked at a trio of adtech firms, one of which anyone in the industry would absolutely recognize.

There were few effective internal controls. The obstacles to lookups were:

1 - All info was keyed by cookie, which users can clear, and which is very difficult to tie to an identity. That is, to look you up, I need the cookie from your machine.

1a - Most devs are not allowed to run the cluster jobs to look up data; only those on the appropriate teams are.

2 - But what about stapling? We required partners to pass us blind uids. Certainly nothing like emails.

3 - No data export. The business is to run ads on the customer's behalf, so there's no way built to export data except targeting lists to the exchanges.
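
On point 2, "blind uids" can be as simple as requiring the partner to send a keyed hash instead of an email or account ID. A sketch of the idea (the key name and scheme are illustrative, not any particular firm's):

    import hashlib, hmac

    PARTNER_KEY = b"per-partner secret, provisioned out of band"

    def blind_uid(partner_user_id: str) -> str:
        return hmac.new(PARTNER_KEY, partner_user_id.encode(),
                        hashlib.sha256).hexdigest()

    # The same input always maps to the same opaque token, so frequency
    # capping and audience membership still work without exposing identity:
    assert blind_uid("user@example.com") == blind_uid("user@example.com")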


> With so much to lose and so little to gain, internal leaks of this sort are extremely rare

I recently downloaded my Facebook archive [1]. If it were legal, I would certainly pay thousands if not tens of thousands of dollars for certain peoples' archives. I can think of several practical contexts in which an unethical actor would find it profitable to pay a Facebook employee a million dollars for someone's Facebook archives.

[1] https://www.facebook.com/help/131112897028467/


I would certainly pay thousands if not tens of thousands of dollars for certain peoples' archives

Really? For what purpose?


> For what purpose?

On the upside, any case where one is engaging in high-value transactions (broadly speaking). Knowing a negotiating counterpart's likes, dislikes, communication style, et cetera can help one avoid mistakes, build a personal connection and draft (and frame) terms correctly on the first try.

More seedily, such information about a political opponent (whether a politician, rival on a commercial or non-profit board, or commercial competitor) is useful.

As a risk mitigation tool, such data would find a natural home in a due diligence file. Prospective executives, board members, business partners, political donation recipients, et cetera expose one to reputational risks. Catching those in advance is already worth tens of thousands of dollars of legal time.

I would hate to live in a country where the above is legal. We should recognize the value of the information every single Facebook employee has routine access to.


Re-sell to news media for hundreds of thousands of dollars?


I could find several unethical contexts where the same actor is paid a million dollars to kill the same person, and the legal framework we live in does nothing to stop this.

Well, apart from post-factum incarceration.


> so little to gain

wait, what?

There's quite a lot to be gained. Enough to incentivize a very powerful attacker, possibly even a nation-state level actor who can extract the mole and protect / reward them.

The stakes are not low here; I can't imagine why you've said that.


In any organization there is some number of people, who, if sufficiently motivated, could work together to pull off a 'data heist' and not be immediately uncovered. At a good company that number is high, at a bad company that number is 1.

What do you think the number is at facebook? At google? At your bank? At your healthcare provider?


> You'd think so, but most companies have pretty strict internal controls for this sort of thing. Access is also carefully logged, so a leaker is pretty much guaranteed to get caught, at which point they'd immediately lose their job and likely face criminal prosecution.

That's not enough by any means (edit: and as [1] pointed out, I don't even think it's true). There needs to be more to security than mere deterrence. I'm pretty sure at Google, etc. it's simply impossible for a single rogue employee to mess with customer data (except for a few in very privileged positions), and my impression has been that Facebook is not like this at all (unless it has changed recently).

[1] https://news.ycombinator.com/item?id=16675494


That sort of access limitation is what I meant by "pretty strict internal controls."

Having never worked there, I can't speak to how it works at FB but I would imagine that there are a lot of limitations on what rank and file employees can do. I guess I could be wrong. Perhaps someone with direct knowledge will chime in.


> I can't speak to how it works at FB but I would imagine that there are a lot of limitations on what rank and file employees can do. I guess I could be wrong.

Cool, now read this: https://news.ycombinator.com/item?id=16675503

Any changes to your thoughts?


I hope so, but keep in mind Facebook has a magical password that worked for every account for almost a decade.


Source?



I think the previous commenter must have meant "had", as that article says the master password allegedly no longer worked as of sometime before its 2012 date.

Still egregious if that sort of early stage stuff hung around that long, but not the same as it being there today.


That password only worked from Facebook's corporate IP addresses.


Unfortunately the criminal charge would be theft of Facebook's proprietary information.


Facebook is not most companies, though.


Proper companies that have been around for a while, and expect to be around for a while, do this.

Companies that move fast and break things don't give a shit.

I want more companies of the first type and fewer of the second.


FWIW, as a Facebook engineer you have a ton of trainings on how to handle data privacy. And not only is every place where you can touch data actively logged/audited/monitored (this includes DB reads from code, admin tools, etc.), but to access any data you have to explicitly request permission for that specific data.


> FWIW, as a Facebook engineer you have a ton of trainings on how to handle data privacy. And not only is every place where you can touch data actively logged/audited/monitored (this includes DB reads from code, admin tools, etc.), but to access any data you have to explicitly request permission for that specific data.

Really? So are stories like [1] complete lies? Or does someone inside just blindly grant these "explicitly requested permissions"?

https://news.ycombinator.com/item?id=16675503


You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.

How else would you suggest to do privacy checks like these?
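
The workflow being described is essentially a ticketed grant with an asynchronous second look. A bare-bones sketch of that shape (all names invented):

    import uuid
    from dataclasses import dataclass, field

    @dataclass
    class AccessRequest:
        requester: str
        target: str
        reason: str
        id: str = field(default_factory=lambda: uuid.uuid4().hex)
        approved: bool = False
        reviewer: str = ""

    PENDING, AUDIT_QUEUE = [], []

    def request_access(requester, target, reason):
        req = AccessRequest(requester, target, reason)
        PENDING.append(req)
        return req

    def approve(req, reviewer):
        req.approved, req.reviewer = True, reviewer
        AUDIT_QUEUE.append(req)   # reviewed again later, after the fact

    req = request_access("eng1", "user:42", "debug issue #123")
    approve(req, "oncall-reviewer")

As the thread notes, the weak link is the human approving a plausible-sounding reason, not the plumbing.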


> You request access, and justify it with something like "I need it to debug issue #123". Someone manually oks/disallows it, and there's asynchronous reviews of these requests to double check. My guess is the intern lied about what they're using it for.

OK so an insider can just lie and access whatever they want. Heck, they can even tell the truth! Just find a bug that's exhibited in a particular profile and use that as an excuse to look at the profile.

> How else would you suggest to do privacy checks like these?

Whatever Google does. I don't know the details. But, for starters, my understanding is that their interns generally can't do what you just described, so fixing that would be one obvious step forward.


Facebook's data is very different from Google's. At Facebook you might have a bug that's related to how many thousands of different objects (and their specific properties) interrelate. How could you safely mock that out?
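
That said, the shape of a graph-dependent bug can sometimes be reproduced without real data by generating fake objects with realistic interrelations. A toy sketch of the idea (sizes and fields are arbitrary):

    import itertools, random

    _ids = itertools.count(1)

    def mock_graph(num_users=50, avg_friends=5, posts_per_user=3):
        """Fake users, friendships, and posts with plausible structure."""
        users = [{"id": next(_ids), "friends": [], "posts": []}
                 for _ in range(num_users)]
        for u in users:
            for v in random.sample(users, k=min(avg_friends, num_users - 1)):
                if v is not u and v["id"] not in u["friends"]:
                    u["friends"].append(v["id"])
            u["posts"] = [{"id": next(_ids), "author": u["id"],
                           "visibility": random.choice(["public", "friends"])}
                          for _ in range(posts_per_user)]
        return users

    graph = mock_graph()

The objection above stands, though: when a bug depends on one specific account's tangle of thousands of objects and properties, no generator is guaranteed to hit the same configuration.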


> Whatever Google does. I don't know the details.

Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so, until an FB insider replied and brought down your narrative.

Is it that hard to say "ok well, I stand corrected then" instead ?


> Oh come on. You admit having no idea what Google does either, but surely that must be better than Facebook because you said so

No, you are seemingly deliberately misquoting me. I said I "don't know the details", not "I have no idea". I know enough to feel fairly confident in what I've said. But if you don't believe me you're more than welcome to believe otherwise.

> until an FB insider replied and brought down your narrative. Is it that hard to say "ok well, I stand corrected then" instead ?

Stand corrected about what narrative? Everything I am (and hopefully also you are) reading right here [1] [2] [3] [4] [5] quite clearly says malicious employees can access user data, but will be fired if this is discovered, which is consistent with what I've said. (But don't actually bother replying if you want a response—I have no interest in responding after your comment.)

[1] https://news.ycombinator.com/item?id=16675664

[2] https://news.ycombinator.com/item?id=16675503

[3] https://news.ycombinator.com/item?id=16675649

[4] https://news.ycombinator.com/item?id=16675739

[5] https://news.ycombinator.com/item?id=16675968


So, you don't know how google handles this, but you are suggesting everybody should do what google does. Are you trolling?


He is not trolling. His core point is that no amount of training, or expertise, or monitoring, or punishment, or trying harder the 17th time you've been caught, is sufficient. If you are leaving the decision up to enough/too many humans, then you are by definition providing inferior security.

The real education from this story is far deeper than just Facebook. It is that Facebook employees, and Google employees, and all humans in general are susceptible to this very same "kompromat" concept, and are all susceptible to various forms of influence to greater degrees than our arrogance allows us to admit.

Human beings are attack vectors. Human beings are too self centered to do much about this in any meaningful sense. They can laugh the very idea away too easily.


Reminds me of an apocryphal story (I can't find a reference, but it appears to be reasonable): the FCC was investigating the sale of illegal TV satellite descramblers when they confiscated a unit. Upon investigation, it was found to have been manufactured by IBM! Further investigation revealed it was manufactured at a secure IBM facility used for top-secret ("need-to-know", etc.) type projects. The manager responsible had split the work up such that no single employee there knew what they were building (because they didn't need to know---they just knew enough to do their bit).

I know it's not the same, but this reminds me of that story.


[Edit: sorry, never mind. Thanks for the story!]


Unfortunately no. I spent several minutes trying to find a link, but could not find one. That's why I labeled it apocryphal.


On an only slightly different topic: about 15 years ago, there was a pretty healthy community of people distributing the circuit boards and accompanying software to program DirecTV smart cards. These would unlock all of the channels that "Dave was already beaming at everyone's house anyway", according to the in-group parlance used to absolve oneself of such things.

A decent part of that conversation seemed to center around how it seemed highly unlikely that the whole hack was even possible without insider information leading to the development of the tool in the first place.

Fifteen years later, knowing what hacks have at least been claimed to have been pulled off through social engineering, I think the more important takeaway is that we need to stop portraying the worst case of hacking as a masked man executing some Bond-villain-style hack, because that fundamentally recommends a terrible heuristic. It by definition casts aside all of the incompetence that is equally likely to cause harm and, by sheer volume, is the far more likely scenario to occur.


To be fair, at least at FB (I can't speak to Google or Apple or Amazon):

1. Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.

2. Privacy-related issues are escalated to the highest severity immediately (on par with data centers being down, etc.). I think the question in this whole debate is where you draw the line for this kind of issue, and what's an issue and what's a feature.


> Accessing someone's data when it's not mission critical to your work means you're fired on the spot. This is drilled into new engineers over and over.

This means they are capable of doing it and are merely punished afterwards, right? Not to mention that I would imagine getting fired in exchange for viewing private data could be quite a worthwhile 'transaction' for some people in some cases.


Wouldn't it be better to have a tool which automatically creates one or more profiles similar to those the dev needs for debugging purposes, BUT filled with fake data? That way the bug is reproducible, but the users' data is not accessible to the dev.


There's a tool for that, and it's certainly the preferred way to debug. Along with all the telemetry you get, for the vast majority of cases you don't need to touch anyone's data.


Correct. There are many stats relevant to the national discussion that a patriotic Facebook employee might leak. One is the effective CPM (eCPM) rate between the Trump and Clinton campaigns. My hunch is there was a massive disparity there, in favor of Trump. Facebook has only released the "paid CPM" rates, which is suspicious. Most Facebook advertisers look at eCPM, which combines paid + "organic" reach, in other words: the net reach per dollar spent.
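
For readers unfamiliar with the metric: eCPM in the sense used here is just spend per thousand impressions, with organic reach counted in the denominator, so heavy organic amplification drives the effective rate down. A quick worked example (all numbers invented):

    def ecpm(spend_dollars, paid_impressions, organic_impressions):
        """Dollars per 1000 impressions, counting paid plus organic reach."""
        return 1000.0 * spend_dollars / (paid_impressions + organic_impressions)

    print(ecpm(10_000, 2_000_000, 0))          # 5.00 -> paid reach only
    print(ecpm(10_000, 2_000_000, 6_000_000))  # 1.25 -> heavy organic boost

That gap between paid CPM and eCPM is exactly the disparity suspected above.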


Wait till there is a REAL data leak. Your Facebook profile data is just 1% of the data they have about you. Using the cookies they have all over the internet, as well as partnerships with offline POS transaction systems, they know almost everything you do online/offline. So all the websites you have ever visited, the things you buy online, the sandwich you buy with your credit card in a local store, etc. Imagine all that being leaked.


You're assuming it hasn't happened already - all we know for sure is we haven't (yet) had one with Snowden's type of motivation.

(Or perhaps we have, and whichever trusted journalists they've chosen to share with are frantically poring over the exfiltrated data, working out how best to angle the story without throwing the whistleblower and/or innocent FB users under the bus...)


Who says it hasn't happened already? How would we ever know? For that matter, how would Zuck?


Anecdotally I've heard of interns getting fired for just looking at profiles (that they aren't actual friends with) even around 5 years ago. So at least they take it somewhat seriously.


> Anecdotally I've heard of interns getting fired for just looking at profiles (that they aren't actual friends with) even around 5 years ago. So at least they take it somewhat seriously.

So the customer's privacy got violated, because interns had blanket access to private customer data. To me that's very much not taking security seriously.


I interned at FB a few years ago. Any engineer, intern or not, can access production data. Day one, you set up an instance of FB on your dev server that you can mess around with, and it's connected straight to the prod DB. You're able to view anything you want, but they're very adamant that they monitor what you look at.


"its connected straight to the prod db"

I can't believe what I am reading. Why is that? Why use customer data for dev purposes. Why not work on some mock data?


I ask out of curiosity: if you have a P1 escalation due to an issue that is reproducible only in the production environment, but not in your test environment with mock data, how do you plan to troubleshoot it?


Yes, but you have to explicitly request data every time you access anything. IDK what it was like when you interned, but that's what it's like today.


That was not the case when I was there, and it wasn't all that long ago.


You and esman1 both could be right. I work at a company of similar size and sophistication as Facebook. Sometimes whether or not you have access to production data by default depends on which team you work for.


> to me, this is another silently ticking time bomb.

I agree. It'll eventually happen to some social app or email provider (think Slack, Gmail, Facebook, etc.) where some huge portion of the database is dumped online -- not through a hack, but through a person willing to do it internally because they can and do not fear or care about the consequences. The Ashley Madison hack was a preview of what's to come.


I would imagine we would also hear a lot less about an internal issue involving a Facebook employee than about an external one.


"Our efforts to protect our company data or the information we receive may also be unsuccessful due to software bugs or other technical malfunctions, employee error or malfeasance, government surveillance, or other factors.

"In addition, third parties may attempt to fraudulently induce employees or users to disclose information in order to gain access to our data or our users' data."

"Although we have developed systems and processes that are designed to protect our data and user data and to prevent data loss and other security breaches, we cannot assure you that such measures will provide absolute security."

"In addition, some of our developers or other partners, such as those that help us measure the effectiveness of ads, may receive or store information provided by us or by our users through mobile or web applications integrated with Facebook. We provide limited information to such third parties based on the scope of services provided to us. However, if these third parties or developers fail to adopt or adhere to adequate data security practices, or in the event of a breach of their networks, our data or our users' data may be improperly accessed, used, or disclosed."

Source: MD&A, 2015 Facebook annual report


To anyone who actually reads annual reports on a regular basis, this is copy/paste boilerplate that appears in basically every single tech company's annual report, every year.

There are like 50 pages of this stuff that cover literally every possible scenario in case of legal liabilities. It has no meaning whatsoever.


Honestly that is all just legal boilerplate that could be found in the annual report of any public internet business.



