Hacker News
FTC Imposes $5B Penalty and Sweeping New Privacy Restrictions on Facebook (ftc.gov)
414 points by vonmoltke on July 24, 2019 | 210 comments



A lot of the terms of the settlement deal with organizational changes, but when you look at the very specific privacy restrictions that Facebook agreed to, it's really, really problematic stuff:

* Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising;

* Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users;

* Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services.

This might, at first blush, seem like a win for users, but consider it from a different perspective.

When a settlement involves promising not to do a bunch of really, really bad stuff, that typically means the settlement doesn't have much bite. It's comparatively easy for a company to agree not to do egregiously bad things, and comparatively harder for it to agree not to do things that are still problematic, but less egregious.

These things that Facebook has agreed to not do are way outside the boundaries of ethical privacy practices -- but there's still a ton of gray area that remains unaddressed.

So from Facebook's perspective, this is probably a big win. The settlement allows the company to give the appearance of taking concrete steps to respect users' privacy. But what it might actually mean is that there were many other less egregious privacy violations that it was able to slip past the FTC's judgment.


> prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising

Facebook is not alone in misusing phone numbers given for 2FA. LinkedIn explicitly requires a phone number to be added to your profile to enable 2FA and makes that number visible by default to all your contacts. If you don't want your phone number visible, you have to lose 2FA, since LinkedIn doesn't support authenticator apps or other alternative 2FA methods (Facebook does).

I discovered this after I enabled 2FA on LinkedIn: I started receiving messages on WhatsApp from random people, whom I later found to be my LinkedIn contacts.


Yesterday I went to set up 2FA for MongoDB and entered my phone number in the form but then realized I could use Authy so I never hit save/submit. I’ve already gotten two text messages from them even though my account shows my phone number as “Not Set.”

It really made me wonder what other shady stuff they might be doing with my data and my customers' data.


The same thing happens on many retail websites when you choose to check out as a guest or begin account creation and cancel. I automatically boycott any website that follows these dark patterns.


What's the /dev/null of phone numbers?


Occasionally I'll send the most relentless phishermen the number of a local FBI office


4158675309


Rejection Hotline (605) 475-6968


MongoDB does this? Does that affect how trustworthy they are for HIPAA-compliant services?


Based on that phrasing, I feel like they could just reword the way they ask for the number. Instead of "Enter your phone number to enable 2FA", it would say "2FA cannot be enabled without a phone number associated with your profile" and leave it at that. Then the profile page where you enter your phone number would just list various "benefits" of giving them your phone number: "friends can use it to find you", "faster support times", "ability to enable 2FA". Then they can claim they don't know that the reason you added the number was just for 2FA.


That’s a smart point. Yes they could probably do that.


I'm not surprised to hear this at all. LinkedIn is probably the scummiest tech company of reasonable size, and always has been. Facebook only gets more attention because they're much bigger and more powerful, so people notice their constant toeing of the boundaries while ignoring LinkedIn blithely leaping across them.


Your comment prompted me to check my LinkedIn 2FA and it turns out that they now support TOTP apps (they list Authenticator App as the option). I believe it is very new though.


You are correct. An authenticator app option is now available. I used to check every month, but I didn't check before writing this comment, which resulted in stale information.


Where do you see that? I still only see the option for SMS 2FA.


On desktop I saw it under Settings & Privacy -> Two-step Verification after clicking "Change verification method" (previously had SMS setup).

Here is a direct link: https://www.linkedin.com/psettings/two-step-verification


Odd, on my desktop view, there is no option to change the method. It's SMS or nothing.


Omg this is gross. I definitely do not want random recruiters having my phone number...


Definitely not. Google does (did?) it, too. It may have stopped when GDPR went into effect.


Did Google show the phone number to contacts when 2FA was added?


Some of the organizational changes are substantial. Having an independent privacy force review all existing and new product features for privacy implications can be a huge drain on velocity, especially if they demand a lot of changes.

But really, this is how organizations should be run: there should be a user-privacy-focused resource inside the company that can work with product to guide how features get implemented. (The same goes for accessibility, technical feasibility, and other things that product people aren't necessarily measured against.)


>Facebook is prohibited from using telephone numbers obtained to enable a security feature

This is exactly what made me quit Facebook a while back. They started spamming me with notifications when I only wanted to add a backup number.


Pushier than a drug dealer.

Oh wait, they're dealing a kind of drug too...


There's the pesky implication that Facebook was perhaps doing all that stuff. Making them stop is a win for users.


It definitely was.

- Asking for email password: https://arstechnica.com/information-technology/2019/04/faceb...

- Targeting ads using phone numbers obtained through two-factor authentication: https://www.engadget.com/2018/09/28/facebook-two-factor-phon...

- Improper use of facial recognition: https://www.npr.org/sections/thetwo-way/2018/04/16/603056662...


It's not an implication; it's mentioned in the article:

>the FTC alleges that Facebook violated the FTC Act’s prohibition against deceptive practices when it told users it would collect their phone numbers to enable a security feature, but did not disclose that it also used those numbers for advertising purposes.

>The FTC also alleges that Facebook misrepresented users’ ability to control the use of facial recognition technology with their accounts. According to the complaint, Facebook’s data policy, updated in April 2018, was deceptive to tens of millions of users who have Facebook’s facial recognition setting called “Tag Suggestions” because that setting was turned on by default, and the updated data policy suggested that users would need to opt-in to having facial recognition enabled for their accounts.


> So from Facebook's perspective, this is probably a big win.

$5 billion is a lot, but Facebook is huge. Even considering their size, it's probably not pocket change for them, but they are getting a benefit from it for sure. They are buying public opinion.

"Look at us... we took a big fine and agreed to respect your privacy... you can trust us now." It's essentially a very expensive ad, used to rebuild trust with their cattle... I mean, users.


The independent party for privacy review is a much bigger deal than the fine. This is out of Mark Zuckerberg's control now; they will be audited constantly for any product changes they make.

"As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy. The designated compliance officers must generate a quarterly privacy review report, which they must share with the CEO and the independent assessor, as well as with the FTC upon request by the agency."


At FB level, this is already a requirement for GDPR. The only difference is that it must be shared with an independent assessor rather than held strictly internally.


That seems like a pretty huge difference.


It's not that big. FB gets to choose the assessor.


"Fined $5B by regulators" doesn't usually have the PR effect of making a company seem trustworthy. It gives the company the reputation, validated by regulators in the eyes of the public, that they were caught doing something egregious and will likely do it again if they think they can get away with it.

I've never heard someone express the point you're making: can you think of any example where the PR impact turned out that way?


The biggest thing that makes the $5 billion fine a bargain:

The FTC settlement indemnifies Facebook for "any and all claims prior to June 12, 2019"


Does that mean claims already raised by then or claims pertaining to actions earlier than then? The former seems like a bad idea and the latter seems like it shouldn't even be possible.


The main goal of the settlement is to limit future egregious acts by the organizational changes and not by specific prohibitions. The FTC knows that specific prohibitions will never cover all past and possible future acts.

The only reasons these specific prohibitions are spelled out is because Facebook is known to have specifically performed these acts in the past.


> So from Facebook's perspective, this is probably a big win.

It does feel like they confessed and repented a few spectacularly egregious sins and bought a $5 billion indulgence for the rest.


Seems like this is a win for consumers too if they adhere to the conditions, no?

If they agree to not do bad things, then do the bad things, seems then we have a bad actor.

I doubt that any individual or lawsuit could effect this kind of change.


I like this trend of using agency settlements to force organizational changes of companies that have no way of changing the organizational hierarchy

they should get even stricter

force them to relinquish the absolute control of the founder/ceo, enable voting rights and more

investors are undiscerning and exchanges are not creating ultimatums for listing, so these founder privileges are only revoked after they mess up really badly, which is cool


Plus they'll continue to do all these things with users from other countries.


I agree it's not enough, but it is something. They can still be fined again for less egregious things; the potential risk of an even bigger penalty still looms.


So basically a defanged, company-specific halfway-there poor man’s GDPR. Still a win though.


It seems like a pretty big repeat of the 2011 settlement. That's also when big statements such as "Facebook's privacy practices will be monitored for the next 20 years by the FTC" proved to be absolutely meaningless.

Whatever privacy review Facebook was undergoing, roughly once a year, was done by a private firm that Facebook could pick and had to pay itself. So is it any wonder that those so-called privacy auditors have found nothing wrong with Facebook in 8 years, even as the privacy scandals and Facebook's apologies kept piling up in the media?

These types of settlements are a joke. There's zero doubt in my mind Facebook already has a plan for how to work around these restrictions and do 90% of what it's already been doing anyway.

As a side note, does anyone still believe Facebook was genuine about collecting female users' naked pictures "to protect them from revenge porn"?

They've already lied about collecting the phone numbers and facial recognition as being used for security. And those were obvious lies to me from day one, too. Pretty much every new feature Facebook launches that seems good for users has an undertone of "...and this is how we'll track you now."


There is a lot of misunderstanding about how the FTC regulatory process works in cases like this. Outside of securities and other financial instruments, the FTC has very little power to impose legally binding fines on companies.

What they do is issue settlements. Facebook agreed to this fine, and to the conditions surrounding it, because if it didn't, it might face hard, law-based regulation from Congress.

The issue here is that the FTC process relies on good faith from the companies it settles with, and Facebook has repeatedly shown itself to be a bad-faith actor in this regard (which is why it's a $5 billion fine to begin with).

The only way this is going to end is with very onerous regulation from Congress.


It's completely ridiculous that the FTC is in a position where it has to impose restrictions like this on an ad-hoc, case-by-case basis, in this case specifically on Facebook. Maybe I'm being naive, but if these activities are violations, there should be no need for Facebook to agree to abide by the restrictions; it would have to anyway. Alternatively, if these are restrictions specific to Facebook, doesn't that imply it's OK for other services to do these things?

The GDPR has its flaws, but at least Europe has bitten the bullet and faced up to its responsibility to protect consumers and rein in privacy abuses. It's about time US lawmakers got their act together.


We should separate what something does from its intended effect when having these discussions. Laws exist on the books, laws can be added, etc., but that is unrelated to enforcement, credibility, and continued sector/business growth.


> So from Facebook's perspective, this is probably a big win.

Being fined roughly a quarter of annual profit ($22.111B in 2018) is probably not "a big win".

But yeah, the business terms of the settlement are surely inspiring a champagne toast back at HQ.


Does it bother anyone else that Facebook is being fined $5 billion while Equifax, a company with a history of anti-consumer activity and privacy violations going back decades, the company that Congress was forced to enact the Fair Credit Reporting Act to contain, and one that gave away the personal information of over a hundred million people through its own negligence, was fined a mere $650 million? Do you get the idea that this is less about consumers and more about Zuck blowing off Congress?


Came here to say this! I still can't believe credit bureaus are allowed to even exist. At no time do I give them explicit permission to collect my (very) personal information, there's almost literally no way I can opt out (short of not participating in the modern financial system), their interpretation of this data has an outsized influence on my quality of life, AND they have proven themselves incompetent at protecting this data!

If I'm worried about Facebook's use of my data the answer is simple: stop using Facebook.


>If I'm worried about Facebook's use of my data the answer is simple: stop using Facebook.

Don't forget to stop using every website that embeds facebook buttons (or block the scripts). It's not exactly easy to prevent facebook from gathering information either.


Thankfully it's pretty easy to use something like uMatrix to block all requests to their domains. And I believe with a few extra lists toggled on the settings page, uBlock will also probably block requests to all or nearly all of their tracking/ad domains.
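At its core, what these blocking extensions do for network requests is suffix-match each request's hostname against a blocklist. A rough sketch of that check, assuming a tiny made-up domain list rather than uBlock's actual filter sets:

```python
# Toy sketch of the domain-suffix check a content blocker performs.
# The blocked domains here are illustrative examples, not a real filter list.
from urllib.parse import urlparse

BLOCKED_SUFFIXES = {"facebook.com", "facebook.net", "fbcdn.net"}

def is_blocked(url: str) -> bool:
    host = urlparse(url).hostname or ""
    parts = host.split(".")
    # Check the host itself and every parent domain against the blocklist,
    # so subdomains like connect.facebook.net are caught too.
    return any(".".join(parts[i:]) in BLOCKED_SUFFIXES for i in range(len(parts)))

print(is_blocked("https://connect.facebook.net/en_US/sdk.js"))  # True
print(is_blocked("https://example.com/page"))                   # False
```

Real blockers layer much more on top (cosmetic filtering, first-party exceptions), but domain-suffix matching is the part that stops the tracking requests.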


Firefox's Facebook Container extension and the Privacy Badger extension make this easy to do.


"Easy". The real problem is that most people don't even know it is necessary to do. Opting out of Facebook's data collection is easier than opting out of the credit agencies' but it is decidedly more involved than just not using their products.


Sure, but in the context of this conversation, googling "stop Facebook from tracking me" and installing a couple extensions is infinitely easier than the solution to the abuses of credit agencies, which is... what, opting out of a significant part of the economy?


Just log out and wipe your cookies/localStorage.


This is insufficient; they fingerprint your devices and collect data on your habits even if you are logged off (not to mention cross reference with their other services). Use effective blocking extensions with up to date blocking lists to block their endpoints entirely, and then get on your knees and pray your friends and family don't share too much information about you without your consent.


You did give them explicit permission by participating in the modern financial system.

That's why I keep saying all the hypothetical privacy concerns with Facebook's Libra are already reality in the current financial system. Equifax also didn't get fined because of what they were doing, but because they were negligent in handling the data.

Having said that, given the importance of consumer debt in the US economy, credit bureaus are indispensable. They aren't going away any time soon.


You say that as if its already happening makes it okay. Libra is getting hounded because the flaws are visible, and frankly because people are suspicious of Facebook now. But the credit agencies doing similar things that we consider unethical, just hidden and operating in the background, doesn't make it right. I remember my mother telling me as a child, "just because someone else does something doesn't mean you should." I'm pretty sure many mothers told many children that, but it seems many forgot.

What is happening to Facebook can be good for everyone. Because if the FTC cracks down on them then it sets a precedent to crack down on the credit agencies. Clearly people will go after them if that happens.

Sure, this should never have been an issue, but just because someone else is currently doing something doesn't mean that it ever was the right thing to do. Even if legal. Let's stop this BS "but they do it" and change the conversation to "they're doing it to, let's make sure they also are stopped."


> You say that as if it already happening makes it okay.

I'm not saying "it's okay". It is what it is. I'm saying we're applying a double standard. It's like complaining about lighting a match in the middle of a wildfire.

> Because if the FTC cracks down on them then it sets a precedent to crack down on the credit agencies.

That makes no sense to me. They can crack down on Libra for any number of reasons. They're clearly okay with what the credit bureaus are doing right now, so why wait for Facebook to come along and do the same?


> It's like complaining about lighting a match in the middle of a wildfire.

It's like complaining about lighting a match in a dry area while there is a fire somewhere in California.

FTFY

The analogy is bad because the match in the middle of the forest fire won't affect the fire; you're already engulfed in flames. What we're talking about is two different sectors. They are disjoint (unlike in your analogy). But you can be in Oregon while it is dry, see that someone started a fire in California with a match, and say to your friend, "hey, be careful with that match. That's how the fire started in California. We don't want that to happen here."

> That makes no sense to me.

1) Just because it doesn't make sense to you doesn't mean it's incorrect.

2) Law works heavily off of precedent, so one ruling affects new ones that come in. Courts try to be consistent.

3) Just because people/laws are "okay" with something now doesn't mean it can't or won't change. In fact, I'd argue that's why new laws get made.


> What we're taking about is two different sectors. They are disjoint (unlike your analogy).

I obviously disagree. It's essentially the same sector (consumer finance). Some of the involved companies (Visa, Mastercard, Paypal) are core players in that sector.

> 1) just because it doesn't make sense to you doesn't mean it's incorrect.

Indeed, it just means you failed to connect premise, analysis and conclusion to meet my personal standards for a sound argument. It might be incorrect too, but I'm not in the position to demonstrate that.

> 2) Law works heavily off of precedent, so one ruling affects new ones that come in. Courts try to be consistent.

That argument works the other way around too: It would be inconsistent to turn a blind eye to credit bureaus for all this time, but now that Facebook comes along, suddenly it's a problem? Why? Just because it's Facebook?

> 3) just because people/laws are "okay" with something now doesn't mean it can't or won't change. In fact, I'd argue that's why new laws get made.

Again, this argument works both ways. Just because people aren't okay with something doesn't mean it can or will change. You are not adding any new information to your argument.


>You did give them explicit permission by participating in the modern financial system.

That sounds like implicit permission at best.


This has the same flavor as the folks who say, "If you don't like some law XYZ, just move to a different country!"

If you don't want Equifax to have your private information, just don't participate in the modern financial system!


> You did give them explicit permission by participating in the modern financial system.

This at best would qualify as implicit consent. Which is very different and not sufficient. There should at the very least be a big button you have to press for every agreement you enter into that shares data with credit bureaus that clearly states that you: 1) consent to sharing all of your transaction data with credit bureaus 2) are aware of the various ways the credit bureaus might use your information 3) are aware of the potential consequences in practical terms of how this usage might impact your life 4) are informed about the way the data is stored/who has access to it/what your rights are surrounding it

I would also like to see GDPR-style "right to be forgotten" laws that apply to this data.


Coincidentally, since you brought up Libra: I actually think one of the biggest bull cases for crypto in general (not necessarily Libra) is solving this exact problem. You have an immutable record of all of your transactions that is not explicitly tied to your identity (just a public key). The role of a credit bureau when you apply for credit then would be to ask you to prove ownership of (one or many) public key(s) that they can then run algorithms over to assess your creditworthiness. They don't have to custodian the data at any point. They can be compelled to forget that you are the owner of a particular public key. The end user has full control over how their data is used. And creditors are still able to assess creditworthiness. Win-win-win-win.
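The "prove ownership of a public key" step is a standard challenge-response signature: the bureau sends a fresh nonce, the applicant signs it, and the bureau verifies using only the public key, never holding any custodial data. A minimal sketch of the protocol shape, using textbook RSA with tiny, deliberately insecure parameters purely for illustration (a real system would use Ed25519 or ECDSA; all key values and function names here are hypothetical):

```python
# Toy challenge-response proof of key ownership, using textbook RSA with
# tiny, insecure parameters purely for illustration (n = 61 * 53).
import hashlib
import secrets

N, E = 3233, 17      # "public key": modulus and public exponent
D = 2753             # private exponent, known only to the applicant

def respond(challenge: bytes) -> int:
    """Applicant signs the bureau's challenge with the private exponent."""
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(m, D, N)

def verify(challenge: bytes, signature: int) -> bool:
    """Bureau checks the response using only the public key (N, E)."""
    m = int.from_bytes(hashlib.sha256(challenge).digest(), "big") % N
    return pow(signature, E, N) == m

nonce = secrets.token_bytes(16)        # fresh challenge from the bureau
assert verify(nonce, respond(nonce))   # ownership proven without custodied data
```

The point of the fresh nonce is that a recorded response can't be replayed later; the bureau learns only that the applicant controls the key tied to the on-chain history, not the private key or any transaction data.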


> This at best would qualify as implicit consent.

It doesn't, I'm just using your umbrella term of "participating in the modern financial system". Read the terms of service you agreed to for the various financial services that you use. You'll find something akin to "we can share (aggregates of) your data with our partners". This is explicit consent.

> There should at the very least be a big button...

Fair enough, but I don't think it would really change anything. After all, what's the alternative to "participating in the modern financial system"?


>If I'm worried about Facebook's use of my data the answer is simple: stop using Facebook.

This doesn't stop them using your data obtained from other users.


Yes, this is insane. Facebook didn't even have a security issue. They just allowed my friends to export the fact that I listen to the Goo Goo Dolls (crazy how the media spun this). Also, I signed up for Facebook; Equifax, on the other hand, keeps data on me without my permission. They had a real data breach, and they lost important data, not just Facebook likes.


That's downplaying it a bit.

FB allowed the exporting of all sorts of data (your relationship status, demographic and contact information, information on employers, your interpersonal relationship graph) to companies who used it to mine information on you without your explicit knowledge. You could refuse them access, but as soon as one of your friends allowed it, your refusal became moot.

Then it built a whole bunch of tooling and ecosystem around this industry. Then it made a lot of money from it.

Then it played dumb. "We didn't know", "We don't allow it, even though we built tools for no other purpose than to facilitate it", and so on.

A little more nefarious than "your friends can tell someone you listen to this band".


The fine is $5 billion because Facebook flagrantly violated the 2011 agreement with the FTC. As bad as Equifax is, it hasn't essentially gone "so, what are you gonna do about it?" to the regulators.


What is the proportional hit, though? You need to compare the ratio of the fine to the company's net worth, and also to its annual revenue and profit margins.


It should be based on the number of users impacted and the average impact to each user. Revenue and margin should not be the primary driver. We don't sentence a murderer or thief based on their net worth (if anything, it's tilted the wrong way).


There is a greater impact to society than measurable direct impact to users. Facebook should be liable for their total net externalities, divided by the probability of being caught for such actions. That is the economic premise of the FTC. This should include fines and remedies for anti-trust violations as well, and since this is an opportunity cost, it does scale with the size of the company.

In fact you should probably subtract the direct costs to users, as there is already a mechanism in place for collecting damages, and any government intervention on this axis should go to something reminiscent of a victim's fund.
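The premise above ("externalities divided by the probability of being caught") reduces to simple deterrence arithmetic: if a violation yields gain G and is detected with probability p, the expected penalty p * F only outweighs the gain when F > G / p. A quick sketch with entirely made-up figures:

```python
# Deterrence arithmetic: a fine deters only if it exceeds the gain from the
# violation divided by the probability of being caught. Figures are made up.
gain = 2.0e9       # hypothetical profit from the violation, in dollars
p_caught = 0.25    # hypothetical chance regulators detect and act on it

min_deterrent_fine = gain / p_caught
print(f"fine must exceed ${min_deterrent_fine / 1e9:.0f}B to deter")  # $8B
```

This is why a fine scaled only to direct user damages can still be rational to absorb as a cost of doing business when detection is unlikely.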


This makes no sense. Violating the law should not be a business expense, it needs to hurt. If Facebook was a person the remedy might be jail time, but you can’t imprison a business.


Some jurisdictions do (in Europe, for example).


Just because a company is smaller, should the punishment be lower? The impact of the two leaks on the average Joe is very different, and the sizes of the penalties have it reversed.


Look at it from a different angle. The fine to the company should be big enough to make it painful to continue the behavior in question. As such, the bigger the company, the bigger the fine.


It bothers me that there are worse enemies.

I'm still happy to see our lesser enemies curtailed.

Just as it bothers me that there are kids starving in Africa, I still donate to my local food bank. A worse evil doesn't mean we can't improve on other evils.


Sure but it does indicate an imbalance in the justice system.

Remember when a gram of crack got you the same prison time as 100g of cocaine?


Could there be even better and widespread reform? Objectively, yes.

Is this a step in the right direction? IMO, yes.

To be clear, I dislike the FTC. But I believe power is corrupting in itself, so ipso facto the powerful will be corrupt or at least soon corrupted. That's why I want to see checks and balances. The guards of the guards... all lined up in a circle ideally. We aren't there, agreed. But I'll take any step toward that as a win.


Intent matters. Equifax's breach was a breach - they were attacked. They didn't intentionally post personal information publicly. It was behind a really shitty lock, but it was behind a lock. Facebook's was not, they knowingly & willingly did the things they are being fined for doing.

It bothers me that Equifax exists and is as shitty as it is. But it doesn't bother me at all that intentional maliciousness is punished harsher than incompetent negligence. The degree of the crime matters, too, of course, but this wouldn't be entirely dissimilar to the idea that homicide is a harsher sentence than involuntary manslaughter.


Nope. Fines are also based on a company's capacity to pay, unless the idea is to shut down the company and put most or all of its employees out of work. Fines should be big enough to cause pain but not cripple the company's ability to operate.


Equifax shouldn't operate at all, especially after all that. At least with Facebook, for the most part, you willingly give up your information. You can't do anything without data getting submitted to Equifax.


One involved mishandling of user data, the other involved misuse of user data.


Thinking of this in a proportional sense is the wrong comparison. It's a settlement - therefore it needs to be compared to how much damage a non-settlement was going to cause FB.


The head of the FTC voted against it and gave his reasoning here:

https://twitter.com/chopraftc/status/1154010756079390720

One that stood out for me is complete immunity for the execs for "known" and "unknown" violations:

"Mark Zuckerberg, Sheryl Sandberg, and other executives get blanket immunity for their role in the violations. This is wrong and sets a terrible precedent. The law doesn’t give them a special exemption. The settlement fine print gives Facebook broad immunity for “known” and “unknown” violations. What’s covered by these immunity deals? Facebook knows, but the public is kept in the dark."


Rohit Chopra is not the head of the FTC: he's one of the five FTC commissioners [0].

[0] https://en.wikipedia.org/wiki/Federal_Trade_Commission#Curre...


You're right. Thanks. I misunderstood what "Commissioner" meant.


"Facebook knows, but the public is kept in the dark."

What stood out for me were the two dissenting statements by Commissioners Chopra and Slaughter.

https://www.ftc.gov/system/files/documents/public_statements...

https://www.ftc.gov/system/files/documents/public_statements...

Chopra disagreed with the decision not to charge Zuckerberg and Sandberg, and noted that in the Cambridge Analytica case the former CEO was charged.

Slaughter believes that there was sufficient evidence to name Zuckerberg in a lawsuit.

It is possible that Facebook fears Mark Zuckerberg being subjected to court-ordered discovery and put under oath. He came very close in a shareholder lawsuit some years ago and Facebook settled on the day before he was to testify.


Law enforcement leverages social media in a major way. They have customized software that interfaces with all the major environments and ties them all together.

It makes their jobs (LEO) much easier. They are not going to shit where they eat.


Yep.

Those are exactly the violations I figured they were getting immunity from. I mean, let's be real, they're not going to undo any of that, and they likely want to make sure that any new social networks set up the same facilities for them.


No different than the broad immunity granted to legislators and top executives in government who authorize and oversee the worst abuses of government.


this needs to be at the top


I don't think I agree with Facebook getting a bigger fine than Equifax. I put my own information on FB. Equifax compiled information on me without my explicit consent. Both sets of data got into hands they shouldn't have.

I realize that misses FB willfully doing what they did and Equifax not intending to be hacked, but for me my consent is equal or more important than what the holders of the data did after they had it.

Let's say I send a saucy picture to a friend and s/he shoots it all around to show off his/her banging bodied boyfriend (that being the "profit" factor). Now let's say a stranger finds my lost phone, gets the picture off the SD card with intent to use it for some personal gain, and through some series of events that gets leaked. Who did worse, the friend sharing the data I gave to them, or the stranger who got it without permission? Missing from this is Equifax had a lot more sensitive information than I put on FB.


> I don't think I agree with Facebook getting a bigger fine than Equifax.

The issue is Equifax not being fined adequately. Let's maybe not use that as the bar?


I don't disagree and didn't mean to imply that.


Are you assuming here that Equifax could absorb any size of fine?

Also, you seem to be OK with the idea that any company that gets hacked could be fined $5 billion. As being unhackable is an unachievable standard, that effectively means the FTC could bankrupt almost any company on a whim. That would be huge power in their hands and would not magically stop exploits from happening.

I think there's a big question about whether Equifax should have been fined at all. It would appear to either require mass inconsistency by regulators, or would put most US companies that rely on IT out of business.


I think you have a tendency to assume a little too much about what others think based on one sentence.

> Are you assuming here that Equifax could absorb any size of fine?

Who said any size of fine would be appropriate? There is a wide range of possible fine sizes between $500M and the max they could absorb.

> Also, you seem to be OK with the idea that any company that gets hacked could be fined $5 billion

Where did I say that? I am OK with Facebook getting fined $5B in this context; that doesn't tell you anything about other hacks and other companies. I am also a little bit reluctant to call Facebook's case a "hack".

> I think there's a big question about whether Equifax should have been fined at all. It would appear to either require mass inconsistency by regulators, or would put most US companies that rely on IT out of business.

You should maybe think outside of the tech bubble for one second? What you find apparently unthinkable is already in place in many other industries. Do you think there will be no repercussions for Boeing's crashes if they were caused by carelessness? What do you think happens if an engineering company builds a bridge and it collapses because of a design mistake? Yet mistakes are human, right?

Private data is something that should be protected. It is not as important as human lives, but it is very important. If you build a business around handling users' private data but can't be bothered to properly protect it, then yes, you should get fined heavily or even put out of business.

And just like in engineering there should be investigations into what happened to determine how much of it was pure carelessness and how much could not have been realistically prevented. In the case of Equifax, they didn't even bother applying security patches to their external facing software.


The airline industry is heavily regulated and the software industry isn't. Will Boeing be punished if they're found to have made a mistake? Politically it's an absolute certainty; legally, I presume it'd depend on whether there was an element of knowing to it, or whether all parties genuinely believed they were doing the best thing for safety.

But despite how tempting it is to punish people who make mistakes, it's generally understood that incompetence is not illegal and should not be. Criminalising incompetence just makes everyone a criminal and hands absolute power to prosecutors and regulators: a scenario warned against many times by students of history.


> Who did worse, the friend sharing the data I gave to them, or the stranger who got it without permission?

I honestly don't know which answer here is supposed to be the obviously correct one.


I feel like the parent is suggesting the theft is the worse one. But I feel like almost everyone would agree that the friend, who you willingly gave the picture to, is the worse person, because they betrayed your trust.

I just don't get the analogy.


I think the idea is supposed to be that you have some agency when you share your data with a friend. You can choose whether you want to entrust your data to this friend or not, and the onus is on you to properly vet them for transparency and trustworthiness. Conversely, if a stranger leaks your data, that isn't on you at all (beyond you allowing the stranger access to your data, but that's stretching the metaphor a bit). I don't find this argument particularly compelling though, because the typical Facebook user definitely didn't know what they were getting into when they signed up years ago. I find it hard to fault people for improperly vetting entities that are beyond their area of expertise, and I don't think this mistake on the user's part makes Facebook's practices any less egregious.


There's a real question about consent here. I know a lot of techies who don't understand ML that well (not even better than the general public). I saw a comment in another forum: "ML is like QM, if you think you understand it you don't," and I think that's fairly accurate and Feynman would have loved it. So

1) how can you give proper consent if it takes a great deal of expertise to understand what you're getting yourself into (never mind that complexity doesn't come across, but that's a good question too. About conveying complexity).

2) can consent even be given if we can't be fully informed? Or rather how informed do you need to be to give consent?


Facebook also tracks and keeps profiles of people that are not users of the service, have not given their consent, and have not willingly intended to provide any information to Facebook.


Consider also that Facebook's current market cap (the value of the company according to the stock market) is approximately 34 times the current market cap of Equifax ($575 billion vs $17 billion).


As a Canadian I'm frustrated our agencies aren't seeking compensation on behalf of affected consumers or taking more aggressive steps to hold Equifax accountable.

Frankly, I'm surprised and disappointed nobody here has started a class action to go after both Equifax, as well as the companies who shared my data with them without my explicit consent.


I think regrettably, y'all aren't big enough to make it happen, and doubly so with a protectionist president in the US. Being part of the EU I think is a big win for European countries on this front -- always stronger as part of a gang.


Facebook generates a shadow profile on you without your consent.


All credit agreements I have read contain sections specifically allowing reporting of information to credit agencies, so you have in fact explicitly allowed collection of your information.


That's irrelevant. Facebook's stuff is their TOS also. The point is that at some point the TOS is unconscionable.


> Equifax compiled information on me without my explicit consent

I'm not disagreeing with this, but my understanding is that in the UK I'm always asked explicitly before sharing my data with credit agencies, and have been as long as I can remember?


Even if you are protected by a regulation that requires that permission, would anyone be surprised to discover that firms like Equifax have failed to obey that regulation? They constantly fail to do required things that don't happen to contribute to profits.


> I put my own information on FB. Equifax compiled information on me without my explicit consent.

Equifax compiled that information from vendors very similar to Facebook.


"Facebook CEO Mark Zuckerberg and designated compliance officers must independently submit to the FTC quarterly certifications that the company is in compliance with the privacy program mandated by the order, as well as an annual certification that the company is in overall compliance with the order. Any false certification will subject them to individual civil and criminal penalties."

This is actually the big thing here. Any disclosure or breach of the data makes Mark Zuckerberg personally liable.

We'll see if that gets Facebook's attention at all.


Someone I work with found this:

The interesting part is in the write up from The Verge (https://www.theverge.com/2019/7/24/20707013/ftc-facebook-set...):

> "The settlement’s $5 billion penalty makes for a good headline," FTC commissioner Rohit Chopra wrote in his dissent. "But the terms and conditions, including blanket immunity for Facebook executives and no real restraints on Facebook’s business model, do not fix the core problems that led to these violations."

$5 billion of cost with shareholders' money. Not bad.


Did you read the entire FTC write up? Facebook essentially doesn't have control over how they handle privacy data anymore. I'd argue the fine is the least damaging aspect.

"As part of Facebook’s order-mandated privacy program, which covers WhatsApp and Instagram, Facebook must conduct a privacy review of every new or modified product, service, or practice before it is implemented, and document its decisions about user privacy. The designated compliance officers must generate a quarterly privacy review report, which they must share with the CEO and the independent assessor, as well as with the FTC upon request by the agency."


Reports. Not approval.

Facebook has to _document_ what they're doing in the privacy world. They have to prepare a report of what they did, and be willing to share it, and accept the consequences (which they already effectively do, just more opaquely).

At no point does the compliance officer have to _approve_ "how they handle privacy data any more", nor the independent assessor, nor the FTC.

They still have complete control over how they choose to behave and steer. They just have to report on it.


If they really "fixed" the core problems at Facebook, Facebook would have to cease to exist. Their business model from the very beginning has been based on collecting highly intimate personal information from everyone and using it for Zuck's gain.

Facebook ceasing to exist is probably not a bad idea, but it's rather unrealistic.


It'll probably exist forever, but the more we expose FB's rampant wrongdoing, the fewer people will use it. The young have already moved away from it; once older generations die off, FB will start dying off as well... if only they weren't allowed to buy all their competitors.


People here missing the big win for Facebook, which is retrospective:

The FTC settlement indemnifies Facebook for "any and all claims prior to June 12, 2019"

https://twitter.com/ashk4n/status/1154027557055975424


Is this true only in the US? Or can Facebook argue that the settlement with the FTC effectively indemnifies them across other jurisdictions? I remember there being something about courts in different countries respecting each other's rulings, but not sure how such an argument would play in (for example) the EU.


It's true only in the US.


But then GDPR is effective only in the EU.

At some point, wouldn't it be simpler to have the same policies globally? Or is that too simpleminded?


Pity to see this downvoted. That's called 'harmonization'. It is one of the bits that makes copyright law such a powerful law because the Berne convention synchronized a whole pile of stuff in a very large chunk of the world. Similar activity tends to occur in neighboring countries with respect to tax law to avoid capital flight.


I was thinking more of the arguably disproportionate impact that California auto-emission limits have had in the overall US market. As I understand it, it would cost too much to produce multiple state-specific versions. So the high-population state with the lowest limits effectively sets the US standard.

But maybe that doesn't apply as much to online stuff. Because marginal cost isn't so sensitive to volume.


GDPR is effective only in the EU. If you don't do business in EU, you can ignore it.


Except that if you read the text he's quoting, it doesn't say what he claims it does. The settlement only covers claims that Facebook's actions prior to June 12, 2019 violated the previous 2011 settlement, as well as certain kinds of claims already known to the FTC at the time of the settlement. If they find out that Facebook has been doing other shady things that violate consumer protections the FTC can still take further action against them over it.


>Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data;

>Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising;

>Facebook must provide clear and conspicuous notice of its use of facial recognition technology, and obtain affirmative express user consent prior to any use that materially exceeds its prior disclosures to users;

>Facebook must establish, implement, and maintain a comprehensive data security program;

>Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and

>Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services.

This is great! What do we need to do to get this to apply to other data harvesting companies like Google and Microsoft?


Which of the items in this list currently applies to Google or Microsoft?


>Facebook must exercise greater oversight over third-party apps, including by terminating app developers that fail to certify that they are in compliance with Facebook’s platform policies or fail to justify their need for specific user data;

That applies to Google and Apple, no? To apps in their stores.


>Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext; and

Why are they storing passwords at all? It's not necessary for authentication. They should only be storing a hash, or better yet a public key derived from the password on the client.
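For context, here is a minimal sketch of the hash-only approach the comment describes, using Python's standard library. The cost parameters and 16-byte salt are illustrative defaults, not a claim about what any particular site actually uses:

```python
import hashlib
import hmac
import os

def hash_password(password: str) -> bytes:
    """Derive a salted scrypt hash; only this value is stored, never the password."""
    salt = os.urandom(16)
    digest = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return salt + digest

def verify_password(password: str, stored: bytes) -> bool:
    """Re-derive the hash from the candidate password and compare in constant time."""
    salt, digest = stored[:16], stored[16:]
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)
```

Because the salt is random per user, two users with the same password store different values, and the server never needs the plaintext after signup.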


It came to light that they were storing passwords in plaintext in an application log. In theory they only store hashes in places where they actually intend to store passwords.


I think this refers to previous stories where there were reports of passwords being logged as part of what I assume was request parameter logging.


Most of this is already restricted under GDPR, or at least is not advisable given the potential fines that could be imposed for a leak. Other parts of the GDPR that should help are blatantly ignored (e.g., privacy popups that are checked by default), so if the US were to implement similar legislation it would be a big win for privacy.


Facebook is also already subject to GDPR


And now that those regulations are generally in place, Facebook is locked in at the top. And all it took was a single $5 billion payment. The iconography they use in the article, three gears locked together and unable to turn, represents this well.


I had a similar question: can't these regulations be generalized? Or are we going to have these same problems in 10 years with a company other than Facebook?


I still find Facebook has one of the most toxic cultures of any company out there. All they've ever done during their history is betray user trust, but it's ok as long as they keep 'apologizing' and then doing nothing to fix it


Did you work there? I worked there for 2 years (it was a mistake), and found that the culture wasn't toxic, just obtuse. They never questioned what they did, or thought about the implications. They focused on "move fast," without really thinking through implications of things.


Working somewhere and seeing a place from the inside has nothing to do with the fact that the company is toxic or not.

I think we all agree that Facebook takes really good care of its employees and keeps them happy, whatever it takes. That doesn't prevent the company from producing a product that is toxic to humanity. It is of course difficult to see that when you are extremely well paid and your salary depends on you producing that product.


> the culture wasn't toxic, just obtuse. They never questioned what they did, or thought about the implications

I question your distinction. Toxic behavior is largely caused by people not caring about harm caused to others.


If you have a few minutes, I wonder if you can expand on that? Does "it was a mistake" stem primarily from '"move fast," without really thinking through implications'? Thanks!


That's a huge over-reaction.

One reason Facebook came to dominate over older social networks was that they provided much, much better user privacy controls, they were better at keeping fake users off their networks, and they kept the site more secure. Before Facebook, social networks didn't have APIs and thus didn't have permissions; everything just scraped the (always public) profile pages.

Moreover many of the things they're being rapped for here aren't even betraying user trust. I believe the only time they've ever asked for passwords for third party services is to import friend lists. That's a useful and optional feature. If a mail provider doesn't have an OAuth style API, how else is it meant to work?

I know everyone has decided it's cool to bash Facebook because they're big and use advertising. But when I look at the actions of the FTC here, I wonder how on earth such trivial things turned into such a huge fine. Advertising doesn't kill anyone, and targeted advertising is much preferable to the noisy punch-the-monkey type of barrel scraping ads the internet used to be filled with. If they use a phone number to better target ads to me (presumably regionally) how is this any different to using my IP address? At worst it means I see more local businesses. Yeah, if they'd had some small print somewhere, a few people would have cared. Most wouldn't and we know this because they use WhatsApp which already gives Facebook your phone number. Big deal?

I just can't get worked up about these trivial "evils" that mostly exist for my own convenience, or which are irrelevant given the set of services me and all my friends use i.e. Facebook gets the same data anyway voluntarily. Why am I the only one?


They certainly don't provide better privacy controls today, if they ever did. And even that they ever did is unclear. https://www.wired.com/story/why-zuckerberg-15-year-apology-t...


Oh man, while I agree with you, I don't see your arguments landing well with this crowd


https://www.indiatoday.in/technology/news/story/mark-zuckerb...

Zuck: I have over 4,000 emails, pictures, addresses, SNS

[Redacted Friend's Name]: What? How'd you manage that one?

Zuck: People just submitted it.

Zuck: I don't know why.

Zuck: They "trust me"

Zuck: Dumb f*s.


Former Chief Security Officer of Facebook on how the FTC judgment is fantastic for Facebook and terrible for users (as @zooko summarized it): https://twitter.com/alexstamos/status/1154025820890865664


I cannot stand Alex Stamos's takes on all of this.

He WAS the CISO for Facebook during all the biggest scandals. He WAS the person responsible for overseeing all of this.

He conveniently decided to exit Facebook once the bad press arrived, and now he is trying to play the "expert card" on everything related to security and democracy. It's really a shame that we don't hold him more accountable.


Can anyone with a firmer understanding of the issue tell me if this is an appropriate penalty or not?

Should we expect to see some of these prohibitions generalized to more companies?

* Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising

* Facebook must encrypt user passwords and regularly scan to detect whether any passwords are stored in plaintext

* Facebook is prohibited from asking for email passwords to other services when consumers sign up for its services



>Facebook is prohibited from using telephone numbers obtained to enable a security feature (e.g., two-factor authentication) for advertising

Yeah, I'm honestly surprised that one wasn't already illegal.


An appropriate excerpt from Matt Levine's column (the gist is that relying on the FTC, rather than Congress, is backwards):

"Americans are biased toward thinking of bad things as being already illegal, always illegal, illegal by definition and by nature and in themselves. If the thing that Facebook did was so bad, then it must have been illegal, so there is no need for a new law against it. At most we need a settlement with Facebook clarifying exactly which things it did were illegal and specifying that it won’t do them again."

Before Mark Zuckerberg (and possibly now Google), we didn't really have anyone abusing personal data to such an extent that what they are doing is clearly, to the average person, wrong. Dumping toxic waste into waterways/letting it reach the groundwater is probably a similar example.

Every now and again an individual, company or industry comes along who is callous enough to make money off the things that surely must be illegal before and until they actually are.


If there isn't escalation in price for violations past this in the future, this may just be the new price to play ball.


This is not a sustainable amount to be fined. I think they made a net income of around $20 billion last year; this is a quarter of that. A couple more of these fines and that's a really dangerous situation: companies at Facebook's stage are valued on multiples of earnings with growth, not revenues, and a 50 to 75 percent reduction in earnings would wipe a lot of shareholder value off the table, meaning the shareholders (basically just Zuck) would need to think twice before committing these violations again.


Except it didn't wipe a lot of shareholder value off the table; the stock rose by 5% after the fine was announced. Zuck himself made a cool billion off the bump.


News like this is already priced in and the market reaction is an irrational mix of random expectations. Some would have thought it would be more, some will think that this the end of Facebook's regulatory troubles, some will think that people overreacted over the last few days and are buying now that the uncertainty of the fine is gone and we know how the settlement stands. If this was a surprise 5 billion, then the stock would have dropped by way more than the 25% you might expect because of panic. You also have the growth story.

A good way to put it is that the stock price would not have changed by more than a few percent today anyway, but in a world with no fine the starting price would have been higher. And in a world where we see more and more 5 billion dollar fines, they'll be less and less shrugged off as one offs even without increasing penalties.


That fines were coming was known, and reflected in the stock already. The announcement reduced uncertainty, causing value to rise.


A larger fine was priced in because investors were uncertain of earlier estimates of the fine being accurate. When the settlement news was announced, the stock bumped because it was better than expectations.


These kinds of fines are rarely lump-sums. The FTC will probably have Facebook on a nice 10 year plan for them to pay it.


According to Kara Swisher and Scott Galloway, this fine is far too small to make a difference to Facebook's behavior.

A fine of a size that exceeded their free cash ($40 billion) would have given FB a huge incentive to clean up their act. A fine of this size is nothing but a toll to pay.


Don't forget that future violations hold Zuck et al. personally liable for violations. Potentially criminally liable.

> Any false certification will subject them to individual civil and criminal penalties.


Oooh, not quite. IANAL, but I've worked with enough to know exactly what this means.

Facebook is required to _report_ how they handle privacy data. They can choose what that handling looks like. They just have to accurately describe that. The false certification is related to materially false statements in the _reporting_. How they handle privacy data is still up to them (albeit potentially subject to future sanction if problematic), but this definitely does _not_ state that Zuck will be subject to criminal liability for future privacy problems.


Why would there not be any escalation?


The fear is that FB makes more donations to political campaigns which would ease the pressure on future violations.

The campaign finance laws in the US are a joke and it would be easy for them to bribe politicians with campaign assistance.


It's sad that their stocks are still up after the announcement.

It is also sad that Facebook is still seen as a somewhat prestigious employer. Some of my friends still brag about working there as if it was the pinnacle of their tech career.


What is the practical process in which $5B is handed over? Is it done over a timespan? Does it get tied up in courts for years and years?


Serious answer: the court has ordered them to wire transfer it within seven days- it’s currently in escrow with their lawyers. Probably FedWire.


Libra! /s


Where does this $5B go, exactly? Into the general fund? If it goes into the FTC's budget, then something is very wrong. This money came from our citizenry, and should be restored to us through something meaningful.


To the US treasury, for general use.

Most (all?) federal fines are this way, although if they specified that some of the total were to be used for direct compensation to victims, then that could be included in the total as reported by the media but obviously wouldn't be spendable by Congress.

I can't point to a case where that's ever happened, though, so the answer in practice is "the US treasury".


OK, I have a probably dumb question. In TFA, I see:

> Following a yearlong investigation by the FTC, the Department of Justice will file a complaint[0] on behalf of the Commission alleging that Facebook repeatedly used deceptive disclosures and settings to undermine users’ privacy preferences in violation of its 2012 FTC order. These tactics allowed the company to share users’ personal information with third-party apps that were downloaded by the user’s Facebook “friends.” The FTC alleges that many users were unaware that Facebook was sharing such information, and therefore did not take the steps needed to opt-out of sharing.

So does this settlement correspond to that complaint? I'm guessing that it must. But then, that means that the FTC negotiated the settlement with Facebook based on an unfiled complaint. That rather implies that Facebook had some influence over the complaint.

I guess that's not very different from plea bargaining in criminal cases, however.

0) https://www.ftc.gov/system/files/documents/cases/182_3109_fa...


Is the "poor, greedy EU is trying to make money by taxing American tech giants via fines for nothing" narrative still alive and well?


I think it is, and that's fine. Unless of course there is a memo saying the EU should be able to fine whenever it deems fit, but people must not be allowed to criticize the EU's behavior when they find it egregious.


This isn't going to do anything. It's similar to an SEC fine. All it's going to be is a legal write-off for the company. What really needs to happen is that companies need to be broken up, or violating privacy without consent needs a strict criminal penalty. But I don't see how monetary fines damage companies this large anymore...



I decided to change the email address I have registered with facebook after reading this ruling and seeing the long list of advertisers who uploaded customer lists with my email address. However, facebook blocks me from updating my email address and instead displays messages like "Sorry, this feature isn't available right now"[0].

Has this happened to anyone else? Is there anything I can do besides creating a new account with the right email address?

0: https://i.imgur.com/4WEBAXD.png


Surprised how little coverage this is getting here. Maybe the FTC should have added another zero to raise some pulses among the tech crew


I think an important detail is that as part of the settlement Facebook is exonerated from all past 'mistakes'. This is actually really good for Facebook and was the wrong move by the FTC. This will encourage future privacy abuses as now they know they can 'get away' with it.


“This settlement’s historic penalty and compliance terms will benefit American consumers, and the Department expects Facebook to treat its privacy obligations with the utmost seriousness.”

So does this in theory apply just to the data of FB users who are American citizens (although I guess practically it may not be feasible for FB to treat non-American user data differently)?


The most important privacy enhancement they could make is nowhere to be found in this settlement:

Provide easy means for users to permanently delete all information, posts, comments, and messages, as well as a setting to automatically expire and delete all of the above as desired. This should include backups.


This is useless because of external backups, and not desirable because of things like posts becoming evidence of criminality.


Not useless at all. If websites are required, by law, to protect your information, seek permission to share it and also institute retention policies based on your preferences, this would be a huge improvement in privacy.

If by external backups you mean my cousin copying one of my photos and saving it or something like that, sure, I take your point. However, I think you understand exactly what I mean. There's a massive difference between that and Facebook having an entire record of my life, complete with tags, pictures and classifications, all of which I have exactly zero control over.

And, no, social media does not exist to facilitate law enforcement investigations. If that is their primary purpose they need to disclose it. It so happens that it can be used that way, but we should not pass legislation based on the possibility of law enforcement using these databases to effectively spy and reach for people's private information.

Cautionary tales abound, including:

https://en.wikipedia.org/wiki/First_they_came_...


I'm curious why this is getting flagged. I suppose that it's so contentious that it's generating pointless argument. But it is a remarkable action by the FTC. Especially given concerns after Obama-era privacy initiatives got reversed.



So I should be free to break any law less serious than murder, I'm sure a judge will see the wisdom of your argument when I appear before them.

Gonna make so much money robbing these banks.


I get that fines don’t always offset the benefit gained from shady practice, but frankly I don’t think Facebook got $5B from this. That’s ~25% of last year’s net income. That’s a punch in the jaw.


Considering their stock rose significantly after this fine was announced, I don't think the people making money from Facebook agree with you.


Given my relationship with Facebook I want to agree and say the fines are way too small, but for all I know it could also be that they hurt badly and the reason the stock rose was just that the stock market had priced in an even bigger fine.

Edit: yep, lots of them.


Investors have known for a long time that a fine was coming. This was already priced-in. Facebook has been planning on this and reporting it in SEC filings, which they are required by law to do. Now the matter is closed, and for the amount that they already warned investors it would cost. There should not have been any downward movement in the price, and there wasn't. Now the matter is put to rest, and the market likes that. If anything, the price should have gone up. Stability and predictability are good.


> That’s ~25% of last year’s net income

net profit.

And accounted for over the last few quarters already.


Net Income == Net Profit


I stand corrected... I was thinking of revenue.


The gear arrangement in the diagram showing how it will all work cannot turn (triple gear engagement)!

Probably unintentional, but a more accurate representation than what they had in mind.


I'd be curious to see where this fine goes. From what I've seen, most FTC fines go to "the U.S. Treasury for general use". Not even Congress gets a chance to mismanage it.

Then again, if they take it out of circulation, maybe it is a net gain for everyone?


At a market cap close to $600B, this is less than 1% of its "wealth".


where should they send the check to so i can get my payout?


Any anti-GDPR advocates want to chime in on all those huge fines that the EU was going to levy on every American company?

Props to the FTC for the size of the fine and the detailed explanation, not so good that the perps get to walk - again.


>$5B

So, basically free.


Take a look at NASDAQ: FB. <0.5% change. This means nothing.


Now up almost 4% after-hours on earnings :)


That fine is a joke. They make like 5x that amount every month.


Do you know something that FTC does not? From the article itself:

> Facebook monetizes user information through targeted advertising, which generated most of the company’s $55.8 billion in revenues in 2018


It was definitely quicker to type in "fb revenue" into Google than type the sentence above...


It's 1/4 of their annual net income. It's far from a joke, but it is true that it isn't ruinous.


Depending on their profit margin, although I suspect it's bigger than 25%


It's basically nothing, compared to what they gain from not giving a damn about privacy. Which they'll continue to do.


Is there a specific amount you think they gain from not giving a damn about privacy? $5B is a lot of money! I worry that if you think it's "basically nothing", you would've thought that no matter how much it was.


Facebook's whole business model is to not give a damn about privacy.

Thus, all the money they make. All of it. The FTC now takes a small portion.

See Facebook care. (not)


But you're equivocating here. They're allowed to not give a damn about privacy in the broad sense you're using; the FTC fined them only for the small subset they weren't allowed to do.

If you think the government ought to just ban Facebook, sure, whatever, but that's not the FTC's job.


Put it the other way around:

$5B is a one-time tax of 25% of net income (or 9% of $55.8B revenue). That's a far cry from a joke; it's certainly a higher rate than AZ or other members of FAANG pay, while not outright terrible.


GDPR can fine up to 4% of total global revenue. In 2018, Facebook's revenue was $55 billion. $5 billion is 9% of their revenue last year (probably a smaller share this year), and almost a quarter of their 2018 net income.
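The percentages thrown around in this thread are easy to sanity-check. A minimal sketch, using the $55.8B revenue figure from the FTC article and assuming Facebook's reported 2018 net income of roughly $22.1B:

```python
# Rough comparison of the $5B FTC fine against Facebook's 2018 figures.
# Revenue is from the FTC article; net income (~$22.1B) is an assumed
# figure from public 2018 reporting, used here only for illustration.
revenue_2018 = 55.8e9
net_income_2018 = 22.1e9
fine = 5e9

pct_of_revenue = fine / revenue_2018 * 100
pct_of_income = fine / net_income_2018 * 100
gdpr_max = 0.04 * revenue_2018  # GDPR cap: 4% of global revenue

print(f"{pct_of_revenue:.1f}% of 2018 revenue")     # ~9.0%
print(f"{pct_of_income:.1f}% of 2018 net income")   # ~22.6%
print(f"GDPR 4% cap: ${gdpr_max / 1e9:.2f}B")       # ~$2.23B
```

Notably, under these numbers a maximal GDPR fine would actually have been less than half of what the FTC levied.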



