Hacker News
Only 9% of visitors give GDPR consent to be tracked (markosaric.com)
651 points by luu on July 7, 2020 | hide | past | favorite | 446 comments



What really drives me crazy are prompts that start by showing two options: "Consent to all cookies", or "customize". If you click "customize", it opens a new modal window with a loading indicator that just doesn't seem to finish. I literally waited 60 seconds and then tried again by refreshing the page, ending up with another infinite loading indicator.

This means that users are de facto forced to click "consent to all". I'm not even sure that's legal.

Now, this was not some obscure small website, but the official Java documentation on docs.oracle.com! They use some third-party service for that - I just tried again, and though it worked this time, it still took 30+ seconds to submit my settings. I have a very hard time believing this is for technical reasons. Either it was made slow on purpose, or it was built by a bunch of morons.

/rant


DockerHub uses a company/product called "TrustArc: TRUSTe" and they use _exactly_ this method. The slowdown is intentional. People who come up with these sorts of tactics, and those who implement them, should go to jail. It's beyond infuriating. I've decided not to upload my images to DockerHub because of this.

Edit: jail time is not for anger, obviously, but for intentionally swindling people. It would be handled on a case-by-case basis, obviously, but data is worth something; people who swindle you out of your data are like those who scam you in the street for your wallet, and as far as I know, there's jail time for that.


> The slowdown is intentional.

Yes, it appears to be. If you check the network tab of your browser, you see it makes about 8 requests, then waits about 5-10 seconds, and then makes another 5 requests.
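For anyone curious to verify a suspicious pause like that themselves, here's a rough sketch (the function name and threshold are mine, not from any real tool) of checking for idle gaps between request bursts using data shaped like the browser's Resource Timing entries:

```javascript
// Find idle gaps between resource requests. `entries` mimics the
// objects returned by performance.getEntriesByType("resource"):
// each has a name, a startTime, and a responseEnd (all ms).
function findIdleGaps(entries, minGapMs = 3000) {
  const sorted = [...entries].sort((a, b) => a.startTime - b.startTime);
  const gaps = [];
  for (let i = 1; i < sorted.length; i++) {
    const gap = sorted[i].startTime - sorted[i - 1].responseEnd;
    if (gap >= minGapMs) {
      gaps.push({ afterRequest: sorted[i - 1].name, gapMs: gap });
    }
  }
  return gaps;
}

// In a live page you would feed it real data from the console, e.g.:
//   findIdleGaps(performance.getEntriesByType("resource"));
```

A multi-second gap with no network activity at all would suggest the delay is a client-side timer rather than a slow server.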


I think there should be some sort of public shaming for shady practices like this, so large companies like Oracle would seriously consider alternatives.


Since your edit seems to be taking the jail suggestion seriously:

I think the data-money analogy isn't unreasonable, but you're overloading "swindle" pretty heavily here, in a way that defeats your point. Companies and people "swindle" others out of money all the time with dark patterns, and this predates computing by ten thousand years. Caveat emptor exists for a reason: the legal system is just too inflexible a tool to cram a person's view of "honesty in business" into.

Defining "making one side of an option an iota more of a hassle than the other" as criminal is broad enough that you're pretty much guaranteed to catch many honest transactors too. The law is a really blunt instrument.


I agree. I’m certainly taking it from my own point of view, and I might be overloading “swindle”. It’s probably a bad choice of word here. It would have to be on a case-by-case basis anyway, but even with the inflexibility of the system and the impossibility of jail time for every theft case, I think it would have to be at least an option when the intent of scamming people for the purpose of selling their data (like those mailing lists) is clear.


There are certain patterns that are illegal though, and presumably have legal penalties (maybe not jail time, but presumably at least fines). False advertising is the obvious example. I don’t think it’s unreasonable to support strengthening the laws around these things.


I’m not a lawyer.

If memory serves, the EU does not operate on the same legal framework as the US, so the assumption I hear here - that the exact letter of the law they’re suggesting is what’s worrying - is not how it works in the EU. They also don’t use case law to define exactly how something will be dealt with.

It’s a lot more fluid and ‘spirit of the law’y there.

I think.


Causing you to get angry is all it takes to warrant justification for jail time in your eyes? I am thankful you aren't in charge of any lawmaking.


It seemed very clear to me from his comment that his justification didn't revolve around him getting angry personally, but the harm these features cause over time. With that being said, your post seems disingenuous. I personally don't believe in jail time for this, but very heavy fines certainly. People need to care about these issues.


Jail time is probably the only suitable deterrent for adtech nightmares. Fines are just treated as a cost of doing business, and don't even directly hurt the terrible people making these decisions.

When someone commits a wrong, the punishment is in part based on the amount of harm. You might feel that adtech causes small harms, if you look at harm against an individual, but adtech folks harm billions of people, every single day.

If you're Mark Zuckerberg, Larry Page, or Sergey Brin, no fine is large enough to make you regret your life choices. The only way to punish billionaires who get rich off harming others is to take away their time/freedom, the one thing they can't just buy back. As long as corporate CEOs can't get jailed, crime absolutely pays in this country.


Exactly. Facebook and Google have been fined in the past and it’s never amounted to anything for them. When they make more money from engaging in bad behavior than the fine costs, it’s a law they’d be stupid not to break.


Then make the fines double what they made from it.


No amount of fines would make a difference? I beg to differ. Make the fines high enough, bill them to the executives personally, I guarantee it will make a difference.

A large enough fine could literally take the company and thus their power away from them. A large enough fine could literally bankrupt the company.

The problem is simply that the fines aren't big enough.


That just means you need fines proportionate to wealth, not fixed numbers.

A truly useful penalty would be to forbid them from owning any capital (directly, or indirectly like via a trust or index funds).


Most studies pretty clearly show that harsher punishments are not an effective deterrent.


So the alternative is... we don't punish them at all? Bear in mind, CEOs face no real penalty for criminal activity.


This refers to "normal" people, not businesses or rich execs. Giving them, for example, 20 years instead of 5 doesn't change much, because 5 is already a lot. If someone commits a crime because of poverty, mental illness, or addiction, no jail time will be a strong enough deterrent.

If businesses actually faced fines that significantly hurt the business (not deductible from taxes, and if they were unable to pay, they would have to close down), it would absolutely help, and it would change the calculation from the fine being a cost of doing business to a punishment for illegal behavior.

Having executives be personally responsible for decisions and go to a real jail would be an even stronger motivator.

The current problem is that there's no real deterrent.


> Most studies pretty clearly show that harsher punishments are not an effective deterrent.

It doesn't need to be harsh: 3 days in jail, with a review in six months to see if they've fixed it, then an additional 3 days in jail. Pretty much any jail time would be adequate for typical C-level execs. And not "raped in the ass" prison time, just a mellow 3 days to contemplate their shiftiness, locked in a comfortable room.


That may be, but a threat of jail hanging over a director or CEO may make them take the correct course of action - effectively becoming the deterrent.



Your source doesn't actually apply well here. It talks about things like substance abuse and addiction being reasons that criminals don't act rationally to punishment as a deterrent. For white collar crime, there's a very different set of circumstances in play. Crime is often just a business decision based on risk and reward. Raise the risk and the reward becomes less worth going for.


This should be punishable by law, jail time included. The intent of the law is to give the consumer a choice. What these companies are doing is trying to circumvent the law by making it extremely difficult and annoying to exercise that choice.

Very similar to trying to cancel a spammy subscription (but that's not mandated by law, so)


No, hurting others and breaking the law justifies jail time and causes anger.


I wonder if justice has improved over time because we've gotten better at tolerating things that make us angry, or because we've gotten better at not getting angry.


I'm guessing it's more about the knowingly-and-intentionally-breaking-the-law bit than that he was angry they were breaking the law.


Jail might be a bit much, but giving them the maximum fine allowed under the GDPR seems quite reasonable to me. I'd even go so far as to try and transfer the debt to the individuals responsible if this causes the company to go bankrupt, as this is clearly malpractice.


> I'd even go so far as to try and transfer the debt to the individuals responsible if this causes the company to go bankrupt

So who's liable in this scenario? Someone in the C suite? The coder monkey who implemented it? Someone in between? All of the above?

Unless I'm mis-remembering, the maximum fine under GDPR is a % of the total revenue of the company, so depending on the size of the company, this could result in bankruptcy of any individuals deemed responsible too.


I'd say the stock owners.


If the risk of investing goes from "the value of your investment can drop to zero" to "your entire personal fortune is at risk", what do you think the outcome is? Think 1 month, 6 months, 12 months, 5 years.

Or to put it another way, is causing another massive economic shock the correct response to dark patterns on websites?


The stock market is not the economy. All you are ultimately advocating for is keeping the existing corrupt corporate leadership in charge and preventing market forces from replacing them with ethical leaders.


When you give someone money to do a thing, you bear responsibility for that thing. If someone is running a clearly illegal business (trafficking drugs or humans) and you are aware of this and invest in their business, then you become an accessory: you are not immune to liability for crimes you were aware of before investing. You are immune to liability for actions taken without your knowledge (which is what we assume the great majority of actions taken by investees are).

Causing another massive shock to the stock market is the correct response here because nearly everyone is engaged in behavior that harms society and should be stopped. Also, the market will recover and really isn't that important. If a small proportion of actors were engaged in this activity then getting an industry compact or other token gesture might be good enough to correct it.


So pension funds? That smells dangerous


So pension funds and teacher's unions? Grandpa's nest egg?


If people were affected by the things they give companies money to do, perhaps they'd be a lot more careful about what they give companies money to do.


To the point where it's not worth giving companies money at all. Stockholders aren't involved in day-to-day ops, anyway, so you're placing the blame at the wrong place. It should be placed on the C-suite.


The problem with modern capitalism is that people can get all the benefits but avoid the bulk of the risk


The alternative seems like a strange land where we imprison waste management workers because their pension funds are invested in companies they literally have no control over and no ability to discover the behavior of.

Or should we restrict the stock market only to the rich and powerful?


I mean... how much of the world do you want to feel like you married a sociopath (aka people who deliberately make the better option miserable) before someone can say "let's disincentivise this behavior"?


Why jail? I think we should just execute them on the spot. /s


Just like we do in the USA for the crime of 'suspicion of selling loose cigarettes'!


What's even worse is the same system (Customize) to then land on a 30 page privacy statement with instructions how to opt-out with each of their 140 "partners".

Alternatively you get advised (also in the context of that 30 page privacy statement) to disable cookies.


Which, after laboriously opting out of all 130 options, will show you an error message that not all partners currently accept opt-outs. It's infuriating.


If people _cannot_ opt out, then the opt in is legally meaningless. i.e. people cannot legally opt in.


The GDPR says that you only have consent if it is as easy to opt out as to opt in. So if it's one click to opt in and 140+ to opt out, then the opt ins don't count. So you're processing personal data without consent.

That's the way we prosecute them.


This is like reCAPTCHA all over again.

Sure, we don't block accessing our site from a proxy, we just make you select the traffic lights forever. Is a crosswalk light a traffic light? Sometimes. Does it count if only the edge is in the picture? What if it's almost completely out of shot? Let's also slow down the loading of the images to a crawl for no reason and sometimes even if you manage to wait for all of them to show up and select them all correctly 20 times in a row we just say you fail anyway and send you back to the beginning.

There's no point in creating the line if you make the line endless. I'm not surprised Oracle of all places is doing what you've described, either.


If I can't opt out in two clicks, I leave the website and click the next search result.


Yeah, but that's kind of a pity if this one was the official documentation...


Or I just right click open it in private mode


No guarantee that they won’t track you through other fingerprinting methods.


When do you ever have that guarantee anyway, though?


I'm not a statistician, but I'd think the less they can fingerprint you, the more you go in the "noise" bucket.
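The intuition can be made concrete with a back-of-the-envelope calculation (a toy model assuming fingerprint traits are uniformly distributed, which real traits are not):

```javascript
// Rough anonymity-set estimate: each bit of fingerprint entropy
// roughly halves the crowd you blend into. This is only an
// intuition pump, not how real trackers score fingerprints.
function expectedBucketSize(populationSize, entropyBits) {
  return populationSize / 2 ** entropyBits;
}

// With ~2^33 (~8.6 billion) users, 33 bits of entropy puts you in a
// bucket of about 1, i.e. uniquely identifiable. Blocking enough
// signals to drop to 20 bits puts thousands of people in your bucket.
```

So yes: every fingerprinting vector you block grows the "noise" bucket exponentially.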


OneTrust, Evidon, and the rest of the "privacy management" services...why can't I just click once at their site that I don't want any cookies from any of the companies that use them?

I just went through a thing with Evidon where each of their opt outs is a JS-required redirect, and when you get to a company like Adobe, you have to click on a million things with a million disappointments.

https://imgur.com/a/d2qMJqn

It's intentional, and fuck you if you don't like it. Sorry, more like FUCK YOU if you don't like it.


It's clear that the GDPR should be amended so that the only way tracking is allowed is for the user to send a notarized letter every year saying that s/he wants to be tracked, with a minimum 3-page attachment explaining why tracking is so great (in their own words, no templates allowed). The company will be required to keep physical copies of these letters for 10 years for random checkups.

Fuck the adtech industry.


Don't you like personalized ads? BTW, I don't get the adtech industry either; most of these targeted ads are definitely not in my favor.


I don't. The only times I have clicked on ads were by accident, or when the site did some shenanigans to make me click. I have absolutely never bought anything based on an ad on a site.


I know exactly which prompt you’re talking about, it’s very common around the web.

Unfortunately, the only solution I found that fixes it is disabling ad and tracker blockers. They seem to break that prompt, though if you set them to be very aggressive, the prompt disappears altogether.


You could report the site to the governing authorities as well.


The solution is to fix your ad-blocker. Any good one will also block the consent management garbage from loading to begin with. uBlock Origin with all the "annoyances" lists enabled will block them all according to my experience.


That’s what I meant by “setting to aggressive”, though I should’ve been more clear on that.


Disabling javascript works pretty well too.


I personally wish that making user-hostile dark patterns on your site would attract the same kind of shame as saying something racist. People would learn _very_ quickly not to do it, and the police/state wouldn't need to get involved.


It's the official solution created by the Interactive Advertising Bureau and is the standard way to opt out for all the companies in this organization: https://github.com/InteractiveAdvertisingBureau/GDPR-Transpa...

There are serious doubts about whether this is a compliant way to handle cookie consent.


Slow on purpose.

All incentives for the site owners are against user interests.

No one benefits from users clicking to No Consent.


> No one benefits from users clicking to No Consent.

The user does. Or did you mean something else I am missing?


I meant the owners of the site and advertisers.


Sites already have plenty of information in order to run good advertising on their content.


Are there any good examples where the consent is done right?


You can't take someone's lifeline away and act surprised when they try to get around it. Of course websites will do what they can to remain viable.


There are many people who will change their business model once it's unlawful. Not everybody is willing to break the law.

But there are people who care for neither their customers nor the law, from food safety violations to illegal pollution of air and rivers. That's why we need stricter law enforcement for companies.


Right now, most places are dragging their feet, because there is at least plausible uncertainty. They're waiting for enforcement actions to see what the limits really are, and then they'll likely comply (or cut off access to EU ips, which doesn't strictly comply, but is more effective than asking people if they are EU persons and then denying service)


Advertising doesn't depend on tracking users, please stop spreading this incorrect idea.


Reminds me of trying to opt out of DAA ads[1]

I have never had much luck using it. I just tried now to see if it's different; I managed to successfully opt out of 10 / 123. Clearly it's intentional, or all these billion-dollar ad companies are running their servers on broken Raspberry Pis.

[1] - https://optout.aboutads.info


Someone should write the next level of counter measures that just sends a bespoke cookie that contains the do not track info based on patterns similar to ad block - without ever having visited the site beforehand. Or if that's not working for some sites pretend you accept tracking but just forge the tracking ids with random data every visit
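As a sketch of what the first idea could look like (everything here is hypothetical: the cookie name, the key=value format, and the imaginary CMP; real consent-management platforms use the binary IAB TC string, which is far more involved):

```javascript
// Hypothetical: build a "reject everything" consent cookie for an
// imaginary CMP that stores choices as key=value pairs. The name
// "cmp_consent" and the format are made up for illustration.
function buildRejectAllCookie(purposes) {
  const value = purposes.map((p) => `${p}=0`).join("&");
  return `cmp_consent=${encodeURIComponent(value)}; Max-Age=31536000; Path=/`;
}

// A browser extension's content script could set this via
// document.cookie at document_start, before the site's own consent
// script runs, so the banner never appears at all.
```

In practice this is roughly what extensions like Consent-O-Matic do, except they answer the real CMPs' own dialogs instead of forging cookies.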


Or just use EFF's Privacy Badger.


I am not even sure if it is legal to make the opt-in easier than the opt-out. But I guess that is some kind of grey area nobody really cares about, even though it is crucial from a UX perspective.


Just tried that (twice) - it took literally a fraction of a second to open. Probably it was a temporary issue?


A dark pattern from Oracle? That's unexpected. That's why people are abandoning Java, by the way.

But yes, doesn't the GDPR say that tracking must be opt-in? I don't see how the pop-up is legal, and making the opt-out inaccessible is probably a large violation.


The standard popups are 100% against the GDPR. In the GDPR, all consent must be explicit, uncoerced, and opt-in. If I recall correctly, there can be a request displayed to the user, but the "No tracking" option must be the default, and must not require any more user interaction than the "Yes tracking" option. If there is a "Yes tracking" button that immediately closes the banner and continues, then even having a "are you sure" dialog on the "no tracking" button breaks the GDPR.

The GDPR gets this exactly right, and advertising companies are flagrantly breaking it. I'm hoping that there is actually some enforcement on it as well.


Can you cite the part of the regulation that requires “no tracking” must not require any more user interaction than the "Yes tracking" option.


Everyone here seems to be unaware of article 7, section 3:

>The data subject shall have the right to withdraw his or her consent at any time. The withdrawal of consent shall not affect the lawfulness of processing based on consent before its withdrawal. Prior to giving consent, the data subject shall be informed thereof. It shall be as easy to withdraw as to give consent.

Emphasis added.

https://gdpr-info.eu/art-7-gdpr/


Are there some semantic games being played here whereby the initial “Accept vs Customize” dark pattern dialogs don’t count as “withdrawing” consent because no consent has been given at that point? I.e. the annoying path of clicking Customize isn’t actually the withdrawal process, but is just the method of gaining acceptance?


They're definitely trying that as a loophole, but it won't hold up. "No action/response" can't count as consent, and so if you never granted it, then they don't yet have it to begin with and aren't allowed to collect.


Also relevant bits from recital 32 https://gdpr-info.eu/recitals/no-32/

> Silence, pre-ticked boxes or inactivity should not therefore constitute consent.

> If the data subject’s consent is to be given following a request by electronic means, the request must be clear, concise and not unnecessarily disruptive to the use of the service for which it is provided.


Usually the "no tracking" option requires no action. Simply don't press anything in the banner.


Then you get nowhere. The actually compliant popups (maybe about 1/3?) have a really hard-to-see "reject" button somewhere, while the others make it impossible or force you to click "learn more" (of course the customization options are behind that, very logical, yes) and deselect everything manually.

Asshole design ("dark patterns") should be illegal generally, but in this case it already clearly is. Can we have some enforcement, already? Ideally for both buyers and sellers of the terrible stuff.


Yes, and a lot of dismiss buttons -- not just for this, but all kinds of ads/pop-ups -- flat out don't work on mobile landscape mode, which gives them plausible deniability.


The author's consent form is very simple and isn't using any shady UX tricks to get the user to consent. One action will opt you in, one action will opt you out.

I wonder what results you would see for something like yahoo, the daily mail, reddit, or other sites that heavily rely on ad revenue, which attempt to force the user to accept the cookies through non-obvious no buttons, or long processes to opt out of cookies.


Verizon/Yahoo/TechCrunch is such a blatant offender, and especially noticeable because they get posted here often. Their modal is a giant obfuscation dark pattern, and as far as I can tell, there is no way to opt out.


I think they're all Quantcast websites. I like that Firefox extension that adds a "I refuse" button: https://addons.mozilla.org/en-US/firefox/addon/qookiefix/


I'd like to remind you and everyone else about Firefox reader mode. Solves like 99% of this user-enslaving shit.


Screw that, I just close the page. If a publication disrespects their readers like that, then the content is trash anyway.


Outside of HN, most content creators, especially professional ones, don't have the option to make significant changes to the platform on which their content is published. I agree that an entire publication's site can be trash, but good writers can still write on bad platforms.


They do as much as engineers at Google or Facebook do to change their companies.

That is to say, each one cannot make much of an impact, but the responsible thing to do would be to not contribute or at least openly advocate for change from within. If enough people inside shift, then change can happen.


I just don't see a world where journalists walk out on their jobs over cookie pop-ups. It doesn't feel like the same level as what Google/FB do, not to mention the supply/demand looks completely different in their industry vs tech so the power dynamic is a bit different.


And this is really the solution to all of it in the end. A user agent which is actually an agent of the user.

Can't wait until first party isolation is the default in browsers.


You can't wait till websites use first-party cookies to track their users?


This doesn't work on Yahoo sites which redirect to a different URL for the consent.


Temporary Containers: you can set all the cookies you want, but 15 minutes after I close the tab they are erased.

Oh, and no-JS by default means I typically don't see those popups in the first place.


I'm not going to do this because I'd still have to click the "I consent" button, which I do not.

Archive.is, it is.


Then give it.... in an incognito tab haha.


Incognito basically just means "doesn't show up in my history". There are many, many, ways to track users without needing cookies.


> doesn't show up in my history

AND drop all cookies from all domains.

> There are many, many, ways

There probably are. I haven't ever seen it work, though. If I go incognito, my ads show something different from my normal profile.

In theory, they can track you from your OS/browser combination (and many more variables), but is there a way to test it?


It isn't theoretical. Google is currently facing a lawsuit (Brown et al v Google LLC et al) for tracking people who assumed Chrome's incognito meant Google would stop tracking them. [0]

[0] https://www.gizmodo.com.au/2020/06/google-facing-us5-7-billi...


That's a hilarious premise for a lawsuit, because that's never what incognito mode meant.


The EFF has a great tool demonstrating the method: https://panopticlick.eff.org/

It’s not theoretical; it’s used in practice today.


Font enumeration is the big one.
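For the unfamiliar, font enumeration works by rendering a test string in a candidate font with a generic fallback and comparing widths against the fallback alone. A sketch of the detection logic (the measurement function is injected here so the logic is visible; the real thing uses canvas or DOM measurements):

```javascript
// Font-enumeration fingerprinting, sketched: a font counts as
// "installed" if rendering a test string with it yields a different
// width than the generic fallback by itself. `measure(fontSpec)`
// abstracts the actual canvas/DOM width measurement.
function detectFonts(candidates, measure) {
  const baseline = measure("monospace");
  return candidates.filter(
    (font) => measure(`"${font}", monospace`) !== baseline
  );
}

// In a browser, `measure` would look something like:
//   const ctx = document.createElement("canvas").getContext("2d");
//   const measure = (f) => {
//     ctx.font = `72px ${f}`;
//     return ctx.measureText("mmmWWW").width;
//   };
```

The list of installed fonts varies a lot between machines, which is why it contributes so many identifying bits.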


Reader mode is awesome! I'd love an option to automatically use it for any page that it supports rather than having to select it each time though.


This is what yahoo were doing in 2018 (it's my twitter) https://twitter.com/pos43/status/999606375217967104

I found a very convoluted way of opting out: https://twitter.com/pos43/status/1001331147110957056

It is now 2020 and I would be interested to see if these methods have been changed to make them easier or harder.


Oath is still the same.


Check out the ublock origin annoyances list, which will block cookie and gdpr pop-ups without breaking most websites.


I couldn't get Verizon site to show me the cookie banner. I guess the banner code lives on another domain that my uBO blocked.


Which is why the GDPR, although theoretically a good idea, is pretty much useless in practice. I'd like to see how many websites offer a reasonable consent widget, instead of one that opts you in by default, shows nothing but a huge accept button to tick, or presents the usual million-and-one checkboxes to untick - and many other cheap tricks that even the most vigilant of consumers will fall for at some point.

I use everything I can to prevent tracking on my machine. But as a general solution for everyone, I don't know what the answer would be; I just don't think it's the GDPR. For it to work, acts forbidden by the GDPR need to be handled very fast, without interaction from the user or legal mumbo jumbo. Without a zero-tolerance policy, the cheap tricks will only grow stronger.

Edit: So apparently if you criticize gov/bureaucrats on HN you get downvoted.


More a problem of enforcement than the legislation IMO. It wouldn't take many cases to be properly litigated before publishers would understand this is a law that is to be obeyed like any other.


Those two are linked fundamentally - even medieval and ancient rulers, hardly what we would call enlightened, knew that making unenforceable laws only breeds contempt.

When companies can set up anywhere and access everything, the law is asking to be flouted. Geolocated financial integration is about the only area where enforcement can start to touch them. Without that, regulators have about as much sway as a backwater dictator writing hate mail threatening arrest to every first-world newspaper that refers to him as a dictator. Right or wrong morally, they just look stupid and delusional.


The law isn't unenforceable. The law is unenforced.

The reason it is unenforced is that the agencies that seem to have the responsibility to do so are understaffed and afraid of the drawn out litigation and political backlash.


A ruling that some of these dark patterns are valid would be a disaster to the people trying to enforce these laws, so undoubtedly they're trying to build a watertight case to prove them invalid - knowing that the companies they're trying to enforce against have ridiculous amounts of funding and entire businesses will fall if the enforcers win.

All this on top of the larger concerns in the GDPR - for example, companies that, once they've collected data for one purpose, proceed to process it for another purpose without any legal basis at all.


The Irish DPC is deliberately dragging its feet on GDPR enforcement, which is a huge problem because the majors like Facebook or Google have their European headquarters there, not so coincidentally.


How can a company that violates law in 13 countries just choose which country gets to enforce?


Because the EU is a union of sovereign states. The EU itself is not sovereign and cannot do anything at all without acting through its members. A member can simply decide not to carry out the EU's wishes. Sure there are penalties, but they take a long time to appear and are fairly minor.


The GDPR has a section that defines a way for an EU institution to "grab" cases, but that's time-consuming and not used much (or at all; I don't know the numbers).


I honestly don't trust governments/bureaucrats to be able to come up with a solution for this issue. This is one of those problems that evolves very quickly, and the solution probably needs to be done by a private company or by each person individually.


In 2016, I think, there were 0 airplane crashes and 0 air travel fatalities. Across 200 countries, hundreds of airlines and several airplane manufacturers, probably thousands of airports, millions of flights.

If you think that was an easy feat, then I don't know what to tell you.

Do you know how it was achieved? With finely tuned and ruthlessly efficient bureaucracy.

When people really care, bureaucracy works wonders.

Bureaucracy is basically formalizing social interactions for a specific topic. Formalizing something ossifies it, but it also prevents your pilot telling your copilot "Shut up!" just as the plane is about to crash into the mountain.


I can't find it now, but I once read a fascinating history of train braking in the US. Before modern air brakes became mandatory, trains had brakemen who had to jump from one car to another to fasten the brakes. That carried a high risk of the brake operator falling off the moving train - and of the whole train becoming uncontrollable. It was an extremely dangerous job, and at the time train companies thought it would be too costly to change the system, so they resisted change.

> Bureaucracy is basically formalizing social interactions for a specific topic.

One could even say that bureaucracy is one form of organized collective action. Which in general is quite necessary for humans as a social species.


> 0 airplane crashes

I presume you meant commercial aviation, rather than aviation in general. Unfortunately not quite true even there. Your broader point stands though.

https://en.wikipedia.org/wiki/Category:Aviation_accidents_an...


> Your broader point stands though.

Well no, you've neatly disproved it.

The reason it works for commercial aviation and not anything else -- not even aviation in general -- is that the bureaucratic processes used for commercial aviation incur a massive overhead. When you have a product which costs a hundred million dollars a unit anyway and can kill 300 people in one shot if it fails, you pay the cost. For anything else it's too expensive, but spending less money causes the bureaucracy to be ineffective.

And even in commercial aviation, the overhead is still there, it's just capable of eating the loss. (Or maybe it isn't, given the miserable lack of competition in that industry now. And then where does that lead us on safety, Boeing?)


> Do you know how it was achieved? With finely tuned and ruthlessly efficient bureaucracy.

Not to dispute the effectiveness of a finely tuned and ruthlessly efficient bureaucracy, but pilots and airlines have a strong incentive to not have fatal accidents; websites however have a strong incentive to track their users.

To use an analogy, enforcing this will be less like mandatory driving exams and more like net-zero carbon emissions.


Airlines and aircraft manufacturers also have vast financial incentives to risk crashes by taking minor shortcuts. That doesn't exist for your driver's test.


That's not in the airline's interest because then the plane crashes and they lose a hundred million dollar plane and get sued and go bankrupt.

It could be in an individual manager's interest because then they get a bonus and are working for some other company by the time the plane crashes, but the airline itself has the aforementioned incentive to put processes into place all on their own to prevent that from happening.


Losing a brand new hundred million dollar aircraft is a huge hit. Losing a 50 year old aircraft is a different story, especially when the options are to retire it or keep flying. Further risking a crash when your airline is facing bankruptcy suddenly looks like a reasonable trade off.

It’s not like a major airline is going to intentionally crash an airplane, but if they can trade 1 billion dollars for an extra crash every 20 years that’s a net financial benefit.


> Losing a brand new hundred million dollar aircraft is a huge hit. Losing a 50 year old aircraft is a different story

A 50 year old aircraft still costs tens of millions, and the biggest cost of a crash is the lawsuits anyway.

> especially when the options are to retire it or keep flying.

50 year old planes fly all the time. The options aren't retire it or keep flying, they're maintain it properly or don't.

> Further risking a crash when your airline is facing bankruptcy suddenly looks like a reasonable trade off.

Which is where the insurance company comes in, and we're back to having an existing bureaucracy with an incentive to prevent that from happening.

> It’s not like a major airline is going to intentionally crash an airplane, but if they can trade 1 billion dollars for an extra crash every 20 years that’s a net financial benefit.

The value of a statistical life is generally regarded as being about ten million dollars. Times 300 passengers that's $3 billion. So that's how much they can expect to get sued for when the plane crashes, in addition to whatever the plane was worth. If they're "only" saving a billion dollars, they're losing money.

And if they could somehow save more than 3 billion dollars then that's what they're supposed to do -- at some point safety measures cost more than the value they provide and VSL calculations tell you where that is. (And if you don't think so then I assume you never travel by automobile or buy anything that has ever been in a truck.)
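For what it's worth, the back-of-envelope math in the comment above can be written out explicitly. The figures ($10M VSL, 300 passengers, $100M airframe, $1B in savings) are the comment's illustrative assumptions, not actuarial data:

```python
# Expected-cost check using the figures assumed in the comment above
# (illustrative assumptions, not real actuarial data).
VSL = 10_000_000           # assumed value of a statistical life, USD
PASSENGERS = 300           # assumed passengers on board
PLANE_COST = 100_000_000   # assumed replacement cost of the aircraft, USD

liability = VSL * PASSENGERS         # expected lawsuit exposure
crash_cost = liability + PLANE_COST  # total expected cost of one crash

savings = 1_000_000_000    # the hypothetical "$1B saved" by cutting corners

# Cutting the corner only pays off if the savings exceed the expected cost.
print(f"crash cost: ${crash_cost:,}")             # crash cost: $3,100,000,000
print(f"worth the risk: {savings > crash_cost}")  # worth the risk: False
```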


Insurance on older aircraft, much like on older cars, drops because they're worth less.

Your lawsuit numbers are also wildly off. Ex: “The US aviation giant has settled the first in a series of lawsuits filed by families of 737 Max crash victims. Boeing will reportedly pay $1.2 million to 11 families of victims killed in the 2018 Lion Air crash.“ https://www.dw.com/en/boeing-settles-first-lawsuit-with-737-...


No they're not. That's what they're supposed to get. If courts don't actually give families that then the problem is in the court system, not the company.


It doesn't matter where the problem lies. The value courts assign to life informs the airline when they estimate how much they would lose due to a crash.


It matters where the problem lies if you want to fix it.


Your argument is all "logical", but it doesn't agree with reality; the Boeing 737 Max is a prime example.


But then you're proving the other point just as much, because the 737 Max is also subject to the bureaucratic regulatory system being held up as an exemplar of something that works.


For 2 crashes out of 387 aircraft and several hundred thousand flights to be considered unacceptably dangerous suggests the system is largely working as intended. The aircraft had 0 fatalities in US or EU, globally an order of magnitude safer than driving, and it was still grounded.

Imagine if we took car safety so seriously.


The thing is, there isn't a huge commercial vested interest in having plane crashes happen.


"but it also prevents your pilot telling your copilot "Shut up!" just as the plane is about to crash into the mountain. "

I am curious: how does regulation prevent that?

(and how does it prevent a suicidal pilot from locking out the co-pilot and crashing into a mountain on purpose?

https://en.m.wikipedia.org/wiki/Suicide_by_pilot

)



And the aviation industry is not exactly a massive innovator at this point. The most commonly used commercial aircraft today were developed before the world wide web was even a thing.

Why don't we just skip a few steps ahead - delete the internet and go back to cable TV? That's where we're headed for anyway.


I definitely don't trust private business to fix this. In case you haven't noticed, private-business efficiency is largely about externalising costs and internalising profits. So private business is completely ill-equipped to fix issues that affect society as a whole.


Sorry, privacy as a personal responsibility has failed outright. For many services there is simply no (online, i.e. practically relevant) alternative, because none of the market players have an incentive to be privacy preserving (think major news outlets) or because the service is not interchangeable due to network effects (facebook, twitter etc).

The GDPR is actually sufficiently abstract IMO to make government enforcement possible and practical. And if you look at how tracking evolved, much of it is still the same old cookie-setting (from what, 25 years ago?) and the stuff that isn't (like ultrasonic profile matching and other shady stuff) is pretty clearly illegal. So it just needs political will on the national level where the enforcement agencies reside. Max Schrems' cases against facebook have shown time and again that these enforcement agencies are often simply unwilling to do their job, with the Irish one being a particularly bad example. But it is possible to do this.

EDIT

Also, regarding the personal responsibility aspect: we're being tracked by platforms we don't even have a user/customer relationship or any other contract with. Someone uploads a picture of me on facebook, and they build a profile based on that? Uncool. Is that a problem between me and the uploader? Certainly. Does that take responsibility from facebook to not mine that data? Nope.


>For many services there is simply no (online, i.e. practically relevant) alternative, because none of the market players have an incentive to be privacy preserving (think major news outlets) or because the service is not interchangeable due to network effects (facebook, twitter etc).

But here's the big question: would these services have even existed in the first place if these laws had been in place? The internet gained massive popularity because most of the resources on it are free. Look at porn - all of the well-known sites run on ads, while the paid sites are essentially unknown. I don't know anybody who uses the latter. Wouldn't Facebook, Twitter, news sites etc. have gone the same way if they required you to pay? Just look at what happens when a paywalled article gets posted. Either somebody posts a way to bypass it or a lot of people will never read the article.


Most services existed before, with a variety of models. E.g. both subscription-based and ad-revenue-funded free newspapers existed long before the internet.

Somehow they managed, without the need to track ones every move.

Would the internet look the same, with the same players and behaviours if these laws existed? No probably not. But I also don't think the internet is in a very desirable state.

Maybe we would have had more invention and user acceptance of different ways to pay for and consume services, instead of this race to the bottom of cheap ads.


>Most services existed before, with a variety of models. E.g. both subscription-based and ad-revenue-funded free newspapers existed long before the internet.

Yes, but aren't most of those newspapers essentially partisan politics? Every free newspaper I've seen IRL has been backed by somebody trying to push for politics. From my experience they also tend to not be that informative.

>Maybe we would have had more invention and user acceptance of different ways to pay for and consume services, instead of this race to the bottom of cheap ads.

The problem is that there is no price equilibrium that will work. People from poor countries can't afford to pay what is a reasonable price for people from rich countries. Poor people in general can't afford to pay. Kids/teenagers can't afford to pay. And this gets complicated even more by the fact that we don't even have payment methods available to everyone. Even if you could afford it, you couldn't pay. For example, I've never owned a credit card in my life, and as far as I know I'm not eligible for one. They have a minimum income limit that I fall under.

I'm saying that the ad model is what made the internet so commonplace for a lot of information.


I don't know the details worldwide, but Europe has, or had, several free daily newspapers for commuters, piled up in train and bus stations. The ones I know used to be fairly neutral because they just printed what came in from news agencies like Reuters, AP, etc. with minor changes (the license fee is much higher if you want to print 1:1).

As for informational value: they were good enough to keep up to date with what was going on, and then you'd use other sources to dig deeper.

I'm not against ads; in fact, in small markets like Switzerland, where I'm from, fully subscription-funded newspapers have never been viable to my knowledge. They were always majority advertisement-funded. The subscription basically pays for having the newspaper printed and delivered, not much more.

But the ads that used to pay people's bills were those full-pagers; clients paid tens to hundreds of thousands of dollars for a single issue. It still worked reasonably well online on desktop, with plenty of screen real estate, but with the rise of mobile, prices collapsed.

I worked at one of the largest publishers in Switzerland during that time and saw first-hand what got cut during re-orgs: fact checkers, specialists, investigative departments, international correspondents, etc. Newsrooms from supposedly independent newspapers in the portfolio got merged. More and more pressure to write articles that perform on Facebook.

All things that a normal reader will not immediately notice, but that severely affect quality and journalistic integrity long term.

While I think having all information available to everyone worldwide would be amazing, I don't see the current situation as sustainable, quality-wise, for anyone beyond the top few percent of market leaders.

I don't know how to solve it either, which is why I said maybe we would have come up with different models if we hadn't gone down the path we did. And a lot of these old organisations have to take a lot of the blame for just not reacting to change for years.

But whoever you point fingers at, as it stands, we are headed for a lose-lose, information-wise, in my opinion.


I would go the other way with it -- the problem is we don't have a low overhead system for anonymous micropayments, so ads are the only competitive way to offer a service with a very low cost (and thus price) per use.

If there was an easy way to anonymously pay the site the five cents they get from the advertiser without incurring 500% payment processing overhead then would sites even be using advertising?

But then we get to much the same result. What we need isn't new privacy rules, it's to delete the old banking rules that prevent efficient payment systems from operating.


I agree that it would massively help, but I don't think it would solve the entire problem. A lot of my interests online were cultivated as a kid where even a 10 cent charge would've made me click away. I'm sure it wouldn't have been an issue in highly developed countries, but it would've been a limiting factor for me.


So the model that works best for micropayments in most cases is exponentially decreasing prices over time. When the content is new it costs $1, in two months it costs $.50, in two months more it costs $.25, and so on down to zero, at which point it becomes free proof of the quality of your work to get people to pay for the newer stuff.

You can actually make more money using a pricing model like that, because you get to charge $1 to everyone who will pay $1 (everybody wants everything ASAP), but in a few months you still get the quarter from the guy who would only pay a quarter. And having a large volume of free old works to show the world you can produce good material is how you get new customers.

Which also solves the problem for people without money. (This is, incidentally, how copyright was originally intended to work. Screw you again, Mickey Mouse.)
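The decaying-price schedule described above is easy to sketch. The comment only gives the $1 starting price and the two-month halving; the floor at which content becomes free is a made-up parameter for illustration:

```python
# Sketch of the "exponentially decaying price" model described above.
# Start price and two-month half-life come from the comment; the floor
# below which content becomes free is an illustrative assumption.

def price(days_since_release, start_price=1.00, half_life_days=60, floor=0.05):
    """Price halves every `half_life_days`; once it falls below
    `floor`, the content becomes free (and serves as advertising
    for the newer, paid material)."""
    p = start_price * 0.5 ** (days_since_release / half_life_days)
    return round(p, 2) if p >= floor else 0.0

# New content costs $1, two months later $0.50, then $0.25, ...
print(price(0))    # 1.0
print(price(60))   # 0.5
print(price(120))  # 0.25
print(price(360))  # 0.0  (below the floor -> free)
```

This captures the claim in the comment: everyone who will pay $1 pays it immediately, while the quarter-payers simply wait, and the back catalogue eventually becomes free marketing.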


Sure, this pricing model might work, but it'll still turn away people compared to right now. News that happened a few months ago isn't interesting anymore. Most people aren't going to read that. This essentially means that the section of people that can't afford the $1 price tag won't read your articles, unless they're researching some story later.

A lot of what we have online just isn't viable on pricing models like this. The main divide you'll see is likely by country due to wealth differences.


It's too late. You can't start to charge for things you used to give away for free. Plus with the internet, there's always an alternative, you just haven't found it yet. So the moment you run into a paywall you will turn around and start searching for the same thing you were just looking for but for free.


> It's too late.

What do you say we try it and find out?

> So the moment you run into a paywall you will turn around and start searching for the same thing you were just looking for but for free.

Because the existing paywalls are some nonsense where you have to give them your home address and sign up to pay a recurring monthly fee which you know is going to be a bear to cancel and costs dollars rather than cents, whereas what it ought to be is a browser plugin that just pays them automatically when you visit the site as long as the amount is below your threshold (with a circuit breaker that requires you to manually approve if you get charged more than like $5 over the course of an hour).
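The plugin behaviour sketched above (auto-pay small amounts, with a circuit breaker past roughly $5 per hour) might look something like this. The class name, cent-denominated amounts, and both thresholds are hypothetical choices, not an existing API:

```python
import time

class MicropaymentBreaker:
    """Auto-approves tiny charges but demands manual approval once
    spending in the trailing hour exceeds a cap - the "$5 per hour"
    circuit breaker from the comment above. Amounts are in cents to
    avoid floating-point drift; all names and limits are hypothetical."""

    def __init__(self, per_charge_limit=10, hourly_cap=500):
        self.per_charge_limit = per_charge_limit  # cents per single charge
        self.hourly_cap = hourly_cap              # cents per trailing hour
        self.log = []                             # (timestamp, cents) pairs

    def _spent_last_hour(self, now):
        return sum(cents for t, cents in self.log if now - t < 3600)

    def request(self, cents, now=None):
        now = time.time() if now is None else now
        if cents > self.per_charge_limit:
            return "manual-approval"   # single charge too large to auto-pay
        if self._spent_last_hour(now) + cents > self.hourly_cap:
            return "manual-approval"   # hourly circuit breaker trips
        self.log.append((now, cents))
        return "auto-paid"

wallet = MicropaymentBreaker()
print(wallet.request(5, now=0))    # auto-paid (5 cents, under both limits)
print(wallet.request(100, now=1))  # manual-approval (single charge over 10c)
```

The point of the sketch is that the user experience stays frictionless at the five-cents-per-article scale the thread is discussing, while runaway charging still requires an explicit click.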


People have been talking about this idea for maybe 20 years (that I can recall). I'm also pretty sure there've been numerous attempts at implementations because really it's not that complicated a thing to make.

None of them caught on though because when it comes down to it, most sites that successfully make money can probably make more money from advertising than they can from this microtransaction system so there's no incentive for them to adopt it.


None of them caught on because the regulatory environment makes it effectively impossible to implement efficiently and all of the attempts had to make fatal compromises in order to comply which made them useless.


The payment processing overhead has nothing to do with rules and everything to do with Mastercard and Visa having decided to not compete on fees.


Then why don't you go into competition with them and undercut them on fees, if the rules make it so easy? Why doesn't anybody?


Barriers of entry exist and have nothing to do with rules.


That's a weird argument to make. Maybe the businesses ("services") would not have existed had these laws already been in place. The question is whether the world would be a better place without them. I don't think the answer to that question is so straightforward.

We as a society implement laws to prevent undesirable behaviour all the time. You can certainly argue that the law against robbing banks, for example, has prevented business innovation around bank robbery, but I believe this is a desirable outcome, and I think most would agree.


I think the answer is pretty straightforward to me. If the internet hadn't been what it was then I probably couldn't even communicate with you, because I would never have picked up enough English to do so. This is just one effect of so much of the internet being freely accessible. If payment had been required I probably would've stuck to my native language sites, if I had used the internet/computers at all.

I think the internet providing so much information for free is what made it so amazing. I understand that it's not really free, but it's at no monetary cost to the user. The moment you slap a monetary cost on it you create a disincentive for the user to engage with the service.

I do understand though that nothing is free. We are probably going to pay a large price as a society for it.


It can easily be argued that a lot of these services are a net negative and it would have been better for everyone if they never got off the ground in the first place.


I'm okay with killing everything currently online (given backups, of course) in exchange for a sensible micropayment/nanopayment architecture.

Ads must die.


If Facebook etc. wouldn't exist if users knew their prices – ad-infested websites require everyone to pay, because companies recoup the cost of advertising by charging higher prices – then by the logic of a market economy they shouldn't exist.


I have the complete opposite view. Companies provably can't be trusted on anything touching privacy, and individuals don't have any power to act on it. Governmental oversight and bureaucracy is the only tool that works for a problem like this.

It's a proven, efficient, and universally approved way to already enforce food, fire, travel safety and so on. It's only logical that privacy safety follows the same steps. GDPR seems to be a good approach for it, just pending on widespread enforcement of the rogue entities that aren't yet following the law.


But privacy isn't like food, fire or travel safety. Even assuming absolute cynicism it isn't very exploitable for them to use it for bad purposes.

I liken the difference between the actors to a housecat versus a large dog with a lamb. The cat at worst could give some scratches, but would probably just annoy the lamb by jumping into its wool and kneading it. The dog ideally would look after the lamb, but could also inflict serious bites or even rip out its throat if it wants. Both may want the meat, but only one has the ability to kill to get what it wants. As the lamb, facing unknowns, I would go with the cat over the dog out of sheer distrust.


I'll actually argue that it very much is.

First, there's the historic precedent of collected information eventually making it into the wrong hands. The Prussian "pink lists" are a classic example, but essentially everything that ended up as PRISM can be taken as a more modern one.

And yes, putting power into government hands to regulate the collection of that data, and then arguing that it's dangerous because of the government, might seem a bit contradictory. It's not, in my mind. These types of legislation are supposed to disincentivize collection by private entities, after all. Government and intelligence services are (too) close, but they are distinct.

And then there's the very real [1] ([2] if you want it more juicy) possibility of corporations targeting individuals directly, for one reason or another. Here in the west, this sort of thing would result in your Uber becoming more expensive or unavailable, but imagine being a government critic (or an activist against e.g. organized crime) in Brazil right now. All that data going god-knows-where, with the express intent of the collectors to sell it to anyone? Not a great outlook.

[1]: https://en.wikipedia.org/wiki/Greyball [2]: https://news.ycombinator.com/item?id=23529035


I think your allegory falls short: the government isn't in this scenario collecting the data, but rather reining in the data collection of private entities. The end result is less data collected instead of the same data in different hands.


Thank you for expressing so clearly what I was struggling to say. :)


They just have to create judicial precedent a few times and the system can easily hand out an impressive amount of warnings and fines.


They just need to crack down much harder on websites and hand out big fines for dark patterns until the websites switch to sane defaults.

It's just a question of how much the companies in question believe that the EU is going to come after them. Once the cost calculus shifts to being on the safe side it'd quickly turn into a norm, but it requires showing some teeth.


It's already happening, but slowly, slowly. The various agencies need time to get their act together, and have started with the most egregious excesses. It seems rather unlikely at this point that consent forms that apply inappropriate pressure - explicitly called out in the GDPR as invalid - will somehow escape enforcement. I'd expect invalid cookie banners to be on the chopping block sometime fairly soon.

Additionally, the risks to advertisers and websites are quite large, which I'm not sure they fully appreciate (unless I'm misunderstanding something here?) - it's not that the consent form is illegal, after all - perfectly legal to have a confusing consent form. Rather, it's that all the personally identifying information thus collected is illegally acquired and held (and it's hard to argue the violation wasn't intentional, to boot!), and the fines for that can be quite large, and can be applied retroactively to whenever the GDPR came into force. Rules always get stretched, but stretching them in this particular way sounds pretty unwise (unless they're cynically trying to have some subsidiary go bankrupt or otherwise encapsulate the risk).

With any luck, the GDPR norms on this front will become global norms, but it's too early to tell.


I'm pretty sure GDPR states that it must be opt-in in a non deceiving manner.


Exactly; that's the point. An opt in that is coercive is not a valid grounds for holding personal data, ergo, that data is held illegally and subject to enforcement by a data-protection authority. Doesn't matter if everyone clicked yes.

A coercive opt-in isn't so much illegal; it's simply void. Having a coercive opt-in would be fine yet weird (as I understand it) if you then proceeded to only retain and process personal information to the extent you would be permitted without the opt-in. (IANAL, and only as far as the GDPR is concerned, perhaps if it's misleading enough that violates some fraud statutes somewhere, but that's a different issue).


I can only speak for myself, but actually I go to the detailed cookie settings and unselect everything I'm not ok with. In particular, I unselect ga and Fb pixel because I believe the habit of collecting visit data at an all-seeing central site isn't worth it, and actually is the major characteristic of a dystopian future that hacker culture has always been opposed to. If the consent settings are rubbish, I don't bother and leave the site; OTOH, if the settings are reasonable, I usually accept optimization and some analytic cookies.

I can't stress enough how much of a game changer that is, by revealing the amount of third-party trackers on websites (in the order of up to 500 on a single site) alone.

So I guess GDPR works for me. There's a lack of enforcement, though. But that could change; for example, in Germany, bored law firms (i.e. those currently without clients), or anybody actually, can print money by starting an "Abmahnwelle", i.e. insisting on GDPR compliance within a certain period of time, then suing any site for non-compliance, all the while being entitled to compensation of their expenses if they have a cause.


It may be the German implementation is different in this regard, but IIRC the GDPR did not create a personal right; enforcement is solely at the discretion of the relevant authority. I thought that was intentional; as a way to limit frivolous lawsuits, but I'm no expert by any means - are you sure this is actually possible?


No, I'm not very sure, IANAL. I just thought that "Abmahnen", for once, could actually be used for something useful, after it was used to bother small sites for violating the German "Impressumspflicht" (the duty to include press contact info on sites). I'm also not sure it can be legally excluded for the GDPR specifically, as it has been a staple of German "Rechtspflege" (private enforcement of law) for a long time. I know there has been a relatively recent change in jurisprudence where compensation for "Abmahnen" was denied when it wasn't carried out in good faith, but I don't believe GDPR violators can rely on that one to continue malpractice. I also believe that when the GDPR was adopted in 2016, internet companies got a two-year period to bring their sites into compliance.


Sounds like it's actually unclear: https://www.datenschutz.org/dsgvo-abmahnung/

Sounds to me like it's not in general permitted (but with a huge exemption), and is possibly permitted to the extent that failure to comply with the GDPR constitutes unfair competition. So that means you can't simply use the "Abmahnen" procedure to enforce privacy rights, but rather need to demonstrate you're a market competitor and that it's relevant to your competitive position. Edit: no, I think I misread - that may be a possible conclusion, but it's just not clear.

IANAL and all that.


> Which is why GDPR, although theoretically a good idea, is pretty much useless in practice.

This is very much not true. GDPR isn't restricted to regulating tracking on websites. It also restricts and regulates what companies can do with the customer data they are in possession of. Through my day job I constantly interact with large enterprises (Fortune XXXX) that have vast amounts of personal data through their regular operations (banks, telcos, car manufacturers, airlines, insurance companies and the likes). Nearly without exception they go to great lengths to ensure the data is managed correctly, not used for purposes the customer hasn't explicitly consented to etc. This is as a direct result of the GDPR.


So apparently we (EU) have the right to rip a new one to any big-ass company that is too mucho and gives zero shit about our Right to Privacy. I think that this got you downvoted.

I also do my best to avoid tracking (Firefox add-ons, Hosts file). But up until now I had no leverage against any companies. Now they ought to be afraid when they play dirty.


Do seem to have a lot of downvotes, yes!

GDPR is about a lot more than the tracking stuff though, it is about the personal data companies hold and are responsible for, your permission to request it, the risk of fines if they don't comply. Whatever dark patterns they use for tracking logic, they are still bound to use best practice security on the personally-identifiable-information they may hold and may be fined if they do not. That is what GDPR is mostly about, as I see it.


All that's required is for the regulators to start holding companies using these dark patterns to account. These things aren't GDPR compliant by any reasonable interpretation, and these companies are basically trying their luck to see what they can get away with. If fines start coming to them they'll change their tune pretty quick.


They are so blatantly non-compliant I wonder if at least some companies are actually trying to discredit the GDPR and the idea of regulation in general by annoying their users while blaming the GDPR.


I am sure that's at least part of the thinking.

And judging from comments here on HN, it seems to be working. Just two posts above yours there's a comment stating that any government solution cannot fix the tracking issue, and that it can only be addressed by a company or an individual.


FWIW, I didn't downvote you for this.

And I just upvoted for this piece of comedy:

> So apparently if you criticize gov/bureaucrats on HN you get downvoted.


> is pretty much useless in practice

It's useless because only a few people care enough to report these violations. If anything we need a campaign to get people to start reporting sites which violate the GDPR requirement of an informed opt-in.


Why would anyone consent to be tracked if given a real choice? What are the benefits? The 9% look like an error.


I know people who just automatically click yes. I don't think they ever even care to read what it says. They just have the habit of clicking yes on every prompt to "make sure it works".


Decades of bad UI design have trained people to click away these kinds of things without reading them. Even people like you or I who should otherwise know better frequently do it.


I wonder what percentage would remain if the no-consent button was the primary button (filled button, saturated colour, bold font, etc.) and the consent button the secondary (outlined or transparent background without a border, regular font, etc.). I can't imagine it going much above 0, and I'd even guess that the ones who do consent intended not to consent but are expecting it to be the de-emphasised button.


He switched the yes/no button positions... they would have automatically clicked "no" instead...


"What are the benefits? "

? That the site exists in the first place ?

Why is it so hard for people to do business math and these conversations on HN never have anything to do with material reality of the parties involved?

It's understandable that we all want something for free - that's easy - but it's not understandable that we can't grasp how revenue is used to pay journalists etc. We know they're all on the edge of going out of business and that thousands of news organisations are gone.

Whatever our 'personal cost' is, targeted ads definitely work; they make money.

So that's the 'benefit' - you get people doing labour for you which otherwise, you'd have to pay for.


Needing ad revenue and needing to track visitors are different things.

Structurally obviously we are now in a situation where “well targeted ads” can generate some revenue, and “well behaved ads” barely can. The future has to be one where dumber ads pay more because tracking isn’t technically possible and/or illegal.


> The future has to be one where dumber ads pay more because tracking isn’t technically possible and/or illegal.

The other option is that everything ad supported goes out of business because the ROI on dumb ads is negative.

I don't want to sound like I know this for a fact, but I'd just like to point out that the future doesn't "have" to be anything. There are always multiple options.


> The other option is that everything ad supported goes out of business because the ROI on dumb ads is negative.

Yes by “has to be” I was expressing a personal desire and prediction, nothing else.

It’s probably not one or the other but somewhere in between.


That's fair. I think I was tired when I read your original comment.

You have a nice day.


We have a past where dumb ads used to work reasonably well, and could sustain a huge diversity of publications of all sizes.

There may be some reason why this can't work again, but all of the reasons I can come up with point to a higher ROI for those ads, not lower.


"The future has to be one where dumber ads pay more because tracking isn’t technically possible and/or illegal."

So you're inching closer to the reality. Read your statement again though: where is this 'magical ad tech' that enables newsies to make a decent dollar?

It's non existent.

Ergo, they are out of business and you get severely limited content.

So, either people can choose to share some basic information, pay, or get very little in return - that's the current 'business math'.

GDPR fails to take into consideration that math unfortunately.

A more progressive solution would be to create solutions not just legislation that doesn't actually change the dynamic.


You mean we are going to see proportionally more content from people who actually know what they are writing about rather than professional clickbait writers? Doesn't seem so bleak to me.


I am afraid the incentives mean we'll see more press releases from sources. Guess who has an interest in paying to publish and host things? Those with content which makes them look good. Not exclusively of course, but certainly a higher proportion. Ironically, the clickbait crap makes that sort of shilling less effective, akin to how TV ads have less influence with streaming. It displaces the views and thus reduces the incentive to produce it.


'Unpaid home hobbyist writers' don't have more credibility than newsrooms, with budgets, copy editors, massive networks of sources, researchers, professional staff support - which costs money.


I thought the 00s embarrassingly proved that they did, as five minutes of fact checking with a search engine showed they had gotten middle-school-level facts deeply wrong.


That's a little bit sarcastic, fine, but the news is a professional business.

Random bloggers generally don't have credibility. They can in some instances check facts and even break big stories, and do so in a very credible way; however, there's little incentive and they have no power, because they don't make money and mostly don't have an audience. More importantly, nobody cares about them. If they want to speak to 'The Minister of Defence' about an important event, the Minister will not respond. When the BBC/CBC/CNN needs that interview, the Minister likely will, if there's something on the agenda.

Because they don't have credibility, sources won't trust them, and sources are the most important channel for difficult-to-get-at information.

Wikileaks and other teams had to build credibility.

And of course the fact that news rooms cover a massive array of subjects.

Relegating the entirety of news to 'some checks on Google' is glib.


I would rather have the law lay the ground rules of what's acceptable in a society. "Free markets" (not totally free, but free enough) can adapt to the new terrain. IMO, most laws trying to get into the weeds and direct desirable business models end badly. They entrench current models, reward easily gamed metrics but not the intended results, or are written by lobbyists.


Even tracking ads don't allow many sites to make a decent dollar!

But if we assume that tracking ads provide enough revenue, and we make the tracking illegal, once the market settles I bet that non-tracking ads will be reasonably close.


> Ergo they are out of business and you get severely limited content

Yes. That’s a side effect I think needs to be taken into account. I see ad blocking and the GDPR as a way of cleaning out bad actors (and their content), but hopefully also a way to give an advantage to good actors. Like authorities shutting down restaurants that don’t pay their taxes. The selection short term goes down, jobs are affected etc., but it’s unfair competition if they aren’t playing fair.

> where is this 'magical ad tech' that enables newsies to make a decent dollar? It's non existent.

To be clear I don’t mind if ad blocking and the GDPR in concert work to kill 90% of the 2000-2020 era web in terms of “free content”. Should the survival of online firms, content producers, the adtech industry etc even be a consideration here?

> people can chose to share some basic information

I’m all for that. But without transparency like what the GDPR tries to enforce, people aren’t making an informed decision.

I never consent to anything even on sites I really enjoy and wish would survive, because I don’t trust the chain of actors involved, nor do I understand exactly what I consent to.

But I’d be happy to fill in an extremely detailed survey about everything I’m interested in down to my shoe size, and have that data accessible by any site without even asking me. That together with the browsing context should be more than enough to show targeted ads.

In the end this discussion usually comes down to the question “yes you think subscriptions etc is better but don’t you think others should be free to pay with PII if both sides of the transaction agree? They are adults after all” to which my answer is basically “no”.


I consent to website 'Measurement' cookies - I'm happy for most websites to do statistical analysis on their visitors. I opt out of marketing cookies etc.


The author of the article mentioned a GDPR compliant alternative to google analytics that doesn't require users to give consent for tracking simply by not relying on personally identifiable information.


I was told it would make my shopping experience more targeted. That I would want my materialism overlords to have a detailed accounting of my wants and likes to make consuming oh so much easier.


The same reason I enable telemetry in software I use - I want the system to be optimized for my usage pattern.


Except that tracking here is used for advertisement and not to give you a better service. Would you still accept it?


Some people would rather have products they are interested in rather than products they aren't interested in


Some people would rather have products relevant to the site they are on rather than products irrelevant to the site they are on


Presumably those are the 90% of people who, given the choice, opt out.


Products that you didn't find yourself by deliberately searching for them are never the products you want.


I would, certainly. Have you seen the kind of ads people can get on Youtube, for example? There's some crazy bullshit there. I would regularly go to the Google Ads preference page and correct their profile of me (sadly many times that means I keep getting the same 2-3 ads because the profile becomes too specific, but it's much better than the alternative). I also advise it to people who complain about ads on Youtube. It's also a great way to show people what Google can easily infer about you from your data. Luckily Google now lets you pay to get Youtube without ads, so that's what I do.

If tracking means the websites I enjoy get more money out of me, and that I get better ads for it, I'm all for it. For websites I don't enjoy, I agree completely, I wouldn't share anything with them, but I would also try to avoid them, so essentially I do that naturally, consent or not.


Google (search) had that kicking-ass feature that it would show you some 20 ads relevant to your query clearly marked on the side bar. It was incredibly useful.

Then they decided to track everybody, so the ads stopped being relevant and were just about stuff you were already looking for and for the pages you have already opened. Useless as it became, people stopped clicking and they had to start the dark patterns of mixing the ads with content, and filling the first page with it.

Now you are saying tracking is improving your experience, but you just said the ads are useless. Why is that? Are the original ads harmful? And you are protecting yourself from that by surrendering extra information?


> Now you are saying tracking is improving your experience, but you just said the ads are useless.

Where did I say that ads are useless? It's kind of scary that you came to that conclusion from my comment.

Is it because I am talking about ads available on Youtube? I hope you don't get these ads, but there are some crappy ads on there that most people will see, unless Google believes you fit certain criteria (hence the crappy ads some people get).

Normally I would have suggested you go find a conspiracy theory channel and then watch what Youtube gives you afterward, but I think Google fixed that. At one point I was interested in watching people debunk flat earth conspiracy theories, and god, my whole Youtube experience changed; I stopped doing that quickly.

> Google (search) had that kicking-ass feature that it would show you some 20 ads relevant to your query clearly marked on the side bar. It was incredibly useful.

Incredibly useful for what? The end goal of any ad is to make a sale (it can be extremely long term, but still, it's to get cash out of you). In the past few days I watched videos on these 3 channels:

- Baumgartner Restoration

This one is about art restoration. I like to watch it on the side while doing something else, or right before sleeping, it's amazing, you should try it. Will I ever buy any art related stuff? Unlikely... Art restoration? Unlikely...

- Mathilda Hogberg

Someone who did an MTF transition. I got her in my suggestion feed, no idea why, but I was interested in the result of a full transition so I decided to watch it and then some of her other videos. I'm not considering transitioning, I don't have her taste, I don't have much in common with her. Would I buy anything in this sphere? Unlikely.

- Micarah Tewers

She makes dresses and is quite interested in historical dresses. She is funny, has a quirky personality, is quite resourceful, and what made me enjoy her content is a rant she did about the historical dresses of a recent winner of a Best Costume award. I'm not considering buying a dress, nor am I interested in making one (I am male, just in case that makes it more relevant to you).

So in total, for these 3 channels, ads would have been a waste on me considering the content. I watched at least 3-4 hours between all 3 channels and they deserve to be paid for producing content that I enjoyed. Luckily I am a Youtube Red subscriber, so they'll get paid whatever happens, but if I wasn't, they would have gotten nothing out of my views.

I should receive a Ryzen 5 3600 tomorrow (not an XT sadly, but on $/performance a 3600 is better). Google knew it, perfectly well believe me, my suggestions confirm it ;). If any reseller had ads over one of these videos, I would most likely have bought it from them. That's what happened when I bought my first netbook in 2010, via Google Ads on Gmail (I never mentioned it in any email; Google got it from a previous search). So if I had an ad about that processor or any related component, well, maybe Baumgartner Restoration, Mathilda Hogberg or Micarah Tewers would have gotten a bit more from the ads I viewed ;).

> Useless as it became, people stopped clicking

That's a big assumption. Google is a trillion dollar company and that's not from Youtube, it's all from ads. You don't become a trillion dollar company by doing worse, but by doing more. Why would you then assume it made less money, and not that it was simply a way to make more money? It's even stranger that you then consider their next step a solution to make more money... Both tracking and mixing ads with results were ways to make more money.

> Are the original ads harmful?

What? Harmful? I'm really interested in your thought process; that's the third time now that I've asked myself how you got to a conclusion, in 3 paragraphs. The original ads were not as useful. I don't care about dresses, so please don't show those ads to me: you will lose screen real estate and are assured to get no click from me, so you are losing money by providing me a service that I can't pay for. Show me what I may be interested in buying, and that will most likely be technology related, even on an art restoration channel.

> And you are protecting from that by surrendering extra information?

I am protecting? Like myself? No, I protect the service and its likelihood of getting money out of me. That extra information, I give it voluntarily already. If I search on Google, I am telling Google that I am interested in these things. It's not extra information.


The 9% is probably people trying to opt out by selecting the minimum amount of cookies and hitting the big green "Accept all cookies" button instead of the small gray "save changes" button.


In this case, the form is very simple and clear, so I don't think you can easily brush them off. It could be affected by some people really not caring/understanding. It could also be affected by some people wanting to support the website.


I honestly kind of like getting instagram ads relevant to my interests. Right now I'm getting a lot of solar charge controllers.


But that's not how advertising normally works.

They don't want peanuts from solar charge controller companies, they want big money from say political campaigns whose ads or news articles will read "<opposing-politician> wants to defund solar" or "<our-politician> loves solar".


Consent to cookies? No big deal, my browser will erase them as soon as I close the window (and I don't reuse windows), so I click on what is easier.

They never ask for consent to track on the general sense. So I use extensions to stop that.


An interesting paper here on the influence of deliberate dark patterns in these consent boxes.

https://arxiv.org/pdf/2001.02479.pdf


Thanks! This is the first proper breakdown I've seen.


The latest trend in dark UI GDPR patterns is presenting an endless list of things that will track you.

It starts with "allow necessary cookies" enabled and the rest disabled, and presents 3 buttons: two small gray ones and a big green one. Unless you're REALLY careful, you'll end up accepting all cookies.

The text is something along the lines of "Cancel", "Save Changes", and finally the big green one is "Accept all cookies".

The trick here is that the small gray "save changes" button is actually the one you want, as the "accept all" effectively enables ALL cookies.


The problem is that companies which use these kinds of dark patterns are very likely to ignore your settings anyway in the future. Like, for example, Amazon.

What then comes to my mind is the thought: "Well, would I try to do an ongoing delicate business with a well-known notorious asshole and trickster?". Obvious answer is "no".

So, in most cases, I just leave that site.


Anything that refuses to let me view content without setting cookies is ignored here. Sorry, no matter how good your content is, it’s not that good.

Shady sites that use dark patterns are about to go the same route.

And I have Privacy Badger, uBlock Origin, as well as Pi-hole/pfBlockerNG (not at the same location). Our phones have ad blockers as well, though I doubt they’re as effective.


I was quite pleased that the cookie banner example was blocked by uBlock. As it should be.


Quantcast claimed a 91% "any consent" rate and an 80% "total consent" rate for a sample point of a more unethical implementation: https://martechtoday.com/quantcast-reports-more-than-90-of-v...


How do you remember the user has opted out of cookies without storing a cookie?

Does anyone have an example of a site with a clear opt-in/out? I'm curious about what's actually stored on my machine in each case. And of course it's not really what's stored locally that's the issue: it's what is stored on their server.


Opting out of tracking is not the same as opting out of cookies. Per the GDPR a site is still allowed to set any cookies required for the site to work, including a "tracking consent cookie".
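For what it's worth, remembering a "no" takes exactly one strictly necessary first-party cookie. A minimal sketch in Python (the cookie name and lifetime here are my own illustrative choices, not anything mandated by the regulation):

```python
from http.cookies import SimpleCookie

def consent_cookie(choice: str) -> str:
    """Build a Set-Cookie header that records the visitor's choice.

    Hypothetical sketch: a cookie like this only records the consent
    decision itself, so it counts as strictly necessary and can be set
    without asking.
    """
    cookie = SimpleCookie()
    cookie["tracking_consent"] = choice  # e.g. "denied" or "granted"
    cookie["tracking_consent"]["path"] = "/"
    cookie["tracking_consent"]["max-age"] = str(180 * 24 * 3600)  # ~6 months
    cookie["tracking_consent"]["samesite"] = "Lax"
    return cookie.output(header="Set-Cookie:")
```

On the next request the server reads `tracking_consent` back and, if it says "denied", simply never loads the trackers. Nothing about the decision needs to live server-side at all.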


> I wonder what results you would see for something like yahoo, the daily mail, reddit, or other sites that heavily rely on ad revenue, which attempt to force the user to accept the cookies through non-obvious no buttons, or long processes to opt out of cookies.

I wonder what the best way of shaming/reporting these examples is, and whether an effective way exists[1]. If you have any ideas, sources, anything—please share.

GDPR consent screens are pure dark UX.

Yes, sometimes the information/actions are buried under a dozen useless screens, but in some cases the changes are fairly subtle, like reversing the 'active' and 'inactive' states of the Reject and Accept buttons on mobile, or things glitching, just a little bit. There's an obvious effort put into making things not only complicated, but also buggy.

So _fear, uncertainty and doubt_ all over again.

As someone who worked both on GDPR/adtech/consent frameworks and privacy, I still struggle to understand how some of these things are legal and what we can do to change it.

[1] Digression: Publisher pressure is one potential solution, but a limited one. Some premium publishers don't want to be associated with this kind of creepy UX. But, at the same time until we've completely killed cookie-driven behavioural targeting (and the alternatives, behavioural, not contextual) higher CPMs/ad revenue will keep the other publishers quiet. Ironically, behavioural targeting is mostly bullshit, and CMPs inflated, but that's a different story.


It's not clear to me that the dark UX forms are legal.


It doesn’t seem to matter: https://arxiv.org/abs/2001.02479


Huh? That says that only 11.8% are lawful.

Obviously this is all pending enforcement actions actually taking place, but it's not obvious that will never happen.


Thing is, if someone can show that they were tricked into giving consent when they were not willing to, then you're still in violation of the GDPR. So those convoluted forms might not give those companies the legal protection they hope for.

And yes, I'd assume the "phony consent" rate to be much higher, because in some cases I also cannot find the no button and/or accidentally tap on the huge yes button on my phone when my intention was to click the tiny no link next to it.


I think those are just designed to show some attempt at compliance; muddying the waters of any future regulatory action. Perhaps because their business model is unsustainable without pervasive tracking.


How do you prove you were tricked into giving consent to tracking?


I think it is responsibility of the company to show that meaningful consent was freely given.


Record the screens of 50 people whom you instruct to withdraw consent. If 5+ of them fail, you have made your point that the UI is too misleading to provide legal compliance with the GDPR => they are now liable for high fines.


Yes, exactly.

I hoped that he would repeat the experiment with a tricky one, like the horrendous forms served by Quantcast. In those, if you click "Reject all" nothing happens! How that is even allowed boggles my mind.


That's the website's fault. The Quantcast form is highly configurable; you can choose to display an "I do not accept" button on the first screen.


It really shouldn't be possible not to display that button on the first screen, because opting out should be as easy as opting in. Unless the first screen also doesn't have an 'accept' button, but I've never seen that.


> How is that even allowed boggles my mind.

It is not. But the GDPR is sorely lacking enforcement with regard to tracking consent. The last time I checked, only 3 cases had been brought (all in Spain), and all of those would already have been illegal before the GDPR, in my non-lawyer opinion.


It's a screw-tightening process. We've only just started, but proper enforcement will surely follow at some point.


It isn't.


> attempt to force the user to accept the cookies through non-obvious no buttons, or long processes to opt out of cookies

If I understand correctly, this sort of trickery is forbidden by the GDPR, but so far no-one has seen any consequences for doing so.


> so far no-one has seen any consequences for doing so.

This is precisely the problem with the GDPR to date.

Last August the ICO (British regulatory body) stated that you can't run Analytics such as Google Analytics without a GDPR standard of consent. They've yet to enforce this despite tens of thousands of non-compliant websites.


Meanwhile, there has been talk of permitting first party analytics cookies without requiring explicit consent for at least as long as the GDPR has been around. IIRC, even the ICO previously indicated support for that position, though I can't immediately find a reference for that now.


> I wonder what results you would see for something like yahoo, the daily mail, reddit, or other sites that heavily rely on ad revenue, which attempt to force the user to accept the cookies through non-obvious no buttons, or long processes to opt out of cookies.

I don't wonder about their numbers, it's obvious their numbers are good for them. What I wonder is how are they getting away with something that is clearly illegal.

As already explained, in a properly implemented GDPR consent prompt, it should be equally easy to accept, decline or ignore tracking, with the default being decline if the user doesn't give explicit consent.


A lot higher, that’s why they are shady in the first place.


We should have GDPR settings in the browser.


You mean, like a checkbox that sends a `DNT` header set to `1`?

I think both the old cookie law and the GDPR kind of (directly or indirectly) include that case†, and sites know that they don't even need to display the dialog if they receive the header.

† the consent (or rather, intent not to consent) is explicit, and although non-interactive at the site level it was interactive at the browser level until MS defaulted it to `1`. Now I'm wishing it were like those notifications/location/webcam/mic access and the dialogs were required to go through the browser itself.
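Honouring such a signal server-side would be trivial. A hedged sketch (the helper name is hypothetical; the `DNT: 1` meaning of "do not track me" is from the actual header):

```python
def needs_consent_prompt(headers: dict) -> bool:
    """Decide whether a tracking-consent dialog is needed at all.

    Hypothetical helper: a site that honoured DNT could treat DNT=1 as
    an explicit refusal, skip the dialog entirely, and set no tracking
    cookies for that visitor.
    """
    if headers.get("DNT") == "1":
        return False  # user already said no; nothing to ask
    return True       # no signal, so explicit consent must be sought

# Usage: a DNT=1 visitor never sees the banner.
needs_consent_prompt({"DNT": "1"})  # False
needs_consent_prompt({})            # True
```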


> I think both the old cookie law and the GDPR kind of (directly or indirectly) include that case†, and sites know that they don't even need to display the dialog if they receive the header.

Then I think that's the best kept secret of the industry.


Technically, with the GDPR, defaulting to 1 is the only correct option. MS were only ahead of their time :)


GDPR affects the server default. A header that's supposed to show user intent still needs to default to blank.

A law enforcing DNT would be good, but honestly it would change the semantics of the header.


Block any third-party cookies, and then block third-party JavaScript altogether. Problem solved.

Oh, there's nothing like that in the so-called "HTML standard"? Maybe, just maybe, Google being the standards body might have something to do with it, when Apple has been blocking third-party cookies for years now [1], and is in the process of banning browser APIs that can be used for fingerprinting.

Best of all, this might rollback all those HTML5 APIs that have no business being shipped with browsers, and bring back the web content we want.

[1]: https://webkit.org/blog/10218/full-third-party-cookie-blocki...
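For what it's worth, uBlock Origin's dynamic filtering (its so-called "medium mode") already approximates this today; if I remember its rule syntax correctly, these two global rules block all third-party scripts and frames by default:

```
* * 3p-script block
* * 3p-frame block
```

Sites then break until you explicitly allow the third parties they genuinely need, which is arguably the point.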


We did, and now they're going away. The "Do Not Track" option is being removed from Safari because it is used (together with other fingerprinting techniques) to track individual users.


DNT has been sabotaged from the beginning. Defaulting it on leads to sites saying "we don't accept DNT".


Most sites already ignore the "do not track" flag from your browser.


The difference is that "do not track" isn't enforced at all.

It would be much better to make a browser-side dialog like it's done with location tracking and desktop notifications, and also provide a checkbox in the settings. Most importantly, it would take away control from shady websites to implement it on their shady terms (though admittedly giving that control to Google's browser may not be ideal either).


Reddit doesn't have a consent form.


[flagged]


Any contract should be a meeting of minds, not a con. So on the other end of the spectrum from the manifestly dumb user is the woefully deceitful company. The latter will make the button very small and low contrast, place it in the middle of a wall of EULA text, and have it reject keyboard focus so pressing TAB won't land on it. I think we can blame that company.

The old-school equivalent was small print, and in many jurisdictions there are limitations to how small a font can be used (e.g. no smaller than half the main body size and no smaller than 8 point).


I signed up for a new account on a fitness website yesterday. They track the health, food and diet data you enter. Anyway, during sign-up they had an opt out, and I chose to use it; however, the opt-out process then took over the screen with a modal window, which showed a loading bar and took about thirty seconds to complete... But guess what... There was a big CANCEL button. I couldn't perform any other sign-up action while opting out.

This was one of the worst dark patterns I've seen. PS: the app was MyFitnessPal, which I registered for on a recommendation, but it felt so shady.


There's a Germany-based competitor to myfitnesspal called Yazio that has a good privacy policy. Don't know if their database of nutritional information is as good in your location though.


In fairness, when I went to that site as an experiment, it did eventually appear to save my preferences. And in more fairness, go look at the list of cookies and especially under advertising. Yeah, that's a lot fucking bigger than it should be, and a minority allow opting out. But imagine the implementation in your head: you've probably got to make some web call (REST, WebSocket, whatever) for each one of those.

In summary, I could see how that might take a minute or two without a requirement for evil.


I use "LoseIt" since MFP went overboard with having features and not a single UI/UX review.


It really is impressive how many taps it takes me to do anything in MFP. Why are there so many damn menus? I just want to input calories and see a counter. It is so overly complex I just switched back to a legal pad and doing the math by hand; it is faster.


for apps like this why does anyone sign up with their real name, email, and/or demographic info? get a throwaway email and there you go.

i can't remember the last time i used my real info or email.

although there was a case just last week where a site "needed" my phone number. in 2 seconds flat i decided i didn't need that service.

of course without a legit email you can't, or it's much++ harder for account recovery, but i can live with that.


Fundamentally, the browser is the user's agent. Storing cookies, running tracking scripts, etc. should be controlled by the browser. Some browsers may take a strict "block everything" approach, some may be relaxed, and some may harass the user with prompts. Users are free to choose the appropriate browser.

Depending on websites to limit tracking on their own is very difficult, since it is inherently against many websites' business models. They will keep trying to bypass the rules.


This is nice in theory, except the amount of fragmentation in 2020 is huge.

The average user simply cannot switch browsers.

The likelihood of encountering a website that breaks even switching from Chrome to Firefox is too much for any normal user to want to bother for just these purposes. They'll switch back the minute they find a website that doesn't work in their new browser, if they even get that far.

So unless you're suggesting "Chrome should make it more obvious how to clear cookies automatically and/or not accept them at all" (which seems like quite a UX challenge itself), saying "just tell everyone to switch browsers" just isn't going to work I don't think.

EDIT: to finish the thought -- yes I agree, browsers should provide users with choice, and switching browsers should be available to anyone who is unhappy with the way their browser treats their data, but that can't be the only way -- otherwise, the average user will get left behind.


I agree that switching browsers is too much to ask. But forcing websites to act against their own interests is not going to work either. It's like forcing a tiger to treat a deer with love and compassion. It's just unnatural. Do you seriously think Facebook is ever going to genuinely give users a way to opt out of tracking? Or Google? Never. They will always try to weasel out of all the requirements, always adhere to the letter but go against the spirit of these regulations.

At least browsers don't have that inherent agenda against their own users. There will always be a Firefox which genuinely strives to serve its users, rather than exploit them.


To be fair to Chrome, it's actually incredibly easy to clear cookies and to specifically block them.

Just click the lock next to the url in the url bar.


> This is nice in theory, except the amount of fragmentation in 2020 is huge.

The GDPR popups are not nice in theory or practice.


I think everyone agrees with this, and is now at the "what are we going to do about it" phase.


There is only so much that a browser can do. It is fairly normal for some cookie values to be necessary for a website to function. Browsers should not be expected to differentiate these "necessary" values from values that support other features.


Browsers do provide these controls to users. You don't have to switch between them. More fine grained customization is possible with extensions, assuming you can trust extensions authors and the browser you're using supports them.


I remember reading a recent article claiming Chrome tracks users even in incognito mode. I doubt those buttons to clear cookies and data do anything that hampers Google's ability to track you.


Surprised no one has mentioned the systems that, if opted out of, redirect users to a 'privacy policy' page and won't allow access to content without opting in.

Or the sites that don't bother with compliance and just show a message to the effect of 'this site operates under a jurisdiction that may have different privacy laws to your country' and leaves it at that.


Those very much fall into the shady category.

The first option, the redirect, is not GDPR-compliant, because then the "consent" cannot be considered freely given, and thus is not valid.

The second option is really borderline, and could work for a US-only news website, for example (arguing it doesn't cater to European residents), but would be non-compliant for a business which knowingly serves European residents.


> The first option, redirect, is not GDPR-compliant, because then the "consent" cannot be considered freely given, and thus is not valid.

I don't quite understand the reasoning on that one. In Europe, and pretty much everywhere else, there are a bazillion interactions every day in the form of one party offering to provide some good or service only if the other party agrees to something.

For example, the grocery store will only give me food if I agree to let them charge my credit card.

Why is consent considered freely given when I give someone money for a good or service because if I do not do so they will not provide the good or service, but not freely given when I click "agree" on a privacy policy disclosure because if I do not do so they will not provide the good or service?


Basically, GDPR forbids bartering of PII/tracking information with goods and services.

Why? Presumably because most users don't see the real cost of giving away their personal data (either they never recognize the cost, or see it too late).

To make sure this is held up, the GDPR uses some tools; one is that consent must be freely given, the other is a ban on tie-in sales: you cannot demand PII from users that isn't necessary for the service you provide.

If you provide news or stories as a service, you cannot demand location data from your users, because you can provide the service without that.


So, obviously, I Am Not a Lawyer, but this seems easy to explain: when you go to the grocery store and buy something, you tacitly enter into a contract (which is exchanging money for food), where both parties agree.

For someone to use your personal information, they need one of the 6 legal bases to do so under the GDPR. One of those legal bases is having a contract with you (in which case, the contract will define what's allowed and what's not). Another of those legal bases is "consent", which is the one being discussed the most, as it is generally the only one ads can hope to use, so let's ignore the 4 others (legitimate interest, public interest, vital interest, legal requirement; you can easily see why trackers for ad targeting don't fit any of those).

It is generally admitted (or at least I think it is, feel free to dig around for a better source for or against that assertion) that visiting a site is not entering into a contract (probably because a contract has to be fair, and giving up personal information without your knowledge just by visiting a site isn't actually fair? I don't know that, IANAL).

That means the only legal basis ads companies (or the sites that host them) have to use your personal data is your consent, which is strictly defined in the GDPR (and other posters have discussed how this definition is mostly ignored).


Under GDPR consent can’t be “freely given” when it’s bundled as a condition of service unless the consent they’re asking for is necessary in order to perform the service. To use your example:

The grocery store doesn’t need to ask if you consent to paying for an apple because if you didn’t consent there wouldn’t be any transaction to perform.

Now if you paid for your apple and the cashier said okay hand over your phone so I can poke around a bit because there’s some fine print that says by nature of walking through the front doors you agree to allow the store to look through your phone. Did you consent to that? Of course not.

Consent wasn’t “freely given” because the store is requiring you disclose information (the contents of your phone) as a condition of service (you can’t even walk through the door without “consenting” let alone make a purchase) and that information isn’t necessary in order for the store to complete the transaction.

GDPR says they have to ask you first (usually in the form of a giant irritating banner as soon as you walk in the door) and that if you say no they have to let you buy your apple anyway.


> GDPR says they have to ask you first (usually in the form of a giant irritating banner as soon as you walk in the door) and that if you say no they have to let you buy your apple anyway.

Can you link to source for this (the part that says you can't deny access)?


Perhaps I’ve oversimplified a bit. GDPR has a paragraph that’s often called the “coupling prohibition” - Article 7(4):

> When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract.

It somehow says a whole lot and not much at the same time. Since every member state and everyone who has to comply needs to interpret what GDPR means there are various “recitals” that offer official guidance. One of those is Recital 42 - Burden of Proof and Requirements for Consent[1] which says:

> Consent should not be regarded as freely given if the data subject has no genuine or free choice or is unable to refuse or withdraw consent without detriment.

So a person must be able to refuse consent “without detriment” and the company is meant to provide an equivalent, but not necessarily identical, service to those who do not consent.

What that means exactly is, of course, the subject of much litigation. For example, is it a “detriment” to require a subscription fee of those who do not consent to information sharing? So far one ruling (Austria) has said no, provided the fee is reasonable, while another (UK) has said yes, the equivalent service must also be free.

As far as how the coupling prohibition should or will apply to a company like facebook - where harvesting user data is the entire business model - I think that is yet to be clearly determined. As are most of the nuances and technicalities in GDPR.

Edit: I should also note that consent is just one avenue to legally allow a company to process user data under GDPR. It’s not the only avenue.

[1]https://gdpr-info.eu/recitals/no-42/


This really shouldn't be left to interpretation; both Article 7(4) and Recital 42 define what "freely given consent" is, and in no way limit the actions I can take as a site owner. It is clear that a "cookie wall" isn't considered "freely given consent", so you can't process personal data based on that.


Correct you can’t process personal data based on it. And the underlying implication is that none of the consent you’ve obtained via a cookie wall is valid because you haven’t given any users the opportunity to “refuse without detriment” (because their options are to consent or see nothing). So the information you’re processing on behalf of users who clicked “I agree” - even the users who do in fact knowingly and willingly agree to the information processing - might be lacking a legal basis.


I don't see how the first option is not GDPR-compliant. If the privacy policy page doesn't process personal data there is no consent needed.


I'm going to quote Article 7.4 of the GDPR here:

"When assessing whether consent is freely given, utmost account shall be taken of whether, inter alia, the performance of a contract, including the provision of a service, is conditional on consent to the processing of personal data that is not necessary for the performance of that contract."

Preventing you from seeing the page you request if you do not consent seems a lot like the provision of a service conditional on consent. Obviously, that's for a judge to decide, but the law seems very clear from my perspective (IANAL, that's not legal advice).


>Preventing you from seeing the page you request if you do not consent seems a lot like the provision of a service conditional on consent.

You are correct. However, the prohibition against "a service conditional on consent" can be overcome (inter alia means "among other things") and one of the ways it can be overcome is if the user is given a choice to instead select "a consent-free equivalent service for a reasonable remuneration."[1]

This is the result of a ruling by the Austrian Data Protection Authority (DPA) evaluating a case in which users could access an Austrian newspaper by either (a) consenting to personalized advertising or (b) paying a subscription fee of 6 Euro / month.

The DPA found that these options were not considered a "significant detriment" to users (i.e. it was not considered coercive) and was therefore valid.

It's worth noting that the UK found otherwise, saying that "for the user to have a genuine choice, a consent-free alternative would have to be offered free of charge."[2]

-----

[1] Austrian Data Protection Authority (case no. DSB-D122.931/0003-DSB/2018)

[2] Validity of consent coupled with free online services - Chair of EDPB opens a path https://www.lexology.com/library/detail.aspx?g=5125ca7c-84fa...


I'm no fan of advertising but The UK ruling seems absurd to me, what other form of business is expected to provide their service for free?


I have read this and (IANAL) view this as a definition for what is "freely given consent". If you force users to give consent in order to use service, it isn't freely given consent. It doesn't say that you must provide the service.


We'll have to agree to disagree. Perhaps someday a judge will rule on this, but I'm not aware of any case on this particular point yet.


I think the heart of the matter is "not necessary for the performance of that contract".

Privacy advocates argue that it's not necessary to track people to perform the purpose of a site (like, displaying news items).

Publishers argue that it's necessary to track and display ads in order for the business to be sustainable, without which they cannot continue serving news (+ ads).

I guess we'll have to wait and see what the courts have to say about this.

Maybe publishers will have to show how much their revenue suffers if they stop tracking.

Another consideration is the demand for "privacy by design and default" (https://gdpr-info.eu/art-25-gdpr/). It might be hard to argue that business built on tracking users fulfills the criteria for that.


I might not be understanding your point but:

- consent requires, among other things, that permission be given freely;[1]

- so, if coercion is involved then consent does not exist;[2]

- and, preventing user access to content unless that user agrees to be tracked is likely considered to be coercive.

Therefore if a user grants permission to be tracked only in order to gain access to that site's content, that granted permission would not be considered consensual because that permission was not given freely.

(the above is not legal advice but I do have a law degree; I also work for a NGO that produces apps that teach people about consent)

----- [1]GDPR Article 4(11) [2]GDPR Recital 42


(I thought I could still edit the above response but it looks like time has expired)

I wanted to add that the freely given requirement can be very granular / fact intensive. This later comment shows one way (remuneration) the above can be accomplished without coercion: https://news.ycombinator.com/item?id=23762945


> Or the sites that don't bother with compliance and just show a message to the effect of 'this site operates under a jurisdiction that may have different privacy laws to your country' and leaves it at that.

That's potentially but not necessarily compliant. To a large degree, it depends on the intent of the website's data controller.

* GDPR Art 3(2) discusses the territorial scope of data controllers that are not in the EU. Their data processing falls under the GDPR if they are offering services to people in the EU.

* GDPR Recital 23 discusses potential factors that indicate an offer. Blocking EU visitors is not necessary: “Whereas the mere accessibility of the controller’s, processor’s or an intermediary’s website in the Union, of an email address or of other contact details, or the use of a language generally used in the third country where the controller is established, is insufficient to ascertain such intention, factors such as the use of a language or a currency generally used in one or more Member States with the possibility of ordering goods and services in that other language, or the mentioning of customers or users who are in the Union, may make it apparent that the controller envisages offering goods or services to data subjects in the Union.”

* The EDPB has issued further guidance on the territorial scope. In their guidelines 3/2018 [1] they spend a lot of ink discussing this “targeting criterion”, and provide some clear-cut examples. Of course, that falls short of actually interesting examples of edge cases :)

[1]: https://edpb.europa.eu/our-work-tools/our-documents/riktlinj...


I would say that it means 9% of visitors "click away" any banner they see without reading it.

And usually clicking "yes" will do away the banner in the most hassle free way.

I am surprised the number is so low.

I surely click "Yes" on any banner immediately without reading it.

My guess is that the number was so low because his banner (by its simplicity) looked unusual enough that many people read it.

With the typical spammy pseudo consent banner, the number would probably be much higher. Even without dark patterns. People are just trained to click yes.


It'd be interesting to perform another test, with the "No" and "Yes" buttons swapped (and the banner text updated accordingly).


well, that is the article's point. i mean one of its 2 main points. (the other being itself an ad/SEO).


The article says mobile users are more likely to engage with the banner. Rightly so, cause it takes up precious screen real estate. What are some ways I can protect myself more when browsing the web through my phone?


Depends on the phone, but generally you can't protect yourself as well as on PC, because phones (sadly) are locked down devices running proprietary software. Best you can do is probably DNS-level blocking.


Firefox on Android with ublock works just fine.


Additionally, set Firefox to clear all data when selecting Quit from the menu. It's then easy enough to do a reset of any lingering bits every now and then.


I believe Firefox focus on iOS removes many banners or at least makes it so a consent to tracking doesn’t effectively do anything, by disabling tracking/making it easy to wipe.


if it's an android, I'm very happy with firefox + ublock.


I helped implement a consent tool in our SaaS product that has users from a lot of different regions, many of which are more technical/"developer persona", and the tool we use isn't shady (it depends on the region, but for example in Germany it is opt-in by default and the options are presented immediately rather than behind a modal).

During the initial stages we had worried that consent rates would be low, and while Germany was "low" at around 70-75%, we found that overall consent rates across all countries were 90%+ with exceptions for Germany, France, and a couple other European countries. This was consistent across both the application and our marketing website, and across 10s of 1000s of users.

So while I don't doubt that the author ran an experiment and got these results, I strongly disagree that you will get "only 9%". Specifically, these lines in the blog post:

> And if you give them an easy way to ignore your banner or to say no to be tracked, most of them will simply do that.

> Most web users will simply select “no to tracking” once in their browser and the browser will block all the trackers for them as they surf the web.

are not correct, at least in my experience, and absolutely can't be used as blanket statements like this. Of course my anecdote is, well, anecdotal, so take it with a grain of salt, but I don't think it's fair to say that users will simply reject it no matter what; it really depends on both context and trust levels.


I would say of 9% who gave consent the vast majority gave their consent through error, deceit or because they don't know what tracking really means and they wanted the service anyway.

Why not outright outlaw opt-out tracking and be done with this silly state of things?

There are legitimate uses of tracking and any company would still be able to provide it for the users, but you would have to have legally binding contract and opt in for the service.


Tracking already has to be opt-in according to the GDPR. The issue is that the GDPR is not enforced, so sites get away with non-compliant consent prompts.


The issue here simply seems like enforcement needs to be formalized to be as cost-effective and simple as possible. It starts with fines, ramps up to full investigations if bad faith is shown. Turns out when you write laws they also need to be enforced.


Ironically, if you disable cookies in your browser you won't be able to see the images in this blog post.


Works for me on Firefox mobile with cookies blocked in the settings.


Works for me in Safari with cookies blocked too.


Not in my case (Chromium, uMatrix block cookies)


I'm one of those strange people who will happily opt in to your Measurement cookies and opt out of the other stuff.


Unless they do browser fingerprinting, the number will be overinflated by all the people like me who clear their cookies regularly, if not on every browser session, and are therefore re-asked consent on every single visit.


I'd imagine the people who do that to be a very small percentage.


I know many people who only browse in incognito mode.


I do, too, but fingerprinting still works for these people. In fact, using Incognito Mode is part of the fingerprint.


that should not matter, the obvious choice would be to ask every time until they say yes. It is also the simpler choice, if the user opts out of all tracking then you cannot track their choice.


1. opt out is not GDPR-conformant. 2. saving a cookie no_consent=true does not require consent by the user; that's the type of cookie easily classified as functional/necessary
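A minimal sketch of that idea, assuming a cookie literally named `no_consent` (the name and attributes are illustrative, not prescribed by the GDPR):

```javascript
// Persist a consent refusal in a functional cookie so the banner is not
// shown again. The cookie records only the banner decision and
// identifies no one, so it qualifies as strictly necessary/functional.

// Serialize the choice into a cookie string (a browser would assign
// this to document.cookie).
function consentCookie(granted) {
  return `no_consent=${granted ? "false" : "true"}; Max-Age=31536000; SameSite=Lax; Path=/`;
}

// Read the stored decision back from a document.cookie-style string.
function hasRefusedConsent(cookieString) {
  return cookieString
    .split(";")
    .map((c) => c.trim())
    .includes("no_consent=true");
}
```

On the next page load, a script would call `hasRefusedConsent(document.cookie)` and skip both the banner and every tracker if it returns true.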


if the user opts out of your opt-in pop up, then :)

> saving a cookie no_consent=true does not require consent by the user. that's the type of cookie easily classified as functional/necessary

It looks like an unusual choice for the website. Assuming that the website intends to use all the trackers it is asking about, going out of its way not to ask the user to be tracked seems counterproductive.

Sure, the website is also interested in not annoying its users to death, but then it sounds like a better idea to offer less invasive prompts and more informed choices than a blanket yes/no. Here again I feel like GitLab made fantastic choices for their prompt.


I tried your website Metomic and I really like the preview example! The preview, overlay, etc. is very well done.


I hate ads as much as everybody else, and this reads to me like "given the choice only 9% of people would pay for their meal", should we forbid charging for food then? Probably not.

As sad as the current ads-powered internet is becoming I haven't seen any promising viable alternative, and sure non-tracking-based ads is not the same thing as having no ads, but it significantly moves the needle in that same direction in terms on revenue.


A meal paid for with money is a transaction where both parties understand what’s being exchanged. Not paying for it is criminal.

A viable alternative: show non tracking ads and see if it keeps the lights on, otherwise shut down?


> A viable alternative: show non tracking ads and see if it keeps the lights on, otherwise shut down?

I'm not sure that'd be viable in general. What would the global economic impact be if all the websites that today barely manage to keep the lights on because of effective ads just disappeared? Maybe the impact could even be positive; by disallowing politically very-targeted ads, among all other kinds of targeted ads, maybe we can prevent idiots from being elected. But if I had to guess, I'd say I wouldn't like to see those websites disappear.

Like imagine if YouTube disappeared because it can't make enough money to host all that staggering amount of content.


I wouldn't miss it. If they can find a viable non-tracking business model, I'd imagine most would. People used to pay for newspapers so there's pretty good precedent.


I have friends who a few years ago switched away from WhatsApp because it was charging 1 buck per year, and for that amount of money they gave you instant global unlimited text messaging, and those same people would have happily paid 10x that for a one-time meal.

I think you are grossly overestimating how much the average person is willing to pay for a digital service.

And I don't think newspapers are a good precedent given how nobody buys newspapers anymore, compared to pre-internet levels at least.


If you don't miss it, then why bother forcing them to do this? Just don't visit their site?


That metaphor falls apart on multiple levels. Paying for a meal is a transaction that both parties explicitly agree to. Tracking in its current state is mostly done without the consumers awareness/consent.

Tracking is also not required for advertising, it only (supposedly) increases the effectiveness of it at the expense of the user. A more apt metaphor would be "restaurants are using cheap toxic chemicals to increase their revenue".


I agree with you on the non-transactional nature of advertisement as far as viewers are concerned; the metaphor is perhaps only effective at illustrating how, given the choice, most people would prefer to have free stuff at the cost of everything else (e.g. free meals or free content and services).

Tracking is not strictly necessary for having ads, but for having _effective_ ads I would argue that, at least in some cases, it makes all the difference. For example, I don't care one bit about cosmetics; if a cosmetics company advertises to me it's going to waste its money, and if the company has an increased chance of wasting its money then the ad space is worth less, meaning the website owner gets paid less, meaning that eventually, after some threshold is reached, offering the content or services via the ads model becomes unsustainable.


Advertising has been effectively done without tracking for decades. The trick is to do filtering at the content level, rather than at the consumer level. Odds are if you don't care about cosmetics, you won't watch videos about cosmetics or visit websites talking about cosmetics. Co-locating advertisements with relevant content performs this filtering in an effective manner without requiring users to be tracked.

Perhaps tracking users is more effective, perhaps it is not. Regardless, boundaries need to be set to enforce ethical behaviour if the unethical behavior is more profitable. If that means certain companies or business models won't survive: so be it. There is always a trade-off to be made between ethics and business. After all, the exact same arguments could be made against outlawing child labor or any of the past atrocities businesses committed in the name of profit. Businesses that cannot survive ethically will die to make room for businesses that can.


It is possible to show content based ads rather than ads based on your tracking. It is perfectly reasonable for a website about motorcycles to show ads for motorcycle products, and this could be done without tracking the user.

I don't want that one piece of furniture I (or my SO using my computer) looked at on Wayfair to follow me to the motorcycle website. I definitely don't need an ad empire following all of my activity on the internet and knowing that I'm a 35 year old man that wants to buy a motorcycle, needs a plumber, wears size 32 pants, and looks at green couches among all the other data they are collecting.


Sure but that doesn't work for everything, should Rockstar Games pay 100x what it currently pays for advertisement for advertising GTA DLCs or whatever to people who don't care about gaming? Should perhaps websites like YouTube not exist because people don't want to pay for stuff and the business is not sustainable via non-targeted advertisements (if that's the case)?


Yes it does work for everything- a website can label its content with keywords and advertisers can choose to advertise on sites with specific types of content. So IGN signs up for an ad network and says that their content is about "video games." Then Rockstar says to the ad network, "run my ads on sites about video games." A person reading IGN is clearly interested in video games and we don't have to do any tracking to figure that out. They could even go so far as to show Playstation ads on articles about Playstation games and Xbox ads on articles about Xbox games. The ad can be targeted by the content a person is looking at, not where they've been in the past. If anything, a video game ad on a video game website is more effective than showing me a cosmetics ad on a video game website.

So companies can still have effective ads on relevant sites, ad networks can still facilitate the in-between, and users don't have to be tracked. Advertisers already use Google AdWords to show ads to people searching for specific keywords, this is just an extension to show ads to websites with specific keywords. Websites are already working with keywords for SEO, it's not even extra work for them.
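The keyword-matching scheme described above can be sketched in a few lines (all campaign names and keywords here are made up for illustration):

```javascript
// Contextual ad matching: advertisers target declared page keywords,
// not user profiles, so no tracking or browsing history is involved.
const campaigns = [
  { advertiser: "Rockstar", ad: "GTA DLC", keywords: ["video games"] },
  { advertiser: "CosmeticsCo", ad: "New lipstick", keywords: ["cosmetics"] },
];

// Return the ads whose declared keywords overlap the keywords the site
// itself declares for the current page (e.g. its existing SEO keywords).
function adsForPage(pageKeywords) {
  return campaigns
    .filter((c) => c.keywords.some((k) => pageKeywords.includes(k)))
    .map((c) => c.ad);
}
```

An IGN article page tagged `["video games", "playstation"]` would get the game ad and not the cosmetics ad, with zero information about the reader.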


I wonder how much of this is the Lizardman constant. Do people even exist who actually volunteer to be tracked in exchange for nothing?


No, the entire transaction relies on coercion and/or deceit. If companies aren't going to listen, maybe it's easier to just ban any sort of unnecessary tracking across the board in order to cut down on enforcement costs, so it stops becoming a process and more of a draconian whip. We tried the carrot, it's time for the stick!


If I need to figure out what they’re actually asking, I’m out. This article makes it clear the majority of these consent boxes are not really compliant. On a side note, if someone says they’re in advertising I automatically hate them. This is terrible, I know, but they’re usually pompous assholes and that’s how I feel about the industry as a whole.


> if someone says they’re in advertising I automatically hate them

You might enjoy Bill Hicks’ take: https://youtu.be/tHEOGrkhDp0


The real question is: is the other 91% not being tracked?


Of course they're being tracked through any and all means. The whole adtech biz is one big shady shitshow.

You can probably count the ethical, law-abiding adtech firms on one hand.


I work on mobile apps, and I can promise there's no tracking on the apps I've been building if you opt out. The SDKs I've seen seem to do what they promise: no communication to backend when tracking is disabled.

Of course, I can only speak for the apps I've been building and SDKs I've used.


Do you have to explicitly opt out to stop tracking? If so, are you sure your apps are GDPR compliant?


They are not my apps per se, that part is for the company lawyers. I'm just taking care of the technical implementation. It's always some way of opting out/opting in during the onboarding.


One-click opt-out should be mandatory. It's obviously unscrupulous to force you to go through several pages of text when I could've clicked one button. Most of the time I just use a special blocker script or reader mode.


One click opt-in, zero click opt-out would be better.


Under GDPR it essentially is mandatory, or at least opting out can't be more cumbersome than opting in, and most websites will want a one-button opt-in.

It's just not well enforced yet.


General rule: most people pick the default option if there is one. For example, organ donation [1]: Germany and Austria are culturally very similar, but in Germany 12% of citizens are organ donors and in Austria 99%. The reason: Germany is opt-in, Austria is opt-out.

The interesting thing about GDPR is it officially bans "opt out" tracking cookies - you need someone's consent, although lots of sites interpret that in a way which ... let's just say if they applied the same standards of consent to their private lives they'd very quickly find themselves at the center of the next #MeToo campaign.

GDPR does allow you to make the "Yes" and "No" buttons the same size, so "equal choice" rather than "opt in" - maybe that gets you better conversion rates?

[1] http://www.behaviouraldesign.com/2015/08/11/why-99-of-austri...


Now is a good time to remind users that an easy way to remove large cookie consent banners from view is to use uMatrix to block CSS and refresh your browser.

Gorhill’s browser extensions are a must when browsing the web these days.


The position of the "no" is what made the most difference. My SO constantly clicks "Ok" without thinking. Countless times she has told me something didn't work; I asked her for the error, she said there was none, I then asked her to reproduce the problem, and she mechanically clicked "Ok" on an error warning...

Is it really less disingenuous to place it to fit what HE wants versus what SOMEONE else wants? Both still abuse this mechanic... it's just the goal that we agree with.


"On the lifestyle site, three permissions were being asked for. Web statistics (Google Analytics), personalized advertising (Doubleclick) and social media sharing (Pinterest).

Only 1 person out of the 774 who opted into being tracked drilled down and made a more granular choice. That visitor said no to stats but said yes to advertising and social media."

This seems weird to me. Why would you block google from getting the statistics, and then give it to them anyways via doubleclick?


Relevant Study examining different types of banners and their impact on interaction: https://dl.acm.org/doi/10.1145/3319535.3354212 (Preprint: https://arxiv.org/pdf/1909.02638.pdf)


There are some sites (e.g. sites by Vox Media like TheVerge) that are outright illegal according to GDPR. There is only "Accept" and cookie information links that don't include any opt-out options. This is not just a dark pattern, but actually not having the settings on the site. Maybe I can email them not to track me. I wonder why aren't they fined a few hundred millions so that this kind of practice stops.

So my guess overall is that GDPR is not enforced at a larger scale and we are very far from enforcing the requirement to have "Accept"/"Decline" buttons equally usable.


complain complain complain. these threads exist so that people can gripe, together. it's tiring.

instead let's talk about solutions.

safari's cookie and localStorage policy is great and automatic. beyond that, firefox containers are good, albeit effectively limited to isolating a few "top sites" like FB. and then of course, UBO, ABP, ghostery and the like.

it's actually not that hard to take a few small steps (or just do the default things on MacOS) to stop this from affecting you, without impacting your (ahem) user journeys.

first-order fixes are easy. now let's get ahead of these assholes and work on fixing fingerprinting.

TFA is ironically quite interesting in that it itself is SEO content, aka an ad. targeting those that care about not being targeted. i, for one, have bookmarked it.


Wait. You tracked users after they opted out of tracking? How else did you get this data?


Counters are not PII.


It's not personal data, so it's not under the scope of GDPR.


Yeah, no. This is about the ePrivacy directive: if you don't have proper consent, you can't read/write tracers regardless of whether this is personal data or not, except for tracers needed to establish the communication or demanded by the user (carts, login, etc).

EDIT: Thought about it, and if you only record the button click and don't identify the user, it works, and I am wrong! In general ePrivacy is very restrictive, only about access to the terminal and not about personal data (and btw PII is not a GDPR thing, we say personal data), but here it's ok! So yeah, no to me!


There’s no tracer. Just a counter of how many said yes vs how many said no. There’s no personally identifiable information there
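A minimal sketch of such a tally (the design is assumed, not the author's actual code): the only stored state is two integers, with no IP, cookie, user agent, or identifier of any kind.

```javascript
// Anonymous consent tally: the server keeps two counters and nothing
// else, so it never processes personal data.
const tally = { yes: 0, no: 0 };

// Called once per banner click; the request carries only the choice.
function recordChoice(accepted) {
  if (accepted) tally.yes += 1;
  else tally.no += 1;
}

// Share of clicks that consented -- how a figure like the article's 9%
// would be computed.
function consentRate() {
  const total = tally.yes + tally.no;
  return total === 0 ? 0 : tally.yes / total;
}
```

As a sibling comment notes, a trade-off of this approach is that duplicate answers from the same person (e.g. after clearing cookies) can't be deduplicated, precisely because nothing identifies them.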


the downside of this method is that it is impossible to discard duplicated negative answers.


Functional cookies (e.g has displayed banner to this user) are fine, you don't need consent


GDPR defines what is PII and then regulates when companies may use PII. A page visit counter collects anonymous data. Anonymous data is not PII. You cannot tell I was their 345th visitor.

>> Yeah, no.

Exactly.


GDPR doesn't actually require consent to process personal data. It requires that the processing be lawful. Consent is one basis for lawfulness, but it is not the only one. There are 5 others.

One of these is that the processing is necessary for the performance of a task carried out in the public interest.

You could probably make a colorable argument that research for publication into the effectiveness of GDPR implementation approaches is in the public interest.


Not if you're a private company or an individual


That basis of lawfulness, from Article 6 section 1(e), is "processing is necessary for the performance of a task carried out in the public interest or in the exercise of official authority vested in the controller".

On the face of it there doesn't appear to be any restriction on who can use that basis.

The only recital I've found that mentions this is Recital 45. It says:

> It should also be for Union or Member State law to determine whether the controller performing a task carried out in the public interest or in the exercise of official authority should be a public authority or another natural or legal person governed by public law, or, where it is in the public interest to do so, including for health purposes such as public health and social protection and the management of health care services, by private law, such as a professional association.

That seems potentially pretty expansive.


You can track things anonymously in a way which complies with the GDPR without requiring consent.


There is a setting in every web browser that controls which sites can store cookies on your computer. They are a core part of the standard web specification along with JavaScript, <img> tags, etc.

If you need a reminder every single freaking time you visit a website, that's your problem not the government's. If you don't want them, turn them off in your browser, install a blocking/auto-wipe extension, use lynx, whatever you want.

Now, I completely agree that websites should clearly state what they are doing with data the user uploads, if it is end-to-end encrypted, etc. The user otherwise has no way of knowing, and it is material to assessing the accuracy of the often-BS "military-grade security"-type marketing and other claims like that. But there is no need for users to "consent" to using a public API of their web browser.


I'm wondering if I should implement the consent prompt myself or use some plugin. If I were to implement it myself, is it enough to give two options (Yes/No), or does the law require me to offer additional customisation options?

Edit: typos


A "I do not consent to any of this" option that you actually respect is perfectly compliant. All of these granular options are to "provide full control to allow users to customise the partners that they trust" (read: extra complexity to put off users exercising their right to not be tracked, plus the marginal improvement to telemetry from the 1% of users that will allow google analytics but not ad tracking).

That said, you do need to inform users of who specifically they're sending the data to (and what they're going to do with it) in the consent option. So "Yes, track me with all your unspecified partners" doesn't quite cut it for the yes option.


Yes you can implement it yourself – given how bad available cookie consent tools are, you're more likely to be compliant that way.

However, consent under the GDPR must be specific. That means the user should be able to consent to or withhold consent for individual processing purposes. Analytics would be one purpose, personalized ads another.

Note that under EU cookie laws you don't need consent for cookies that are strictly necessary for the service requested by the user. E.g. using a cookie for dark mode preference, for a shopping cart, or for the consent status itself is perfectly fine without consent.
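To make the "implement it yourself" route concrete: a minimal sketch of a self-built consent gate that stores the visitor's decision (which itself needs no consent, since it's strictly necessary) and only loads analytics after an explicit yes. The storage key name and the wiring below are made up for illustration, not any particular tool's API.

```javascript
// Hypothetical key under which the visitor's choice is stored,
// e.g. "granted" or "denied". Storing the decision itself is fine
// without consent, since it is strictly necessary.
const CONSENT_KEY = "site-consent";

function shouldLoadAnalytics(storedValue) {
  // Under the GDPR only an explicit, affirmative choice counts as
  // consent; no stored value (or anything else) must mean "no".
  return storedValue === "granted";
}

// Browser-side wiring (illustrative, so the sketch runs anywhere):
// if (shouldLoadAnalytics(localStorage.getItem(CONSENT_KEY))) {
//   const s = document.createElement("script");
//   s.src = "https://example.com/analytics.js"; // placeholder URL
//   document.head.appendChild(s);
// }
```

The key point is the default: the tracker script is never injected unless the stored value is an explicit grant.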


I for one have been enjoying The Verge and weather.com without any crap or slowness, just ignoring the consent banner at the bottom :P

For websites that do not allow me to navigate, or do not have a refuse button, I simply navigate away.


If given a real choice? Nobody, and I mean nobody, wants to be tracked in any way.


Sure I do. I opt into any kind of "voluntary user experience research" tracking that a lot of desktop apps offer. As long as a website does the same thing (and doesn't try to gather PII to tie it to my identity), I'm also open for it.


This is nonsense. I mind Google data hoarding as much as everyone else, but if a website or app uses a Mixpanel type tool to see which features people use, then I don't think that's immediately evil.


At this point I would accept some EU bureaucrat putting a surveillance camera in my bedroom if I never have to click another GDPR popup ever again.


When you're made aware of the tracking, why would anyone willingly choose it? It's a sad state of the internet when you're doing 90% of your Google searches in an incognito window.


The worst offenders are the ones that give you instructions on how to disable your ad/tracking blockers. If I wanted them disabled I wouldn't have them in the first place.


I'm actively working on a cookie banner blocker for iOS. Feedback, questions and suggestions appreciated!

https://213tec.com



This law (and the California one) should be amended so that it takes the same number of clicks to opt in as to opt out, and so that ignoring the banner is an opt out.


This is actually already the case with the GDPR, just very few websites practise it that way


I must say I'm really surprised. I'm a frontend developer and probably more keen to keep an eye on this stuff but there are lots of times where I just say yes because the popup is in-my-face and I just want to scroll to the content.

I guess where the article falls flat is where the author says a "proper GDPR consent banner" was implemented. No online publication will do this. At best they will trick you with button colors or some kind of double-negative mind trick. Sometimes they require you to untick every checkbox manually.

GDPR was a good idea from the start, but its implementation is rather weak: responsibility was shifted to each country without any penalty for lax enforcement, and now there are countries like mine (Portugal) with fewer than a hundred fines.


> but there are lots of times where I just say yes because the popup is in-my-face and I just want to scroll to the content.

I am exactly the same. I use uBlock Origin and Privacy Badger so pretty much nothing gets through anyway, but just to get rid of the banner, I click OK.

However, that being said, I only do it if the other choice is "Manage Preferences" or something equally vague: If I am given a clear yes-or-no choice, I always choose "No".


> I use uBlock Origin and Privacy Badger so pretty much nothing gets through anyway

And yet the cookie is still there and can be used to track you. They don't need to serve you ads to track you; a simple check for the presence of the cookie is enough.


I sort of think this kind of behavior is problematic. I also do it often, but, for example, GitLab offers several levels of tracking: Necessary, Functional, Performance, and Personalization, with the first three preselected. There is also a "Show details" option for more information.

I see no reason to turn off Performance and Functional in most cases.


There's a filter list for uBlock Origin that removes most of those popups. Can't remember its name off the top of my head though.


Everywhere has poor implementation. The UK's ICO gave companies an entire year of grace because it wasn't ready itself, and has done very little in the way of fines since. It is, as always, underfunded and simply doesn't have the resources to enforce the law, so for now it's a law with no teeth across the EU due to lack of enforcement. Companies have gotten away with wrong defaults and dark patterns for acceptance for years at this point, setting a clear precedent for how this will work going forward.


I try to make sure to always decline. If there's no easy way for me to do that I leave the site.


> At least they will trick you with button colors

I agree with the broader point, but principles of a nice user interface also apply here.

Unless the website tricks you into clicking a button that you did not intend to click (like with your double-negative example), it is not a trick. If you do not read and just click the most colorful rectangle to make the pop-up go away, that is the user's problem, even under the GDPR.


I usually reopen the site in private / incognito and "agree all". I don't trust these sites to abide by the modal agreement anyway.


I've noticed on most sites even if you click deny you still get tracking cookies. Enforcement is nonexistent.


This is one of the reasons server-side tracking becomes more and more important. Even if users agree, tools like uBlock block client-side scripts like Analytics. I built a library for Go [1] to solve this. Check it out!

[1] https://marvinblum.de/blog/server-side-tracking-without-cook...


20 years ago we had popups, popunders, etc.; nowadays we have this crap. I don't care about GDPR, I don't care about cookies, I just want the content. That's why I use a browser extension called "I don't care about cookies", which removes these things from websites.

Consenting to GDPR/cookies should be some kind of web standard, built into the browser, so the user can accept once, ignore it, or whatever. Seeing this on every website drives (drove) me nuts.


This research is interesting because it's highly relevant given Apple's upcoming changes to tracking consent in iOS 14. Unlike the DNT[0] header, Apple is in a position to enforce apps actually respecting the user's consent preference, either by technical means or by kicking offending apps off the App Store. The walled garden has many problems, but this is one of its benefits. Given that almost all apps and websites have implemented GDPR consent management in a supremely user-hostile way[1] that is far from an equal binary choice, I suspect we'll see much higher opt-out rates in iOS 14. I've seen people argue that users will just continue to click accept at high rates, as they do with current consent management solutions, but I think this is the wrong analysis: users click accept precisely because the amount of effort required to opt out is unreasonable. When both options require equal effort, the number of users clicking accept will plummet.

0: https://en.wikipedia.org/wiki/Do_not_track_header

1: https://twitter.com/K0nserv/status/1279361112627167234


Small changes in UI can have a significant effect on acceptance rate. I'm developing an open-source consent tool (Klaro - https://klaro.kiprotect.com, GitHub: https://github.com/kiprotect/klaro), and where to put the different buttons, which colors to give them, and how easy to make opt-out is a large debate in our community.

By default we favor a very user-friendly approach, but we give our users (i.e. the website owners) different ways to ask for consent: a mandatory modal, a consent flow that accepts all cookies/apps (or customize), and a consent flow that accepts only a pre-selection of cookies/apps by default. In every flow there's a "Decline" button that visitors can use, so declining consent is just as easy as giving it.

Most website publishers prefer the "Accept all" flow, as they usually have a good reason for including a given app/tracker on their website, so only choosing a subset doesn't make much sense. A few sites also implement the mandatory flow, where the visitor won't be able to see the website content until they have chosen to either give or decline consent (and again, both options are equally easy to reach).

From a GDPR and ePrivacy perspective it's clear that opting out needs to be just as easy as opting in. Most websites violate this principle as opting in is in fact way easier, and often the UI is designed to be deliberately confusing to the user.

IMHO the consent problem is one of the central unsolved issues in privacy, though, as most people would not opt in to tracking if they were given a real choice.


I now see a lot of websites complying with the "opt-out as easy as opt-in" part by giving you checkboxes for configuring your cookies, with a faint "save choice" button that effectively opts you out, and a really obvious, well-contrasted "accept all" button. I think that ticks all the boxes of the law while still being an obvious dark pattern designed to trick the user into opting in.


Of course they don't; and even if they do, their browser might still block the cookies or discard them frequently.

Tracking effectively is getting harder both technically and legally; and that's a good thing long term but leads to chaotic and desperate behavior short term.

Long term, there are three ways to adapt:

- Drive users to apps instead of browsers. E.g. Google and Apple serve most of their news via apps, where they control the ad experience, tracking, and user sign-in. There are no anonymous users there. GDPR still applies of course, but practically speaking users only have the choice of whether to use the app or not, and none of the legalese specific to browser-based things like cookies applies.

- Tap into other sources of revenue (subscriptions, sponsorships, donations, etc.). Ad revenues have been declining for lots of news sites in any case, so this is something they need to do anyway.

- Switch to non-personalized advertising, which can still be lucrative if you have access to large numbers of users. E.g. most big brands still advertise this way, and there is still lots of money floating around here. No cookies required.


I enjoy the GDPR if only because it forces websites, even big ones like Google or Reddit, to show me that they value tracking me over everything else, even if it means ruining the user experience and using every dark pattern in the book.

Those websites that are usually all about streamlining and reducing friction to a maximum suddenly don't hesitate to trick me into a maze of slow-loading menus with weird conventions and a purposefully broken and confusing UI in an obvious attempt to trick me into opting in by mistake, even though the mere fact that I clicked the "more options" button means that I'm almost certainly looking to opt out.

These are tactics that I expect to encounter on shady websites, not some of the biggest websites in the world.

Advertising is a cancer that offers next to no added value in our hyper-connected society. Tricking people into seeing ads to trick them into buying stuff they don't need has become the foundation of the web economy. An utter travesty that puts our industry to shame.


So we have an entire industry built upon a coercion? Gonna be a long fight.


In my opinion, GDPR, if properly enforced, is going to be a disaster for many users.

Not everyone can pay for the content they consume. Some people are poor, under 18, live in a country where Visa/Mastercard isn't widely supported or can't pay for some other reason. Internet has made the lives of those people much, much better. They definitely prefer being tracked over having to pay for Facebook, Snapchat, Youtube and all the other sites they use. GDPR forces providers to provide their services to customers who opt out of tracking, and most of them will. That means switching to a payment-only model is going to become the only viable option, hurting a large part of the population.

The "just force them to pay" attitude that I often see here is extremely elitist. For someone making six figures, being tracked matters. For someone barely scraping by or without a credit card, that's an acceptable price to pay for all the goodies they get.


Still, many websites make it hard for their users to opt out of tracking.


Or, they set their cookies first and then ask.

Or, they don't ask anything, and just set their cookies. (case in point: each and every status page by Atlassian Statuspage)


Before I found out about uBlock Origin's filter lists, I always wanted to opt out of these GDPR cookie banners, modals, etc., but I got a lot of fatigue from having to follow all the steps for each website.

I tried to make an open source extension to try and do it for me: https://github.com/pyepye/GDPR-opt-out

Although I hit a snag where some GDPR banners / buttons weren't found when using `querySelector` in the extension, but are found when you run the same query manually in the inspector / console.

Does anyone know why, and how to work around it? It was always something I wanted to dig into but didn't find the time for (and it wasn't important while using uBlock Origin).
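Two common causes (hedged guesses, since it depends on the site): the banner is injected after your content script runs, or it lives inside a shadow root or an iframe, neither of which `document.querySelector` crosses. A sketch of a deep query that descends into open shadow roots; the node interface used here mirrors the DOM, so it also runs against plain mock objects outside a browser:

```javascript
// Recursively search an element tree, descending into open shadow
// roots, which document.querySelector never enters. (Cross-origin
// iframes are a separate case: an extension needs a content script
// registered with all_frames to reach those.)
function deepQuerySelector(root, selector) {
  if (root.matches && root.matches(selector)) return root;
  const children = [...(root.children || [])];
  if (root.shadowRoot) children.push(...root.shadowRoot.children);
  for (const child of children) {
    const found = deepQuerySelector(child, selector);
    if (found) return found;
  }
  return null;
}

// For late-injected banners, watch for DOM changes (browser-only,
// so it's left commented in this runnable sketch):
// new MutationObserver(() => {
//   const btn = deepQuerySelector(document.body, ".cookie-decline");
//   if (btn) btn.click();
// }).observe(document.body, { childList: true, subtree: true });
```

The console "works" because by the time you type the query by hand, the banner has long since been injected, and the inspector lets you evaluate inside the banner's own frame.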


Regarding "engagement rate": is your banner filtered by ad blockers? In my session, there's no banner displayed. Could also be a hint on why mobile browsers are more likely to enable engagement, since they might lack CSS filtering techniques (used to hide the banner).

Other point: third party hosters. It's good to see you as the website creator put effort into GDPR compliant behaviour! Did you also include Netlify and your GDPR-provider into the evaluation? Do they use additional tracking technologies?

btw, your post was copied to https://www.facebook.com/BloggersWorldToday/posts/6238368718... fyi


yet another EU law with tons of loopholes.


It’s like ads are something nobody wants.


> But writing is on the wall. If your business model requires user consent, chances are that your business will suffer if and when GDPR gets enforced. The implication of users not giving the required consent is that the ad-tech industry might collapse.

Is the writing on the wall? GDPR came into enforcement over 2 years ago, and I'm not aware of improvements having been made with respect to clarity, because I'm not aware of any punitive measures actually having been taken. Would love to hear comments to the contrary.


The GDPR is designed to be explicitly opt-in. Given compliant consent dialogs, of course very few opt in. Hopefully few enough that keeping the tracking infrastructure just for that minority isn’t worth it.

The GDPR is and should be effectively a ban on tracking ads once sites actually comply (or, in many cases - leave the EU market or go under instead).

Whether the alternative is a good solution for paying for content or if it’s the end of the majority of content online isn’t really interesting as both outcomes are better than the status quo.


We value your privacy. Accept.


This looks like an ePrivacy (cookie) consent dialog, and not a GDPR dialog? https://markosaric.com/wp-content/uploads/gdpr-banner-on-mob... It's asking for consent to store cookies "to power statistics".


GDPR consent should be mandated, implemented, and enforced at the browser level. Not a different popup for every website, but some standard way for the website to declare its rules (some JSON, say) and have them shown in a browser-native UI.

And once set, the browser should pass the user decisions to the website, and enforce those that can be enforced locally (at the browser level).
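To make the idea concrete, here is an entirely hypothetical sketch (no such standard exists today; DNT and similar headers only carry a single opt-out bit): a site ships a machine-readable declaration of its processing purposes, and the browser intersects it with the user's stored preferences to tell the site which purposes are permitted. All names and fields below are invented.

```javascript
// Hypothetical declaration a site might publish alongside its pages.
const siteDeclaration = {
  purposes: [
    { id: "analytics", vendor: "example-analytics.com", required: false },
    { id: "ads", vendor: "example-ads.net", required: false },
    // "required" marks strictly necessary purposes, which need no consent.
    { id: "session", vendor: "first-party", required: true }
  ]
};

// The browser-side check: a purpose is permitted only if it is strictly
// necessary or the user has explicitly opted in to it.
function permittedPurposes(declaration, userPrefs) {
  return declaration.purposes
    .filter(p => p.required || userPrefs[p.id] === true)
    .map(p => p.id);
}
```

The browser could then send the permitted list to the site and locally block storage for everything else, enforcing the decision rather than trusting the site.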


Is this the end of tracking in the ads industry? Hopefully so.


The amazing thing is that the vast majority of websites don't even need to display a warning or anything. The GDPR doesn't apply to vast portions of the internet.


Why would one need GDPR consent for a blog?

A Privacy Policy should be enough for server logs (without PII). It would be nice to have a standard Privacy Policy though (like we have the MIT and BSD licenses).


I run a blog. It has Google Analytics. I could probably host my own analytics solution, but that's not easy. I'll get to it eventually, but content benefits my users more.

I need analytics because this blog pays the bills. I need to see what works and what doesn't. When building partnerships, I'm usually expected to share some numbers with them. It also lets me spot issues with the website.


Thank you, I understand convenience for author / inconvenience for reader.

I do not use Google Analytics myself, but it looks like it is possible to disable cookies [1], anonymize IPs, and disable data sharing with Google [2], effectively making it almost equivalent to third-party server-log analytics (no consent required). Would the remaining functionality be sufficient for you?

[1] https://law.stackexchange.com/questions/36105/can-usage-of-g...

[2] https://law.stackexchange.com/questions/35528/is-it-really-p...
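For reference, the settings the linked answers discuss map to documented analytics.js fields: `storage: 'none'` (no client-side cookie), `anonymizeIp` (truncate visitor IPs), and `allowAdFeatures: false` (no advertising/DoubleClick beacons). A sketch; the `ga()` stub below just records calls so it runs outside a browser, where the official analytics.js snippet would define `ga()` instead:

```javascript
// Stub for illustration: records calls instead of sending hits.
const calls = [];
function ga(...args) { calls.push(args); }

ga("create", "UA-XXXXX-Y", { storage: "none" }); // no client-side cookie
ga("set", "anonymizeIp", true);                  // truncate visitor IPs
ga("set", "allowAdFeatures", false);             // no ad-feature beacons
ga("send", "pageview");
```

Whether this configuration removes the consent requirement entirely is exactly what the linked law.stackexchange threads debate, so treat it as a harm-reduction setup rather than settled compliance advice.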


Yes, that would be far more than I need. That's the problem, actually. I could track a fraction of that and be really happy.

I tried Plausible today, but their event handling is completely insufficient for my needs. It's basically a key-based counter. The keys have to be created in advance, and can't be categorised. This means I couldn't track outbound clicks without first creating a new event for every known URL on my website. It just doesn't work.

It's a shame, because I loved every other aspect of Plausible.


Assume for a moment that the test website didn't give visitors a specific reason to trust it more or less than any other website operator. That means that if your website gets significantly more than those 9% consenting, you're either perceived as very trustworthy, or your GDPR banner is confusing (or, less charitably, deceiving) users.

It would be interesting to see what the consent rates for big offenders like TechCrunch, newspapers, etc. are. IIRC there were specific studies about which brands are trusted by consumers, and Comcast et al. didn't fare all that well in those.


I tried TechCrunch without an ad blocker, and I would have had to click out 16 "partners" after clicking "read more" two times. There are like hundreds of "IAB Partners", of which an unspecified number need a separate opt-out on their own websites. It is not clear whether "select all" opts out of or into the "IAB Partners".

So I guess exactly zero visitors opt out there in the intended way. So 100% "opt-in".

I tried to opt-out from all to see how many cookies would be set anyway but gave up after 5 minutes.


Yes, I've also seen the GDPR notices where there is no "select all" function and you have to manually disable hundreds of separate options.


This is why the GDPR is a bad law. Of course nobody wants to be tracked. If you believe that the harms of tracking outweigh the benefits of increased revenue, then just ban tracking altogether. Instead, the GDPR incentivizes sites to come up with these dark patterns so they can claim that users "voluntarily" "consented", when everyone knows that's not the case.


How can we report websites that use these blatant dark patterns? I'm sure they must be violating the GDPR.


How cynical: an article further down (Two young scientists built a $250M business using yeast to clean up wastewater) links to Forbes.com, which indeed allows me to set my tracking preferences. However, anything other than 'accept' results in a message stating 'we are processing the request <snip> this may take up to a few minutes'. So they can set hundreds of cookies in a second or so (yeah, that is right, I have seen pages tell me they wanted to set 500+ cookies, some of them lasting 20 years), but it takes minutes of me waiting for the page to show while they 'process my preferences'?!?


This is like the companies that mysteriously lack the ability to unsubscribe you from their mailing lists in less than 30 days, even though plenty of us manage to run systems that normally do it in real time.


There is an interesting Twitter thread about how this process worked in a UK bank, although their turnaround was about 4 days

https://mobile.twitter.com/Joe8Bit/status/115631296526570701...


All that stuff is gone if you block javascript on forbes.com


I’m rather shocked they allow viewing content without it.


Probably a bug.


More likely they know most users won't do it, and it probably aids in some SEO or something.


GDPR opt-in questions train people to accept forms without reading or thinking. I think this will be dangerous.

And I'm really irritated that I can't access local news sites in America because they aren't compliant -- so they blanket ban European access. That's deeply problematic.


>And I'm really irritated that I can't access local news sites in America because they aren't compliant -- so they blanket ban European access. That's deeply problematic.

The usual answer I get when I complain about this is either "It doesn't happen to me" or "They wanted to steal your data. You're better off without being able to use their site."

I've yet to be convinced by either argument as a European. It's almost like a knee-jerk reaction by some Europeans that if Americans do something we don't like then they're automatically in the wrong.


I personally see it as business as usual on the internet. Every country has the right to regulate its internal markets, and every company needs to decide how many markets to cover.

Personally, I like GDPR more than no GDPR, and think it would be nice if the US had a GDPR-compatible regulation (as in: the EU and the US diplomatically agree that it is enough for a US company to respect the US regulation, and for a European company to respect the GDPR, to be compliant under both). On the other hand, as the law stands, I appreciate that they take it seriously, even if not in the way I would like most.

Overall, I do not think that losing access to local US news sites warrants remaining in a GDPR-less world.


But it's not just local US news sites. Plenty of other businesses won't do business with Europeans as a result. Even video games have shut down over this (Ragnarok Online, I believe). But the real question is the signal GDPR sends: how many online businesses will simply not get created (especially in Europe) because of GDPR? The nebulousness of the situation is why I've already decided not to create at least one website. I'm sure the lack of my website is no loss for the world, but what about others?

And keep in mind that none of this will actually save your privacy. A Chinese service could steal all of your data, sell it, and nothing would happen because they're outside the jurisdiction of the EU. The only thing that will save your privacy is not giving out the information in the first place.


>> how many online businesses will simply not get created

How many online businesses [that aim to quietly collect PII for their own monetary gain] will simply not get created?

Hopefully none of those actors will get created.


> A Chinese service could steal all of your data, sell it, and nothing would happen because they're outside the jurisdiction of the EU.

But now it is illegal, the thing that might happen is that the EU bans Chinese apps like India did.

> Plenty of other businesses won't do business with Europeans as a result.

I believe you, this is something to consider when making stricter regulations. The same happens with food imports regulations and many other fields.

Some of the businesses that will not be created are businesses that should never have been created; many others will be legitimate businesses that simply consider the risk of the nebulous enforcement too high.

One example early on was that it was unclear whether industrial sensors are accidentally in violation of the GDPR, because the resolution needed for industrial applications is enough to distinguish individual humans.

My uninformed opinion is that these are solvable problems and personally I blame decades of recklessness in this field that made this necessary.


If a U.S. site blocks you from accessing it, it's the fault of the U.S. site, nobody else.

Hacker News is a U.S. site, and it works fine in Europe.


Agreed. I'd guess for ~90% of web users, GDPR popups have lost all meaning. The thought process is not about opting in or out, it's more "how do I close this popup as quickly as possible?".


I am not convinced that GDPR will ever really be enforced. Severe breaches have mostly been ignored so far, and the minor dark patterns that dominate GDPR compliance popups, along with wrong defaults, are going to be the tail end of compliance enforcement. Just like the cookie laws before it, GDPR will probably just make websites annoying with obnoxious implementations and remain largely unenforced, until the EU does something else to try to fix this class of data problems and the cycle continues.


One really incredibly annoying trick that I see more and more is when they pre-check only the "mandatory cookies" option, but then when you want to confirm this selection you end up allowing all tracking cookies. It's because they make the confirmation button less prominent, and something that looks more like the typical confirm button is actually "allow all cookies". I guess a lot of people just click on it automatically.

edit: sorry, replied to the wrong post


There are a lot of dark patterns. The most common ones I am seeing (and it's basically everywhere) are:

1) "Accept" being brightly coloured and "decline" white, so it's less prominent.

2) Having "accept all" be a single click, but hiding decline behind a "more information" screen that requires turning lots of individual things off.

3) Requiring the decline to be made individually across hundreds of separate cookies.

4) Clicking "accept all" being stored and used forever, but declining being asked again every time you come back to the website.

5) Having the decline process take minutes to complete, as if significant processing were required.

6) Having the default be acceptance.

I think breaches of GDPR are the norm; 95% of the websites I see these popups on are breaking the law in some way or another, and at this point have been doing so for years.


It won't be enforced as long as the incentives of those with the power to enforce are not aligned with the users'. Probably never.


And a large proportion of that 9% is tricked with misleading labels on buttons.


There are three buttons with very clear functions in the popup the author used.


In general.



