As someone who digs into this data for a living, I personally strongly believe this should be on by default for all browsing. The example they give is an insanely bad design on healthcare.gov's part, and I would absolutely not want to find that type of data in my analytics telemetry.
It is very useful to know where your traffic is coming from, but that’s usually viewed at a higher level than the querystring params being shown. In some cases, this may restrict you from knowing which article the person was reading on the given site before clicking through to yours, but if that’s so important, there are other ways to instrument source tracking.
Yes it's great from a marketer's point of view. From a consumer's point of view, I don't want you to know where I came from. I don't really care if it's useful to you. It's the difference between pleasing marketers vs pleasing users. Firefox chose wisely
We overthink this when most tracking on the internet is based simply on your IP and occasionally a fingerprint of the browser (including any identification done by said browser). Since IPv4's inception that's all that was required, and if you ask me it has been an extremely robust way to be tracked.
If you don't want to be tracked, you need a random/different IP and a browser fingerprint that blends in with the crowd.
Don’t give up so easily. Anything can be changed, provided enough people want to change it. Most undesirable practices only happen because they operate in the shadow of general ignorance and/or oblivion to their existence.
As generations that understand this tech implicitly come to power, things will change. To someone who does everything digitally, the meaning of these words will be quite different from the currently accepted interpretation:
“The right of the people to be secure in their persons, houses, papers, and effects, against unreasonable searches and seizures, shall not be violated, and no warrants shall issue, but upon probable cause, supported by oath or affirmation, and particularly describing the place to be searched, and the persons or things to be seized.”
That strongly depends on the routing design of the web application. I could easily design a bad URL path that discloses information that should be kept private...
Don't worry, the cookie warnings (actually third-party tracking warnings) will soon (in May) stop being useless.
From May on, sites have to offer a yes/no choice; they may only track you if you choose yes, and if you choose no they may not ask again, nor may they refuse access.
Which is what was originally intended all this time, and which all good websites offered anyway.
I'm very curious about this. How will you keep track of a user who has opted out without using a cookie? That's my understanding of the issue: websites keep asking not because they think they can browbeat you into accepting cookies, but because they can't tell the difference between a user who has opted out of cookies and a first-time user.
There was never a blanket prohibition against cookies, that was a misconception (pushed purposefully by some, I suspect). Cookies which are "strictly necessary" in order to provide the service don't need to be approved. A cookie to remember that you don't want unnecessary cookies seems to fall well under that exception.
If I were drafting the regulations (fortunately I have neither the legal nor technical expertise), I'd make it legal to store without permission a cookie containing only a specific string defined by the regulation, e.g. “optout” or “0”, meaning “this user has seen the question and opted out; don't bother them again”. The regulations would have to ensure that the cookie's timestamps are suitably generic to prevent fingerprinting — perhaps you can store a date but not a time.
Would something like that work?
(Edit: Ah, apparently this sort of thing is already allowed, according to comments I didn't read before writing this. Sensible legislators.)
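A minimal sketch of what such a remember-the-refusal cookie could look like, assuming the "strictly necessary" exception applies; the name "consent" and value "optout" are illustrative, not anything defined by an actual regulation:

    // Hypothetical sketch: record the refusal with a single fixed, non-identifying value.
    // A coarse expiry (whole days) avoids encoding anything user-specific in the timestamp.
    function rememberOptOut(): void {
      const oneYearInSeconds = 60 * 60 * 24 * 365;
      document.cookie = `consent=optout; Max-Age=${oneYearInSeconds}; Path=/; SameSite=Lax`;
    }

    function hasOptedOut(): boolean {
      return document.cookie.split("; ").includes("consent=optout");
    }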
They could use local storage? As local storage isn't sent to the server, the server will not know whether the user opted out. (User opt in will be a cookie still.)
Alternatively, the legal definition of tracking cookie can differ from the technical definition of cookie. It wouldn't be the first time...!
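For what it's worth, a tiny sketch of the localStorage idea (the key name is made up); the flag never leaves the browser, which is both its appeal and its limitation here:

    // The opt-out flag is stored client-side only and is never sent with requests,
    // so the server cannot distinguish "opted out" from "first visit" on its own.
    localStorage.setItem("tracking-opt-out", "1");
    const optedOut = localStorage.getItem("tracking-opt-out") === "1";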
I won't argue about these warnings' uselessness, but they definitely are a pain in the ass because they require enabling JS, at least temporarily, on sites that would otherwise work perfectly without it, just to be able to answer them.
How stupid is that? Privacy at the expense of security?
You misunderstood me. It may not be strictly required for sites to use JS to build these warnings. But in practice, today, that's how it works on sites that display them. And it definitely requires the user to have JS enabled to pass these obnoxious pop-ups.
JS is also used for things that are doable with CSS or even HTML only. This isn't the first time.
My tin-foil-hat persona suspects it's yet another way to force me to disable my various script and ad blockers under the guise of protecting users' privacy. But my more realistic side assumes this is just laziness.
(Tangent, but I’ve never found this particular example compelling. All that’s in my underwear drawer is underwear... and I don’t find that particularly embarrassing? I think I’m missing some cultural context for it)
In the US, at least, the underwear drawer is a popular place to hide other things ... e.g., your illegal drug stash, or the engagement ring you bought, etc. I think there are also subtextual implications of nosing about in your sex life.
It's a mild, nonspecific stand-in for whatever it is that you really don't want people knowing. Whether that's your weird fetish or your embarrassing hobby or the fact that you worked as a pianist in a brothel or whatever.
That domain belongs to Tealium IQ, an alternative to Google Tag Manager.
Tag managers allow you to encapsulate third party tags (and really, any JavaScript you want) and have them dynamically injected without having to hard code anything else on the page.
Since that was blocked, you never even saw the other tags that would have attempted to run.
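Roughly, a tag manager container is one hard-coded loader script that injects whatever tags are configured remotely; a sketch of the idea (the URLs are placeholders, not Tealium's or GTM's actual endpoints):

    // If this loader itself is blocked, none of the configured third-party tags ever load,
    // which is why the other tags never even showed up in the request log.
    const configuredTags = [
      "https://analytics.example/tag.js",
      "https://ads.example/pixel.js",
    ];

    for (const src of configuredTags) {
      const script = document.createElement("script");
      script.async = true;
      script.src = src;
      document.head.appendChild(script);
    }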
On a side note, I recently learnt that ESR is against Obamacare. Surprising; I can understand someone young not liking having to pay for mandatory insurance, but he's old.
As a user I don't want you to know that information. I wish there was a way to click on links with the referer field removed entirely from the request (maybe via a context menu).
Yes, it strips it for any URL.
For only third parties, I believe 'network.http.referer.XOriginPolicy' is the relevant flag (haven't tried it out myself).
I use RefControl in Firefox. I've set the default to be the root of the destination site. That is, for example, clicking a link to https://foo.com sends https://foo.com as the referrer.
As someone that doesn't dig into this data for a living, why would you not want this on non-private sessions (first question I had when reading this)? I'm curious why Mozilla wouldn't want to protect this data leaking in even a normal session.
tl;dr - strict-origin-when-cross-origin was one of the protections with the lowest amount of breakage. Entering Private Browsing is a clear, strong signal that the user wants more privacy, so we started by implementing this protection in Private Browsing.
However, note that some advertisers demand that AdTech vendors must not serve their ads on certain kinds of pages. (e.g., https://support.google.com/adsense/answer/1348688?hl=en&topi...) Many of those agreements require full referrers to be able to audit the ad inventory.
So there are some concerns and trade-offs to make in this space.
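For anyone wondering what strict-origin-when-cross-origin actually sends, here's a simplified model of the policy; this is my own sketch based on the Referrer Policy spec, not Firefox's code, and the URLs are made up:

    // Same-origin navigation keeps the full URL (minus fragment); cross-origin requests
    // get the origin only; HTTPS -> HTTP downgrades send no referrer at all.
    function referrerFor(fromUrl: string, toUrl: string): string | null {
      const from = new URL(fromUrl);
      const to = new URL(toUrl);
      if (from.protocol === "https:" && to.protocol === "http:") {
        return null;
      }
      if (from.origin === to.origin) {
        return from.origin + from.pathname + from.search;
      }
      return from.origin + "/";
    }

    // referrerFor("https://example.gov/see-plans?zip=90210", "https://adserver.example/ad")
    //   -> "https://example.gov/"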
In all seriousness: why should we care what ad companies want? Mozilla makes a browser, not an ad platform. Advertisers may have to adapt, but that's their problem.
Many of the ad serving SDKs slurp up all this data anyways and send it down using a request body, query string, or something that isn't the "Referer" HTTP header. I'm not sure how ad verification providers get this data, but if anything it's just going to mean less information leaking out from systems like DFP to the smaller ad tech firms. But firms like Google will still be able to figure out what path you were on.
Why don't websites just remove ad code from pages they don't want ads to appear on, instead of telling the ad server to not display it on such pages by using the referrer information? Seems like a convoluted way to solve the problem.
No, the advertiser doesn’t want their ad to appear on a certain publisher’s web page (e.g. banks often can get in trouble if their ads are associated with certain kinds of content).
Ah ok. But why then doesn't the ad's embed code just contain the URL of the page it's on in the query string / POST data? The server of the website that shows the ad knows what page it is providing.
I suspect it does? But that's spoofable by the ad aggregator in a way that the Referer header isn't. So much of ad design is (for better or occasionally for worse) defending against bad actors.
Thank you for your efforts. Features such as this are appreciated by, I'm sure, many of your users. It's a shame that the slating Mozilla has had recently hasn't been met by praise for work like this.
It depends on who “you” are in this case. If that “you” is a company that solely makes money by selling various behavioral data and has little care for various jurisdictions (which they really should care about, especially with GDPR, let alone “doing the right thing”), it might very well be useful to collect extra signals.
However, if “you” is a company that is in some other business, but collects analytics for the purposes of optimizing its product and figuring out what actually works for their users, then you don't even want to run the risk of ingesting something private. Especially with exposure to different jurisdictions.
Think of it as the difference between a double-opt-in email list where you are very sure people want to receive the communications and an unsolicited spam list. If the user volunteers this data and it has some relevant business purpose, that’s great. However, if the user doesn’t know I have this info and wouldn’t want me to have it, then acting on that data could create a lot of negative emotions that as a company I wouldn’t want.
My speculation is that this is a compromise in some sense. The change here aligns with what private session is supposed to be.
Private browsing will (1) start out with no sessions (none carried over from normal browsing mode), (2) provide automatic tracking protection, and (3) clear sessions (cookies and history) on exit, leaving no trace. Private browsing is great for a number of use cases, such as debugging/testing a web app with a fresh session, visiting NSFW sites, or hiding the fact that you watch cat videos on your puppy-loving friend's computer.
So it makes sense to trim the referrer automatically in private sessions. As for normal browsing, you can disable the referrer yourself. See [1].
I would like to see more privacy and security options exposed in the preferences UI. about:config is "okay", but it feels like the Windows registry: not good for user experience, even by developer standards, when one just wants to toggle privacy and security settings on/off.
> why would you not want this on non-private sessions
Information leaks. You click a link from your email to a news article. The URL for the news article has your email in it. Then you click an ad on the news article. The ad just got your email address.
Worse still - instead of an email address, it's a token that auto signs you in to your account with the news site.
Sites are pretty bad at sanitizing their outgoing referers.
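As a page-side mitigation sketch: the standard per-element referrerPolicy attribute lets a site trim its own outgoing referrers even before the browser steps in (the selector and URLs below are illustrative):

    // Without something like this (or a site-wide Referrer-Policy), a landing URL such as
    //   https://news.example/read?autologin=TOKEN
    // goes out verbatim in the Referer header of third-party requests from the page.
    for (const link of document.querySelectorAll<HTMLAnchorElement>("a[href^='http']")) {
      link.referrerPolicy = "strict-origin-when-cross-origin";
    }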
I haven't used anything but webmail (Fastmail) since 2002. I do my banking and email in one browser and my general browsing in a very locked-down Firefox (many about:config hacks, uBlock Origin, Decentraleyes, Webmail Ad Block, Privacy Badger, and No Coin) that blocks about everything save the actual content. I also run a Pi-hole. Keeping your actions compartmentalised keeps your privacy better controlled. Nothing is perfect, but keeping things separated is generally a better idea than not doing so.
If you're not aware of it, you might want to look into Qubes[0], it's an OS that uses virtualisation tech to compartmentalise applications from each other (and the host OS). So you can use your browser of choice for all uses, just in different VMs configured to retain/discard data as you want.
To add to everyone else: got to start somewhere. Private browsing seems like a good place, and I've just taken advantage of the plumbing it introduces to set strict origin referer in non-private browsing too.
Hopefully, even though groovecoder doesn't mention the possibility, if this works well then we'll see it rolling into regular browsing too, or at least having some UI exposed, in the same way that tracking protection moved from being private browsing only to having an option to enable it in regular browsing.
Actually, now I think of it, I wonder if the two could sit behind the same preference in the end?
I can't recall ever encountering a referer-based authorization scheme that included the path or querystring in its logic--although it seems likely some goofus has implemented it, somewhere.
I actually just finished implementing a very simple JWT single sign-on feature. We don't use the referer for the authentication/authorization. However, having the path/query in the referer does allow our partners to know what (if any) post-login redirect path/query to include in the JWT to provide end users a seamless re-authentication experience.
We could, of course, build the same functionality into our implementation without relying on having the path/query in the referer.
I have seen sites where the "Download this foobar" link only works when the referrer is the "Details of this foobar" page, where the download link is placed. If you sent a referrer of some other page on the site, e.g. the "list of all foobars" page, downloads would fail.
It's less common now that web developers have prioritized sane urls, but a lot of sites still have the actual page as a query param like /site.php?page=foo instead of just /foo. Stripping query params by default would render referrers useless on those sites.
This is horrible and I believe the user should be informed.
Personal data in third-party referrals is only one of many side effects. If personal data is available in the URL as a query string, chances (very high chances) are that the same data is perfectly visible in the clear in the web server logs, and from there only God knows where it spreads, including to all the third-party services used on the backend side.
> As someone who digs into this data for a living, I personally strongly believe this should be on by default for all browsing.
Yeah, but it may surprise you to know that your opinion isn't shared by the people who own the computers in question, or the private data you're digging in.
Your personal strong belief is, in this case, utterly irrelevant.
I believe they were saying despite being in that business, they -still- think there should be much increased privacy, like the features discussed in OP more widespread.
They don't seem to have said so clearly enough. I think some people just read the first half of the sentence and are ready to be mad!
> To help prevent third party data leakage while browsing privately, Firefox Private Browsing Mode will remove path information from referrers sent to third parties starting in Firefox 59.
Emphasis mine.
It only does it in private mode. I experimented with the referrer options mentioned in the article with mixed success. Not sending the referer header breaks some sites, often in non-obvious ways.
Well, I would probably be fine if they removed the referrer completely (when it has a different domain than the current site; otherwise the site might notice the tampering).
After all the referrer is useful for the site owner, not for the browser user.
Some websites rely (at least in part for verification) on referrer for the authentication process, I can imagine that causing some headaches for people.
I get what you're saying and I agree, but it is a bit more nuanced. Say you want to read a certain news article, and if you get referred to it by Google you can read it, otherwise not. Now, let's say you've been finding news articles to read that way via Google for the past years. Suddenly, with Firefox 59, this no longer works. Without even informing the user. That is not user-friendly.
Sites should be penalized for giving preference to one kind of link over another. Why should Google have even more of an upper hand than they already have?
To me this is a super good illustration of why we probably should get rid of the 'Referer' (sic) header altogether.
Agreed, but the short-term loser of this change is the non-technical customer whose workflow is suddenly broken, and only in Firefox, and only in version 59.
Apparently only in Private Mode though which makes my use case probably less common.
Thing is, do you remember the uproar about addons being broken in Firefox 57? It was for good (security) reasons but many non-technical users don't understand or respect the pros and cons. All they see is their workflow being broken.
Even I as a technical user saw my workflow broken and to this day it hasn't been resolved. So point taken. Also: I'm not convinced that the FF57 reasons were as good as stated, it seemed mostly a case of 'we can therefore we do and here are some reasons to make you feel good'. A backwards compatibility switch to override the change would have been the least.
Only Microsoft seems to really understand backwards compatibility.
> A backwards compatibility switch to override the change would have been the least.
They changed so much in the code base which broke compatibility with the old system that they would have pretty much had to do a hard fork and include both Firefox versions into the download in order to provide this; pretty much doubling maintenance cost as well.
On the flip side, they do now have an actual extension API, abstracting away from that code base, making it much less likely that such a breakage will be necessary in the future, and it especially also prevents those frequent smaller breakages that were commonplace with every new Firefox release up until 57.
> Some websites rely (at least in part for verification) on referrer for the authentication process, I can imagine that causing some headaches for people.
You mean some websites that fail to employ the best practice of “don’t trust external input”. Yes that applies to easily-spoofed headers.
Even so, it still passes along the domain, so that'd still be possible. I'd guess that nobody would need to validate past the domain (and/or subdomain) level.
Maybe they could add it as an advanced setting, but having it opt-out for normal browsing might not be desirable for the average user (if not sending headers breaks popular sites).
Not exactly what you ask for, but just as a note: I had to add Polish online train ticket service to the whitelist in Smart Referer add-on, as it always fails with meaningless "invalid site call" error otherwise :P
I think it's due to dubious "security" implementations, including at least one that was (?) present in a rather notorious PHP message board software package [1]. I'd be surprised if this survived all these years later, but then I've also seen some custom session handling code in my travels that did something similar even as recently as 2-3 years ago.
Some image boards do it to prevent casual hot-linking, as my sibling poster notes.
This is one of the reasons I tend to flip out completely whenever I see healthcare providers and their suppliers run Google Analytics tags inside their logged-in areas (yes, this really happens). Besides the questionable value of having such tracking inside the logged-in areas (it's healthcare, they are not going to worry about their conversion rates), such information should simply never leave the premises. Better still if they didn't do this only in private mode but always. Private mode is still associated with doing something sneaky, rather than being the default.
Happy to see FF do the right thing here, and I'm really curious whether Google will follow suit. Microsoft and Apple have an opportunity here to show they care more about end-user privacy than Google does.
First, you're totally right and I completely agree. That said, check if those sites also set CSP headers (see https://content-security-policy.com). All modern browsers support them, and it lets you set granular policies like "send full referers to my own server, but only over HTTPS", "HTTP connections (which I shouldn't be seeing anyway, but to be thorough) to myself only get the host and path, not the query string", and "connections to any other domain should include only my hostname in the referer". It's quite nice actually.
> it's healthcare, they are not going to worry about their conversion rates
Monitoring conversion rates can be used to find out whether people are actually able to use your web service. The goal of a “conversion” doesn't have to be a sale.
But I agree that if you're going to do this sort of tracking, it definitely needs to be private.
I hope there's a court case soon where the court rules that sending a whole load of business-sensitive data to Google, Microsoft and Apple actually does breach a non-disclosure agreement.
There will definitely be such a case in the EU, that GDPR has some pretty impressive fangs and either a healthcare provider, an ISP or an insurance company is going to make an excellent example.
Coming to a courtroom somewhere in Europe in 2019.
In about:config, setting 'network.http.sendRefererHeader' to 0 (default is 2) will stop the referer header from being sent, and the document.referrer from being set. See http://kb.mozillazine.org/Network.http.sendRefererHeader for more information.
That page is quite outdated (2007), a better reference for about:config referrer options is https://wiki.mozilla.org/Security/Referrer (which is linked to at the bottom of the blog post about FF59 being discussed).
It seems that Firefox 59 will effectively force about:config's network.http.referer.XOriginTrimmingPolicy to 2 (default is 0) when in private browsing.
Disabling the referrer header completely can prevent you from signing into some sites unfortunately. Another option in Firefox is to use network.http.referer.spoofSource.
In the URL bar, type:
about:config
Search for the following setting:
network.http.referer.spoofSource
Double-click to set its value to true. This basically sends the destination or target URL as the referrer.
I have not had issues thus far signing in on sites, though that might just have been dumb luck. I appreciate the heads-up and the additional information, thanks!
Similar for Chrome, there's a flag at chrome://flags/#reduced-referrer-granularity
Description: "If a page hasn't set an explicit referrer policy, setting this flag will reduce the amount of information in the 'referer' header for cross-origin requests."
This is largely the fault of independent contractors and subcontractors. Contracts go to the companies good at winning contracts, not necessarily the best company to do the job. The contractor takes an obscene profit for providing no value and then subcontracts the project to various subcontractors who may or may not employ actually qualified and skilled engineers.
The government employees managing the contract typically do not have the expertise to evaluate the project or write proper specs. The HealthCare.gov contract was a mess of incompatible buzzwords.
The engineers have no vested interest in the project as they're only there to complete that contract, and they're so many levels removed from the government agency that no one actually knows who they are, so it won't reflect poorly on them when everything turns out badly.
Because their career doesn't really depend on the success of the project, as the government gets blamed for contractor failures while contractors get the credit for success, they don't really need to do more than meet the specs. A better way to do this would be to expand the number of engineers within the government through groups like 18F and USDS, and give preference to them over private industry.
Private contractors rarely work, but even when they do it's only when you have expenses that the government doesn't need (such as contracting a machine shop or car manufacturer to build something with their preexisting infrastructure). In software though, your only expenses are really your engineers and the cloud (as no one needs to run their own data center). The only thing subcontractors can do that the government can't is pay their employees more than the GS scale. However because the contract is supposed to be cheaper than the government just hiring employees themselves (as industry has "profit motive") they're going to have to cheap out elsewhere, either by hiring fewer developers or neglecting parts of the development.
All of this is solvable by Congress, simply boost pay flexibility, but there's no political motive to fix it as all of the contractors are political donors. As a result, government software sucks.
That can be extraordinarily expensive once it leaks out that classified government data is in the hands of an uncertified third-party cloud in some other nation, and you have to rush and pay two or three times more for the contract to be changed so you now have a local, certified supplier. This is what happened here in Sweden in the department equivalent to the DMV, which later implicated a further 40 different government departments that used the same practice.
When the costs go up by 200%-300%, suddenly the idea of running your own data center sounds much cheaper. It ended up being the highest single cost the departments had, excluding salaries and rent. You can get quite a nice data center for those billions.
The example you bring up in Sweden is exactly what I'm talking against. Stuff was outsourced to other companies for the lowest bidder, who were not held remotely as accountable as the government.
>IBM took over the agency's IT operations, and "IBM used subcontractors abroad, making sensitive information and an entire database of Swedish drivers’ licences accessible by foreign technicians who did not have the usual security clearance".
IBM used subcontractors, which is the profit maximizing stuff I'm talking about. When you pass stuff off to a for-profit corporation, they're going to do what they can to maximize profit, even if it screws the government over, because people will blame the government, not them.
>When the costs go up by 200%-300%, suddenly the idea of running your own data center sounds much cheaper. It ended up being the highest single cost the departments had, excluding salaries and rent. You can get quite a nice data center for those billions.
The costs never were lower though. They just looked lower on paper because the bill was less. But they weren't actually getting what they paid for.
There are already cloud providers certified for government use (at least in the US). But you don't need to pay a company to pay some other company. Government employees can do that fine.
You're not wrong - there are a lot of crappy contractor teams out there working for the government... just like in the private sector.
But I've never seen a gov. contractor purposefully add analytics code.
It's far more likely that one of those free frameworks, ui-kits, or fonts, benevolently provided by one of the privacy-invading Silicon Valley behemoths, ended up in the code base.
Anyway, there's a good reason that the government doesn't hire their own developers. Hiring a GS14 (at least), who writes code all day, is going to end up being far more expensive than a contractor after paying the lavish benefits, pensions, etc. which federal employees receive.
Furthermore, most government projects are only a few years long. The government uses contractors because they can get rid of the dev teams when they're finished with the project. Can't do that with gov workers.
Every decade or so, there is a push to use less contracts and hire more in-house Federal workers. And then the payrolls become bloated, and the next administration goes back to less feds, more contractors.
>after paying the lavish benefits, pensions, etc. which federal employees receive.
Federal benefits are not that lavish on an international scale. You get things that are considered human rights in other countries and a good pension.
And while the benefits may cost the government more in the long run, the subcontractor is still taking a profit off what they're paying developers. I'd rather tax dollars go to the actual worker than to some corporation.
>Furthermore, most government projects are only a few years long. The government uses contractors because they can get rid of the dev teams when they're finished with the project. Can't do that with gov workers.
That's part of the problem though. If you're on a contract, you know you're expendable. You have no skin in the game other than doing the bare minimum to not get fired because the actual client (the government) is several layers removed from you.
And getting rid of contractors after you're done with them doesn't really add up for software. For one, many contractors work as contractors for years, moving from project to project. You can do the exact same thing in-house. Groups like 18F and USDS mean that you can move employees around as necessary.
> Contracts go to the companies good at winning contracts, not necessarily the best company to do the job.
This is worth a deeper explanation. "Good at winning contracts" involves a lot of things that have nothing to do with the ability to do the job. From legislated preferences for "minority- and women-owned" businesses (which is usually a farce), to kickbacks to the bureaucrats who award the contracts, there are a LOT of reasons why government hiring "private" companies to do work goes wrong.
> This is largely the fault of independent contractors and subcontractors. Contracts go to the companies good at winning contracts, not necessarily the best company to do the job. The contractor takes an obscene profit for providing no value and then subcontracts the project to various subcontractors who may or may not employ actually qualified and skilled engineers.
Frankly, I don't think that's really the case / honest here. There's been a movement to create "readable URLs" over the last few years, how many people considered that this could leak information through referers? I really can't say I remember seeing any discussion of that issue.
Clearly not enough people considered that it'd leak information, but awareness of it was sufficient that most mail providers and large portals had redirect servers to "clean" URLs of meaningful referrers from their pages already by 2000 or before.
So I agree it's probably an issue that people still don't understand this, but it's not a new issue, and there's really no excuse for not knowing this.
TL;DR of what you wrote: basically, the government doesn't know what it is doing and can't be bothered to change. If that is the case, I doubt anything you do can change that unless you hold the government accountable for it.
For this purpose I'm separating government employees and legislators. While government employees may not know specifically about software, that makes them like most clients. You need PMs and engineers to figure out what to do as the project progresses.
When there's a subcontractor involved though, there's an additional profit motive involved as well as requirements legislators set up to maximize the profit for the subcontractor.
It also means requirements need to be written out in advance and in a way that may not be optimal. When you're working between employees of the same organization, that's a lot less of a problem, at least as long as legislators don't put plenty of hurdles up.
What of the story that there was later a good team who turned it all around (https://www.theatlantic.com/technology/archive/2015/07/the-s...)? It's just one article, but I had heard something about the team also working successfully on other government projects. Doesn't rule out mistakes/lack of incentives going forward, but I'm certainly left with a good impression of them.
They were specifically using a monitoring tool (Floodlight, for tracking conversions) that happens to be provided by an ad company. To my view, it's more of a security issue than a privacy issue, as DoubleClick was prohibited by TOS from using the data for advertising.
If Mozilla genuinely prioritized its users' interest it would block ads and tracking networks, which are the major way people's private information is leaked and also a primary vector for hacking.
And yet for some mysterious reason Firefox hasn't broken ranks with Google by incorporating ad blocking. Even though it's an obvious major feature and Firefox is losing market share every year.
We know why Google won't prioritize the interests of Chrome users but why is the only major independent browser seemingly corrupt in the same way?
Mozilla should be helping society by pushing it past an era of internet advertising and the clearly terrible clickbait-fake-news culture it creates. And yet, it does not.
Is Google using the money it pays Mozilla to "discourage" Firefox from going forward with ad blocking? As a concerned citizen, I sent an email to antitrust.complaints@usdoj.gov requesting an investigation. Anyone with insider info should send it there.
If Firefox disabled ads by default, a large number of sites would likely block Firefox or set up angry messages telling the user to switch browsers. Yes, you could bypass that, but most people wouldn't bother. Add in the logistical issues, and the fact that some of the things blocked are beneficial, then I see no reason to expect the conspiracy you suspect.
If a major service blocks Firefox users it could lead to a massive boycott. Users could rally around Mozilla if it stopped selling out their interests.
More people use ad-blockers than Firefox has users. The best way to attract more users is to make the best possible browser. That means incorporating ad-blocking as users have loudly demanded for a decade.
> If a major service blocks Firefox users it could lead to a massive boycott.
... of Firefox.
Most people don't care as much about their browser choice as about the ability to access those major services. I would continue using Firefox, but only because I don't care about most of the popular sites. The majority would switch browsers in a heartbeat.
It wouldn't lead to a massive boycott, most people barely care about browser choice. And what would a boycott accomplish? If they're blocking you for not giving them ad revenue, your refusal to go there changes nothing.
Over the past decade, I've had to switch ad blockers several times to deal with performance and privacy issues, and I still have to deal with a ton of broken sites that would make most people just assume their browser was broken. The logistics of including the blocker in the browser are a nightmare, and I doubt the results would lead most people to call it the "best browser."
Webpage owners need a way to make money. A whole class of webpages, those that are financed by ad revenue, would stop being tested/built against Firefox.
They might even block Firefox users from accessing their webpage, so that people switch to other browsers which do make them money. They have no real reason not to block them.
There's a whole load of politics like this in being a browser vendor. Just because users demand a change, does not mean that it's the best for users. Users want broken webpages even less than they want ads, they are just too short-sighted to realize that blocking ads will lead to that.
And Mozilla really is already gambling hard with such things as Tracking Protection or this referrer blocking when Chrome has no such restrictions in place and has the majority market share. For many webpage owners, blocking Firefox users would probably not even drop their revenue by 10%. Leaving it unmaintained much less so, in the short term.
So, unless Mozilla manages to create perfect compatibility with Chrome, they cannot afford to block revenue from webpages. And creating perfect compatibility with Chrome would also mean blindly following all of Chrome's webstandard choices, ingraining tracking- and ad-supporting technologies into the web, which again would be shit for users in the long run. And really, just use a Chromium-fork, if this is what you're looking for. You don't need Firefox for that.
#1 this should be on by default. I might be missing something, but do sites really need the referrer? What would break if the browser sent the same page as referrer, or google.com/ or something similar?
Is there any value in the referrer to the client? The host can use it for a whole range of reasons - but apart from helping the host, what is the immediate benefit to the client?
#2 Won't this be possible to bypass simply by encoding more in the domain part of the url than in parameters? So you switch from a.b.tld/foo?p=123 to 123.a.b.tld/foo ?
You're right about subdomains but they are stripping both query and path, so 123.a.b.tld/foo would become 123.a.b.tld. It's fair to assume most misplaced sensitive data will be in the path or query, rather than the subdomain.
I didn't even think about the subdomain until I saw the image that is in the article here - which literally looks like it added an arbitrary number in the subdomain, duplicating the parameter number! When I saw the image I thought the article was actually about blocking some shady way of leaking that doubleclick had invented.
To answer #1, the referrer could probably be used to make a link refer to a login page on another domain, and to then let the host redirect to the correct page after login.
There are of course ways to do this by following your suggestion #2, but I suspect this might be why we originally had the referrer header.
Third party logins are definitely a use case. But you'd think that they would aim to do it with logins.somewhereelse.com/?u=source instead of relying on the referrer field as the source to go back to! I wonder how common actual use of referrer is?
Interesting, this is what the Referrer-Policy header is supposed to do, site by site. It makes sense to enable it in private browsing mode, though... and then you'll see how many sites break because they use the Referrer as some kind of authentication mechanism (yes, seen in practice multiple times).
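A minimal sketch of doing exactly that, site by site, assuming an Express-style Node server (the middleware shape is illustrative; the header value comes from the Referrer Policy spec):

    import express from "express";

    const app = express();
    app.use((_req, res, next) => {
      // Full referrer within my own origin, origin only to other sites,
      // nothing at all on HTTPS -> HTTP downgrades.
      res.set("Referrer-Policy", "strict-origin-when-cross-origin");
      next();
    });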
I mean we already have firefox plugins to permanently block the referrer. Which is great. But I applaud Mozilla for going privacy-first in a consumer package. I hope that eventually Mozilla will focus entirely on privacy and make good anti-tracking, anti-ads, anti-referrer, anti-cryptomining all default packages.
E-commerce checkout codes, etc. are the only reasonable form of referral. In other words, if I give you something voluntarily that tells you where I came from, fine; otherwise, why do we have so much auto-leaking built into protocols?
It was very handy for finding other people linking to your site, often people with similar sites to yours.
Also, I remember someone I know got an email that a page he was linking to was about to move. I guess this was only possible because of the referer header.
This. For this you just need the origin though. There has been an Origin header for 10 years which makes CSRF protection trivial, yet it's still not available in all browsers. Firefox can leak the referer in regular mode but can't send Origin? Not sure.
Does anyone know a good document on when/which browsers actually send the referer header?
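For concreteness, the Origin-based CSRF check mentioned above can be as small as this sketch (Express-style; the allowed origin is made up):

    import type { Request, Response, NextFunction } from "express";

    const ALLOWED_ORIGIN = "https://app.example";

    function rejectCrossSiteWrites(req: Request, res: Response, next: NextFunction): void {
      const origin = req.get("Origin");
      // Only state-changing methods need the check; reads are left alone.
      if (["POST", "PUT", "DELETE", "PATCH"].includes(req.method) && origin !== ALLOWED_ORIGIN) {
        res.status(403).send("cross-site request rejected");
        return;
      }
      next();
    }

(As noted above, not every browser sends Origin, so real deployments typically keep a token-based fallback as well.)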
If you have a giant traffic spike, it's nice to know where it came from. Eg. if you get linked on Hacker News, you can go to the thread and engage with potential customers.
That's not really a direct reason for the client to send them though. There are tons of useful things about referrers to the host.
Stripping everything but the domain should be mostly ok for the client though, so if I come from a.b.com/foo, it just sends b.com as referrer? Both "a" and "foo" can hold any amount of data so those would have to go. Sending b.com should be enough to provide traffic statistics?
Referral headers can be very useful for tracking down fraud and phishing activities.
Although tracking down fraud usually relates to fraudulent traffic, which mostly affects advertising, so I imagine a lot of the commenters here would say that is not a good reason.
They are useful to prevent the hotlinking of images/video from third-party domains, which can incur significant bandwidth costs for smaller websites.
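A rough sketch of that kind of hotlink check (Express-style; the domain name is a placeholder). As the reply below points out, it only works because well-behaved browsers send the header.

    import type { Request, Response, NextFunction } from "express";

    function blockHotlinking(req: Request, res: Response, next: NextFunction): void {
      const referer = req.get("Referer");
      // Allow direct requests (no Referer) and requests coming from our own pages.
      if (!referer || new URL(referer).hostname.endsWith("my-site.example")) {
        next();
        return;
      }
      res.status(403).send("hotlinking not allowed");
    }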
Referrer is also a client thing. Referrer based hotlink protection implicitly trusts the visitors' UAs. Same for Origin; it just offloads the actual check to the visitor. No big difference.
HTTP Auth, on the other hand, is actual password protection which cannot be spoofed by a malicious client. Very different from robots.txt.
This is just like traffic statistics: a help to the host, not the client. This is data sent voluntarily by the client, so there has to be some direct benefit to the client in sending it, or at least in sending any more than the domain name without path, subdomain and parameters?
Also paywall skipping, to track what sites are bringing you traffic, to pass fields to forms to report problems, to present targeted traffic based on use, and of course, advertising.
How are those answers supposed to convince me to keep the referer setting in my browser on? In fact, they're even more likely to convince me to turn it off.
network.http.referer.(XOriginPolicy|XOriginTrimmingPolicy|trimmingPolicy) to 2
network.http.referer.spoofSource to true
network.http.sendRefererHeader to 0
network.sendSecureXSiteReferrer to false
Wouldn't this make it obvious that the user is browsing in private mode? While I get that might be preferable to leaking information, it's also not an ideal solution either.
Not practically, no, since the approach is the same as setting the Referrer-Policy header to strict-origin-when-cross-origin. In most cases it can't be reliably deduced whether a visitor legitimately came from https://reddit.com/ (from the root origin) or some path on https://reddit.com/.
Disclaimer: I'm the Firefox engineer who wrote the patch and the blog post.
I'm very interested in this thread. Other replies here are correct - there are many ways that sites try to detect private browsing, and many ways they can get it right or wrong.
How do people feel about the "stealth" design goal of private browsing? Should it be a goal? What about a hide-in-a-big-crowd tactic? (E.g., how Tor tries to make all its users look identical.)
> How do people feel about the "stealth" design goal of private browsing?
I think this shouldn't just be a goal for private browsing, this should be a goal for browsing period.
Shouldn't the default be to just send the top level domain (if anything) of the source site as soon as you go somewhere else? The next site can't possibly use the complete url of the referring site for any (non-shady) purpose?
1. What does it matter if the creators of a website are aware that a user is using private browsing to view the site? In other words, what would they do with this information?
2. As it's possible to strip out referral information using other means, assuming there's a practical use for this "private browser user" information, what could the site creators do to guard against false positives?
There are some news websites that block private browsing [1]. Private browsing is a method of bypassing the paywall, so they make a private-browsing-wall.
probably checking whether the DNT header is set to 1, or whether a known tracking domain is blocked (firefox has tracking protection by default in private browsing).
Sites like healthcare.gov or banks should not include third party ads or analytics scripts. Referrer is not the only way to leak information from the page.
For the scope of requesting a document, there is no need of referrer nor useragent.
A lot of features/apps/websites have been built around the assumption that this information is sent, but it would be nice to start dropping it by default.
Color me stupid, but I thought all major browsers already stripped referrer info when navigating from HTTPS? The examples used don't make sense to me if that's true.
Setting `network.http.referer.XOriginPolicy` to `1` in Firefox’s `about:config` is actually a pretty reasonable choice for all browsing, and balances privacy with preventing your favorite sites from breaking.
1) It breaks some webpages, so they probably only want to have it where it's relatively clear that users will prefer their privacy over not having breakages.
2) It pisses off webpage owners, as it hampers their analytics and probably ad revenue. If webpage owners are too pissed off by Firefox or simply don't make enough revenue from it, they'll stop testing/building their webpage against it, which leads to broken webpages in this way as well.
This might be a dumb question but does whether a site uses SSL have any impact on browser behavior concerning query strings?
Seems counterproductive that my browser takes so much care to encrypt my query strings and then leaks them to any host from which the site I'm visiting happens to pull content.
Thanks, but this doesn't appear to offer equivalent features. RefControl allowed a default action overlaid by site (pattern) specific overrides.
One could block refer(r)er altogether, and then adjust on a site/resource basis as needed.
There is one current WebExtension-compatible extension that purports to do this, but when I tried it, it didn't want to cooperate with my configuration, despite adjustments. Further, it sucked Disqus comments into its local configuration dialog/page, something that I find... sucks.
Finally, it wasn't open source and didn't have a well-known provenance. All this didn't leave me feeling too confident in it.
P.S. uMatrix is supposed to provide layers of referrer control, but I haven't made the effort yet to switch over to it including switching some of my other points of configuration to use it instead.
I disable referers for all browsers. Firefox has them off completely, with Referer Control using a random one for Chrome. I only enable them for sites that absolutely need them (and that I need to use).
Well, yeah, the feature itself is not a particular accomplishment, really it's been in Firefox for years, you just had to enable it with some about:config-flag.
The big news is them enabling it by default in some fashion (that is when you're in Private Browsing), meaning that all users now have this, not just the 0.1% who understand referrers (and have not forgotten to enable this the last time they installed Firefox).
As a power user who knew about this, you might not particularly care, but for users in general it's great, while it pisses off webpage owners.
Then again, even as a power user it's impossible for you to know about all of these sort of config options, so you might care to use a browser which tries to help its users out while having to keep an eye on not pissing off webpage owners too much, rather than a browser that tries to maximize revenue for webpage owners while trying its best to hide all the ways it infringes privacy from its users.
For those confused as to why half the comments have "misspelled" referrer, here's an interesting bit of history:
The misspelling of referrer originated in the original proposal by computer scientist Phillip Hallam-Baker to incorporate the field into the HTTP specification. The misspelling was set in stone by the time of its incorporation into the Request for Comments standards document RFC 1945; document co-author Roy Fielding has remarked that neither "referrer" nor the misspelling "referer" were recognized by the standard Unix spell checker of the period.
Edit: Actually, that doesn't seem to be true -- although there's some usage of "referer" in English, most of the hits in Google Books prior to 1960 turn out to be for the Old French word "referer".
I don't think you can ever describe a person's name as spelled correctly or incorrectly. It's spelled how they spell it. It's their name, not a dictionary's. Variations in spelling are perfectly natural and for names in particular are incredibly common
I'm not "attacking" their parents who chose the name or anything like that, and any spelling is fine for a name, but there is a clear etymology to the name "Philip" that comes from "philos" and "hippos", someone who loves horses, and an indisputably historically correct way to spell it.
That a misspelling has become particularly common (or like for my own name, much more common than the historically correct spelling) doesn't make it anymore correctly spelt than "referer" in my opinion.
But if you disagree with the term "misspelling", I can formulate it another way: let's say that it's funny how the creative modern spelling "referer" instead of the historical "referrer" comes from someone who has a creatively spelt name "Phillip" where "Philip" was historically more common, and that both differ from the historical spelling on a double consonant. It's a much more awkward sentence though for such a trivial, passing remark.
I think it's being argued that describing a name variation as a "misspelling" -- as in, a mistake -- is incorrect. What does the etymology or historical popularity of "Philip" have to do with it? We don't know that he has that name because he truly loves horses or because his parents attempted to honor a king of Macedonia.
I know this is treading into the classic prescriptive vs. descriptive linguistic debate, but the reason why we can call "referer" a misspelling -- rather than a creative decision - is because the original authors seem to admit that it was unintentional. Fewer folks would be calling it a misspelling if the authors had meant to do it, e.g. to avoid a name collision with some other attribute named "referrer" or to honor a colleague named "Referer".
Edit: I'm glad you made this comment. I only know people with the name "Phillip" but I constantly find myself double-checking how their name is spelt before I refer to them (in text). I chalked it up to me being a bad friend but I guess it doesn't help that I might be seeing enough of the "Philip" variation to get confused.