Hacker News
I don't care about HSTS for localhost (github.com/ip2k)
107 points by seanp2k2 on June 3, 2022 | 95 comments



I feel like I'm going to be denounced as a heretic, but here goes: I don't care about HTTPS for localhost

When developing locally I'll aim to run without HTTPS / HSTS etc. Whilst I'm generally fairly passionate about narrowing the gap between local development and your deployed setup, using HTTPS locally often results in hours of yak shaving.

There. I said it.


I did the yak shaving and I am glad I did. I only needed to inject my own CA into Firefox/Chrome and my self-signed certificates now work like any other: no fiddling with about:config, no websocket mismatches, no apps complaining about not running on https. I can even curl and all that, since I added this CA to my machine.

edit: I only self-sign localhost subdomains (app1.localhost, www.site1.localhost, etc.) and each project has its own self-signed certificate (by the same CA) with the needed domains (usually traefik.localhost, www.site.localhost, api.site.localhost, etc.). localhost basically becomes my personal TLD.
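
A minimal sketch of that setup with plain openssl; all the names here (devca, site1.localhost) are illustrative, and tools like mkcert automate the same idea:

```shell
# Create a throwaway local CA (the part you import into the browser/OS).
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=devca" -keyout devca.key -out devca.crt

# Per-project key + CSR for a *.localhost subdomain.
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=site1.localhost" -keyout site1.key -out site1.csr

# Sign it with the local CA, including the SANs the project needs.
printf 'subjectAltName=DNS:site1.localhost,DNS:www.site1.localhost,DNS:api.site1.localhost\n' > san.ext
openssl x509 -req -in site1.csr -CA devca.crt -CAkey devca.key \
  -CAcreateserial -days 90 -extfile san.ext -out site1.crt

# The leaf now chains to the local CA.
openssl verify -CAfile devca.crt site1.crt
```

The per-project leaf certs are what the dev servers load; only devca.crt needs to be imported into the browser.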


And now any security mishap with your CA compromises your entire browser, because you can’t just trust a custom root certificate for “*.mystuff.com” without also trusting it for mybank.com.


Firefox appears to support[0] name constraints[1] in CA certificates. It even appears to have code that supports adding further constraints to the root certificates after they were imported, but that doesn't seem to be exposed anywhere in the UI.

[0] https://bugzilla.mozilla.org/show_bug.cgi?id=856060 [1] https://datatracker.ietf.org/doc/html/rfc5280#section-4.2.1....
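
For what it's worth, a name-constrained CA can be generated with a reasonably recent openssl (-addext needs 1.1.1+); the .localhost constraint here is just an example:

```shell
# A CA whose signatures are only valid for names under .localhost;
# a client that honors name constraints will reject anything it
# signs for other domains. (req -x509 marks it CA:TRUE by default.)
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=constrained-dev-ca" -keyout nc-ca.key -out nc-ca.crt \
  -addext "nameConstraints=critical,permitted;DNS:.localhost"

# Inspect the extension.
openssl x509 -in nc-ca.crt -noout -text | grep -A2 'Name Constraints'
```

Whether the constraint is actually enforced depends on the client, as noted below.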


That’s a small phish to spear! And if the CA key is kept on localhost, compromising it means you’ve already compromised my system.


If you're that much of a target, you'll find your devices hacked soon enough regardless.

I can't speak about your threat model, but "exfiltrating my private CA keys to phish my browser" isn't really something I worry about in practice.

For those still checking certificate validity, Firefox will warn you that the certificate used is not in the system database when you click the little lock in the address bar.

That said, I'd absolutely love a system where I could restrict my private CA to certain domains.


You can use name constraints on the CA, but they are a bit hit and miss when it comes to client support.

For a local CA with the CA only on one machine you're perhaps OK if you are careful, but once you share the server with a couple of colleagues you are potentially into a world of hurt.

On OSX you can choose "Always Trust" or "Never Trust" for various purposes (code signing, SSL, EAP, etc).

Why can't I have "Ask first time", or "Trust only for specific domains"?

Same with built in ones. That "Hong Kong Post" root CA raises some eyebrows with me, I'd love to set that to "Ask first time" on it.


I think you can mitigate this by deleting your CA key after signing a certificate for localhost. Sure, that means you can't sign new certificates, but that's not a big deal as you can just replace the CA on your desktop when the time comes.
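
A sketch of that step, assuming the CA's private key file is named devca.key (an illustrative name):

```shell
# Destroy the CA private key once all the needed leaf certs are signed.
# The public cert stays trusted in the browser, but nothing new can be
# signed with this CA; mint a replacement CA when the certs expire.
shred -u devca.key 2>/dev/null || rm -f devca.key
```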


I do not recommend using the same browser for everyday web browsing and for development. For one thing, you don't want an adblocker or other content altering plugins on your dev browser.

I like Firefox with Tree Style Tabs and an adblocker for everyday browsing, and some Chrome derivative with no addons other than Xdebug for development. Lately I've been using the Responsively browser for dev, especially if I need to do anything mobile.


I was in your camp for a long time, but after getting burned once I decided to change my dev env to be as close as possible to a production env.

It just takes Caddy, a domain you own (so you can get certificates via DNS challenge), and pointing those domains to 127.0.0.x in your hosts file. It is not a big challenge and it is worth it once you finish setting it up.
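
A hypothetical Caddyfile for that setup (the domain, the DNS provider plugin, and the upstream port are all assumptions; the DNS challenge needs a Caddy build with the matching plugin):

```
# dev.example.com points at 127.0.0.1 in /etc/hosts, but the DNS
# challenge lets Caddy obtain a real certificate for it anyway.
dev.example.com {
	tls {
		dns cloudflare {env.CF_API_TOKEN}
	}
	reverse_proxy 127.0.0.1:3000
}
```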


Localhost shouldn't look like production. You should have a remote QA environment for testing deployments etc that looks as close to production as possible. Ideally other people should depend on it for testing their software so you have motivation not to break it.


This logic makes no sense, because surely you want to catch bugs as early in the process as possible, and the more your local environment looks like QA looks like prod, the fewer issues you’ll run into when running your app in a different environment.

Why would you wait for your development cycle to slow down from seconds to minutes to catch problems that could be caught beforehand?


Your local environment is not the same as localhost. Having your dev environment set up like a production env from this point of view doesn't mean you don't have a remote test env; it means you can debug stuff that only breaks on that test env, or, even better, finish your work without that problem ever showing up.


Testing is great, but developing with the same characteristics is important: some behaviours are different when you use https.


Lots of stuff like service workers will only work over https


localhost is considered a secure context by browsers so service workers will work from there even without https.


Although this is true, it's also true that not all 'secure' functionality is enabled for localhost without HTTPS.

One such example is secure cookies.

There's a longer list here: https://web.dev/when-to-use-local-https/


The cookie one is the only semi-legit one. And it would be kind of weird for setting the https-only flag not to mean what it says.

Everything else on that list is "you can't test https without https". How could you possibly test mixed content without using https? HTTP/2 is so tied to TLS that the insecure version, which nobody has implemented, isn't really the same thing. Etc.


You’re right. But my point was just that although localhost is a secure origin, there are still differences between localhost and sites loaded over HTTPS.


I always worry someone is going to take this away, but it seems they haven't yet, and all is well so far.


localhost is in the Secure Context specification. It won’t be taken away.


Specifications can change.


Except I'm trying to debug a mobile webpage so it's not localhost


Well sure, it’s localhost. Transport security is ridiculous overkill in that case. The benefit of doing it is eliminating one more variable between dev and other environments.

One may certainly decide that benefit is not worth jumping through too many hoops, but in any case the point of doing it is not the actual TLS.

At this point browsers have all sorts of behavioral differences between secure and insecure, so you’re kinda just choosing which poison to drink: “wow this setup is a pain” vs “why does this work locally but not in staging”


Pro tip: you can use any subdomain of localhost (like app1.localhost) to get a separate origin that still resolves to 127.0.0.1

I've found this most useful for testing CORS and similar web features that depend on the origin, but I guess it could be helpful for HTTPS-related things too.
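
A quick way to see the separate-origin behavior from the command line; the port and the app1/app2 names are arbitrary, and --resolve pins the names in case the local resolver doesn't map *.localhost to loopback:

```shell
# Serve the current directory on loopback.
python3 -m http.server 8000 --bind 127.0.0.1 >/dev/null 2>&1 &
SRV=$!
sleep 1

# Two different host names (origins), one loopback address.
curl -s -o /dev/null -w '%{http_code}\n' \
  --resolve app1.localhost:8000:127.0.0.1 http://app1.localhost:8000/
curl -s -o /dev/null -w '%{http_code}\n' \
  --resolve app2.localhost:8000:127.0.0.1 http://app2.localhost:8000/

kill $SRV
```

Both requests hit the same server, but the browser treats app1.localhost and app2.localhost as distinct origins for cookies, CORS, and storage.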


Or even better, you could just route anything to 127.0.0.x locally. E.g. replicate the actual production deployment locally.


Wouldn’t that require having an HTTPS stack and valid production certificates locally?


You don't need valid production certificates, just valid certificates for that domain name, signed by a CA you trust. The nuance is that you can use a local CA to get a valid certificate, rather than using a production one (which you don't want to have on your computer - I mean the private part of it of course)


Yes. The browser cares about names (a number is a name, but a name isn't just a number). The loopback is special (it has "Secure Context" and thus gets the same privileges as URLs with HTTPS schemes), and most systems ensure that the name localhost is always defined to be the loopback, so that name gets to be special too. But some.other.example, even if it looks up as 127.0.0.1, is not special: the browser expects the server it's talking with to prove it really is some.other.example, which it probably can't do.


That’s how I do it too. I think at some point I had to because of ServiceWorkers.


I find it kind of odd that there is no canonical, "POSIX" location for a TLS host certificate. In the lower parts of Web3/CSS/TLS/TCP/IP there is /etc/resolv.conf, /var/log/messages, /etc/ssh/ssh_host_rsa_key, and so on and so forth, but no "/etc/hostname.pub" that the Apache and nginx of an ideal parallel world would both look up by default, or a DHCP Option 666 "local certificate issuer IP address". I was just bored enough to watch someone set up Solaris on QEMU, and that makes me think that had SUNW existed today, that's definitely how they would have done it.

But no, it's in whatever path is specified in /etc/apache2/sites-enabled/virtualhost12345.conf, which can be /etc/letsencrypt/whatever/subdirectory/hostname.pem or /usr/ssl/certs/dynamic_file_name.cer, and in many other cases it's whatever `docker exec stout_kaltsit cat /ssl-cert-private.key` yields. No one even agrees on whether it sits in /etc or /usr or /var or somewhere mapped deep down.

The whole of TLS is just a hindrance that throws errors that need to be hastily cleared when encountered, because the Web server stack is a bunch of afterthoughts, and that's what is showing.


HSTS doesn’t apply to websites served by directly connecting to an IP address, right?

(I couldn’t find any layman’s docs that said it in so many words, nor did I want to test it locally. My guess comes from a reading of section 8.1.1 in the HSTS RFC[1].)

I’ve been using 127.0.0.1:4000 or 0.0.0.0:4000 for local web development for a while now and have not really been held back.

Maybe people who have fancier local development setups can’t use an IP address for some reason and instead have to use localhost? But it seems to me like the easiest workaround is not to load some formatted plist file into chrome but rather to rewrite the address bar slightly.

[1] https://www.rfc-editor.org/rfc/rfc6797#section-8.1.1


That's correct, HSTS doesn't apply to IP literals. You can check with https://1.1.1.1/. The server does send an HSTS header:

  $ curl -sS -D- -o/dev/null https://1.1.1.1/ | grep -i strict-transport-security
  strict-transport-security: max-age=31536000
but you won't find 1.1.1.1 in chrome://net-internals/#hsts, and you can still directly request http://1.1.1.1/, which returns a remote 301 response. Unlike for an HSTS domain, for which you get a local 307 response with a Non-Authoritative-Reason: HSTS header (in Chrome).


I absolutely care. About how absolutely cursed HSTS is as a concept. It's basically the EME of TLS.

The idea that they put it right in the specification that browsers were prohibited from allowing users to bypass it, even if they know what they're doing, fully moved browsers out of the "user agent" category.


It's not really obviously documented, but you can type “thisisunsafe” when you get the unpassable HSTS screen and it will bypass it.


Is that a Chrome thing? I'm a Firefox user, generally. But my understanding is if the browser allows any sort of bypass it is not compliant with the spec.


Yes, Chrome only AFAIK. I'm on Firefox as well, but sometimes I have to switch because the FF debugger will give up in certain cases.


Yes, I have had this discussion with Google staff. Their opinion is that "thisisunsafe" is not really a bypass even though that's obviously exactly what it is and exactly how it is used.

Historically there have been several phrases used, with changes once every few years, and the weak argument is that people who go to the bother of learning each new phrase, plus the fact that the phrase tells you it's a bad idea (one of them literally did), would bypass this anyway. But, well... would they?

Human psychology doesn't work that way. People get into the habit of typing whatever the magic phrase is and then they're astonished that it was a bad idea even though it just said so. You can't build effective security systems on such foundations.


This is because we cry wolf too many times. When was the last time you cared about the SSH message REMOTE HOST IDENTIFICATION HAS CHANGED DANGER DANGER SPOOKY SCARY? I bet never, because 100 times out of 100 that message is due to a misconfiguration on the remote host, or because someone terminated the instance and it came back with autogenerated keys.

Same with TLS errors. I have never once encountered a single instance of someone trying to intercept my connection but I’ve encountered hundreds of misconfigured but otherwise perfectly functional servers if you just ignore the errors.

You can’t really blame users when you hide literally all the details that would allow them to make an informed decision about whether they should hit “It’s Fine False Alarm” or “Oh Shit Got Em” and then be surprised when people hit the false alarm button without thinking when it’s always a damn false alarm.

We would do so much better if we had screens like "Hey, the cert the server sent is otherwise valid but expired 5 minutes ago, is that cool?" or "The server sent a certificate for bloop.domain but you connected to blorp.domain", with options like "Seems Sus", "My b, it was a typo" and "Damn, autocorrect gottem."

Like we have absolutely zero reasonable sense of security and risk as anything other than perfectly secure and defcon 69.


This is where I fundamentally disagree with a lot of security folks: if the user is really, truly, absolutely sure that they want to shoot themselves in the foot, you should let them. It's their life to live.


Only Mother Nature gets to make rules nobody can disobey, so of course the user can shoot themselves in the foot. But, we needn't provide them with the gun, or the bullets.


I can also take a picture of whatever sensitive information is on the screen / in the console / dev tools / and send it to hackers for fun.

Not sure how your security model will handle that.


Firefox doesn't allow you to bypass it at all in recent versions. One of the many, many reasons not to use it. Less power to the user.


Firefox used to have bypass functionality but I don't think it has it anymore since the TLS error pages were redesigned.


HSTS is weird. It does 2 completely separate things:

1. Automatically redirect from http:// to https://.

2. Make it difficult to bypass the certificate warning screen.

1 I think is very good. 2 is questionable.


Yeah, I have no issue with a site indicating it only wants to talk over HTTPS. But to disable my ability to proceed even if I know what's going on with the site in question is constantly irritating.


I also dislike HSTS. I modified the .so file so that it does not recognize the Strict-Transport-Security header. (There are many other features that are also bad, but HSTS is especially bad.)

The browser MUST allow the end user to override EVERYTHING (and assume that you know what you are doing, instead of doing things for you differently than what you asked), and then it will be good.


EME of TLS?

They put that requirement in there because HSTS is pointless without it. The website is literally saying "we will always (w/ expiration) have valid TLS, if we don't that's a problem". Allowing users to bypass it allows criminals to go "oh we're having problems with the cert, just type 'badidea' and click yes to continue to be hacked".


A user agent must work for the user, not the server. Obeying the server over the user's intent is malicious design.

And if that makes it pointless, then just remove it entirely. Believe it or not, I've never seen a cert error in the wild that wasn't an expiration of a valid cert or a misconfiguration.

The boogeyman of MITM attacks which PKI certs protect from is used to justify a lot of terrible changes to the web that aren't reflected by reality: In most cases they're just going to hack the real server and serve malicious content from your valid certificate anyways. Or they'll trick someone into giving their credentials to bonkofamerica.com because people are easy to fool. Why MITM Amazon when people will happily treat an order email sent from a Gmail account as legitimate?


> I've never seen a cert error in the wild that wasn't an expiration of a valid cert or a misconfiguration.

I have. Usually caused by a captive portal.

> The boogeyman of MITM attacks which PKI certs protect from is used to justify a lot of terrible changes to the web that aren't reflected by reality.

The move to use HTTPS everywhere was started in response to packet sniffing tools like Firesheep. That’s not a boogeyman; it’s a proof of concept that works in realistic scenarios.

> Why MITM Amazon when people will happily treat an order email sent from a Gmail account as legitimate?

So what? How about solving both problems?


Captive portals aren't malicious. They're arguably helpful. But I've never seen a captive portal using fake certificates either.


Whether they’re malicious or not, I don’t want to send them the session cookie for an unrelated website.


So scope cookies to the SSL certificate instead of the domain name, or simply offer to clear them for a domain whenever you bypass the HSTS on one.


> So scope cookies to the SSL certificate

And invalidate every user's session whenever the server's certificate is renewed??


> But I've never seen a captive portal using fake certificates either.

I've never seen a captive portal using a valid certificate either. Not that I've seen many captive portals (last time was like... 2018?), but still.


>using fake certificates

What's the definition of a fake certificate? Self signed? Signed by a real CA, but for a different domain (the captive portal operator's generally)?


>I've never seen a cert error in the wild that wasn't an expiration of a valid cert or a misconfiguration.

Here's one example:

https://www.engadget.com/2018-04-25-hackers-dns-phishing-sca...


Preach! Power to the User!


>Allowing users to bypass it allows criminals to go "oh we're having problems with the cert, just type 'badidea' and click yes to continue to be hacked".

Are you using this as an example of how HSTS is helping users now? Because Chrome allows you to type 'thisisunsafe' and you'll get through the warning, regardless of HSTS.


>HSTS is pointless without it

No. If I click an http:// link, I want that link to be upgraded automatically to https:// if possible so that MITMs can't read or modify the request or response. I would still get that benefit if the browser made it easy to click through certificate warnings.


If you're being MITM'd then you'll get a certificate warning. If it's easy to click through those then it's easy for MITMs to read or modify the request.


I'm talking about my own safety. I won't click through it.


If you won't click through it, then why do you want the browser to let you click through it?


Did I say I want the browser to let me click through it?

However, in general I think the browser preventing the user from doing something the user wants is a bit offputting. The browser is a "user agent". It should act on behalf of the user. If the user wants it to do something, it should do that. This case is tricky, because sometimes what the user really wants is not what the user is asking the user agent to do. It's an xy problem. I think Chrome's current behavior strikes a nice balance.


Related: Firefox will restrict the behavior of loading scripts and assets via file:// from other file:// scripts.

See https://stackoverflow.com/questions/58067499/runing-javascri..., and I think there is another flag you need to disable as well.

Why is that? I definitely get blocking file:// scripts from any other protocol, and even blocking file:// scripts outside of the webpage's directory. But if you can get a user to open a webpage on their local machine, in the same folder as sensitive data, you might as well just get them to run an arbitrary program.


~/Downloads/ typically contains both the random downloaded web page the user just opened, and the user's bank statements.


In general that's not true. Folks save web pages to their desktop or downloads folder all the time, when opening a saved Wikipedia .html file people don't assume it can then read every other file on your desktop. Worse would be if you saved it to your Documents or ~.

Web pages are assumed to be "safe" by users, like a pdf or a png.


Safe like a PDF is probably a good equivalent :)


Please consult the CVE database for numerous citations on PDF files.

Furthermore, PDF also runs JavaScript (their kind of JavaScript), depending on where you downloaded it from.


An exploit in the renderer isn't the same as supported dangerous behavior.


https://googleprojectzero.blogspot.com/2021/12/a-deep-dive-i... is the only case in the past few years that I'm aware of where a PDF vulnerability was actually exploited before it was widely patched, and that was a highly targeted attack.

https://cve.mitre.org/cgi-bin/cvekey.cgi?keyword=adobe+reade... shows 4 potentially exploitable vulnerabilities + 1 that lets the attacker check if a file exists.

I couldn't find any semi-recent ones affecting Firefox' PDF.js.


Do other Chromium browsers (Chromium, Brave, Edge, etc) have this same problem, and does this solution apply to them?


Wait, Chrome is now requiring HSTS for localhost? This is absolute madness. Somebody needs to sit down with the people responsible for this and explain to them that this is not solving anybody's problems and is just making life more difficult for everyone.


This sounds like: you create a custom CA, import it, create a cert for localhost, connect Chrome to https://localhost, and the server sends an HSTS header that Chrome accepts.

Which is something you probably shouldn't be doing in the first place.

Browsers aren't supposed to accept HSTS on self-signed certs so connecting to a self signed localhost shouldn't do this.


> Browsers aren't supposed to accept HSTS on self-signed certs so connecting to a self signed localhost shouldn't do this.

There's nothing against self signed certificates working with HSTS at all. It's perfectly fine for browsers to accept HSTS regardless of who signed it.


I think the point is that if you add a temporary exception to accept the unverified certificate, you shouldn't be left with a permanent requirement that localhost have a certificate in the future because of the HSTS header you got during that time.


> There's nothing against self signed certificates working with HSTS at all.

Actually, there is.

Section 8.1 of RFC 6797 opens with:

"If an HTTP response, received over a secure transport, includes an STS header field, conforming to the grammar specified in Section 6.1, and there are no underlying secure transport errors or warnings (see Section 8.4), [...]"

Section 8.4 then goes on to define "errors or warnings" as including any errors caused by UA certificate validity checks.

Additionally, section 14.3 opens with:

"The user agent processing model defined in Section 8 stipulates that a host is initially noted as a Known HSTS Host, or that updates are made to a Known HSTS Host's cached information, only if the UA receives the STS header field over a secure transport connection having no underlying secure transport errors or warnings."

(and then goes on to provide the rationale for this decision)

> It's perfectly fine for browsers to accept HSTS regardless of who signed it.

No, it isn't. This enables active attackers to cause a permanent denial of service even when you subsequently move out of their reach. That's the rationale.


    Wait, Chrome is now requiring HSTS for localhost?
If you point it at https://localhost/ and the cert is valid (e.g. you installed a private CA and used it to sign a cert for localhost) and the site serves an HSTS header, yes. Like any other hostname.

People who set up an HTTPS dev environment and have it send an HSTS header and then get annoyed when it behaves exactly as designed (and they know it) are rather out of touch, in my opinion. If you don't want the browser to force HTTPS, don't tell it to.


localhost isn't solely the property of whatever is serving that site at that time though. You're at the mercy of one app doing it then breaking everything else for you.


> This is designed to work with Chrome >=78 on macOS

Chrome 78 is like 25 releases ago, so nothing new here; they're not requiring HSTS for localhost now, or ever, hopefully. Whoever the hell decides to force an HTTPS connection to localhost with an HSTS header has themselves to blame. localhost is a secure context without HTTPS.


I have a question. If there is something on localhost, why do browsers like Chrome scare you into "proceed with unsafe anyways"?

It's not like I care about a MITM attack on my own computer. And what if I am on 192.168.x or 10.0.x? Isn't that inherently non-internet access, so why don't these scary warnings ignore local devices? I know I can set up a CA for my nginx or Apache test, but why? What benefit, other than "inculcating a habit"?

I mean, I run Home Assistant and Grafana on my local network, and Android often tells me it's "unsafe".


So for RFC 1918 addresses (10/8, 192.168/16, 172.16/12) I would argue that it is unsafe, or at least the browser/machine can't tell that it's safe; AIUI there's generally nothing on a standard home wifi network that would stop one device (coffee maker, visiting cousin's cheap unbranded tablet) from watching all local traffic (definitely recording, not sure about spoofing). So it's an unlikely threat model for most people, but it is real.

Actual localhost traffic that never leaves your machine.... yeah, I can't think of a case where that would ever matter. If something can intercept that you have bigger problems:)


>Actual localhost traffic that never leaves your machine

Unless you run

  ssh servera -L 8080:serverb:80

I sometimes do this if there's a firewalled serverb that I can't access that's running a webserver, and a non-firewalled servera that I have ssh access to and that can access serverb.

Then you can open http://localhost in your browser and talk to serverb. If you want HTTPS to work, then ideally you'll map serverb to 127.0.0.1 in your /etc/hosts so that its HTTPS certificate matches the host, or use --host-resolver-rules="MAP serverb 127.0.0.1" as a Chrome commandline flag. Of course then you're no longer using localhost in the host.


Come on, by that point you are explaining something weird. My question is this: if I set up nginx/apache2 on my local network to serve a webpage, or I have a Plex server or something similar, or say Nextcloud or whatever people self-host these days, why should I be forced to have HTTPS?

That data won't be leaving my subnet, if anything at all, so what's the threat model for a local-only service?

Also, I am not talking about "critical infra".


You don't need to enable HTTPS for those use cases. Your Plex and Nextcloud will work just fine.

If you configure your server to send a HSTS header, though, you're telling your browser to only trust HTTPS connections for that domain from then on. That's what's happening here, and that's something you just… shouldn't do, I guess? If I tell my browser to permanently redirect localhost to Google.com, there's no reason why I should be mad at my browser for listening to my perma redirect.
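
As a sketch, the header in question is set (or, crucially, not set) in the server config; in nginx it would look like this, with max-age being the lifetime of the browser's pin:

```
# nginx: sending this pins the browser to HTTPS for this host until
# max-age elapses; for purely local services, simply leave it out.
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains" always;
```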

HTTP traffic is a bigger problem in huge, flat, corporate networks running intranet services with routes spanning several locations. At any time a hacker could be listening in and exfiltrating company logins. Also think about the Snowden slides, where the NSA intercepted unencrypted traffic over Google's internal network. Local network encryption is essential in those use cases and relatively easy to set up.


>my subnet

That's not localhost. One threat model is some IoT device you have attached to your wifi gets hacked. Or your wifi has a weak password and it gets hacked. Or a guest that you let onto your wifi has a devices that's been hacked.


Look, isn't the responsibility of preventing the coffee maker from accessing your local data on the admin?

>So it's an unlikely threat model for most people, but it is real.

Today, what kind of local network service can a person set up that people can intercept and snoop on? It's not like I am talking about accessing payment gateways or anything, just local services. If there is something that "needs" security, don't you think the technically inclined would have it on that and leave the rest as-is, because it's a bit more effort for what benefit?


It’s solving a very real problem for enterprises. They want to run local agents that serve up content with a cert chained to an installed trusted corp CA. In general localhost is a pain to reason about security-wise; the best approach is to simply kill it (despite my personal nostalgic love for it). Use a proxy instead and forget it exists: the enterprise is now the owner of that domain.


Local agents can serve up content with proper domain names, which can resolve to anything, including 127.0.0.1. There’s exactly zero reason to use the insane setup of HTTPS + HSTS with localhost.

Plus, what’s the benefit of encrypting on a loopback connection? Who’s intercepting?


I'd be very annoyed if I were troubleshooting HTTPS/HSTS bugs and found out that certain headers were ignored because the target IP is localhost. It makes my life harder for no good reason other than to protect those who misconfigure their webserver from their own mistakes.


You have that backwards. Parent was stating that internal domain names can resolve to 127.x to enable HSTS. Specifically using "localhost" should be the exception, not resolution to 127.x. You could shove securehost into /etc/hosts as 127.0.0.1 to turn on HSTS, for example.
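
Concretely, the hypothetical /etc/hosts entry would be:

```
# "securehost" is an illustrative name; HSTS learned for it
# won't affect the literal name "localhost".
127.0.0.1   securehost
```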


How about no? Enterprise can use Chrome for Enterprise. Why are they using a consumer browser?


Let me tell you a secret: They're all just the same browser. Every enterprise policy applies to consumer Chrome installs. The only real difference is that consumers often get a user-folder-installed version which annoyingly doesn't require admin rights to install, and businesses generally properly deploy an MSI file that installs to Program Files.


So, now there's a good reason for them not to be the same browser, but rather be the same executable, but with different default settings.


I am alarmed at the new behavior of Firefox in the Ubuntu snap: "update your browser; close this msg within 13 days to avoid interruption", on a newly installed browser.

For the non-specialists, here is an overview of the topic: https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security



