Here is my unsolicited and unprofessional advice for this type of site:
1. Set up HTTPS on every site you run. No, really. That static 10 page info site for your church group? Yup, get it set up! The no-CSS blog from 1991 (before they were blogs)? Set it up! Even if you don't use WordPress (god, please tell me you are not running WordPress without SSL), and your site never lets anyone POST/PUT/DELETE/PATCH to it, remember that what people are reading is just as important. If I can hijack your site at the local coffee shop and serve malware, your readers will not be pleased. If I manage to do this in a widespread fashion, Google/Bing will blacklist your site and nobody will get to it.
2. Get a free cert! The dirty secret is that all certs are basically equal (EV and wildcard certs notwithstanding, though those are an entirely different matter). There are at least two places to get decent free certs: StartSSL and CloudFlare. If you want to protect something beyond your 10 page church website, get a cert from Namecheap for $8/year.
3. Use HTTPS-only. TFA is a great example: it's posted on a blog that can be accessed by both HTTP and HTTPS. If you leave this configuration, it's almost as bad as not having HTTPS at all. People don't type in "https://...". They go straight to "example.com" or they'll just Google "example" and click on the first link. Set up your server to redirect from port 80 straight to the canonical HTTPS version of your site.
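If your front end is nginx, that redirect takes only a few lines. A minimal sketch, with example.com standing in for your domain:

```nginx
server {
    listen 80;
    server_name example.com www.example.com;  # placeholders for your domain
    # Permanent redirect to the canonical HTTPS host, preserving the path
    return 301 https://example.com$request_uri;
}
```

Serve your actual site only from the `listen 443 ssl` block, so there is no content reachable over plain HTTP at all.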
If you are unfamiliar with how to set this up: practice. Get a Digital Ocean box for a few hours ($0.10/hour) and a free cert from StartSSL. Use a random domain name you own (you'll need a proper second level domain, but chances are you have one parked somewhere) and try setting up a site. It'll cost you as much as a single stick of gum and you'll know that much more about how to do it.
I don't disagree with any of the points you make, but it feels like you're advocating the author skips straight to acceptance in their experience of the Grief of TLS.
If we had more people sitting in the anger, depression and bargaining phases of this, we might be in a better situation.
I count myself in the depression phase: the whole thing is a chaotic farce. Most websites achieve a worse security level now than 1995 export-grade SSL was thought to then. The CA system has thoroughly discredited itself through dozens of compromises and refusing to operate transparently. The TLS standard is gigantically complex, and the IETF continually fail to competently improve it to the benefit of its users.
To quote Network: `all I know is that first you've got to get mad. You've got to say, "I'm a Human Being, God damn it! My traffic has Value!"'
I agree that the CA system is broken and that the setup is in some cases needlessly complex. I think that more people who run sites going through the process of setting up HTTPS will mean more people having exposure to it, and more change over the long term. I don't think opting out is an answer. I guess I am saying that first we collectively need to understand the mess before we can clean it up.
Honest question: is jumping through all the hoops to enable HTTPS really worth it for a personal static website? Are hijackings really that common? It seems like a lot of hassle for negligible benefits... plus it's not just a one-time thing, the best practices seem to change every few months and not following them can result in "very bad things". It's a heck of a lot easier to just run HTTP.
The first site you ever set up will take you 2-3 hours of screwing around. The next will take you about 20 minutes. After that, it'll take you 5-10 minutes per site. If you decide to go with CloudFlare, it'll take a different amount of time because you'll be setting up DNS through them as well.
It is worth it. Here's why: nobody is going to target your 10-visitors-per-month site directly. You are right, most people don't care. However, there are two types of attacks that will get you in trouble. First, where I decide to sit in a coffee shop and simply hijack every HTTP request. In this case, I am not targeting you directly, but you are susceptible. Even if you don't care about that (say, you know that none of your readers are coffee drinkers/public Wi-Fi users), a much worse situation is where a network attacker is able to attack a large number of sites hosted with e.g. a specific provider. Let's say I discover that Digital Ocean has a vulnerability where I can spoof your IP. I would then MITM all HTTP traffic to all DO hosts, and if you happen to host with them you are screwed. Note that Google doesn't care whose fault it is: they blacklist first, ask questions never.
So in short, it takes very little time/money, it's a skill you should have if you run your own site, and it's insurance against bad things happening.
Has Google ever blacklisted a site due to an attack like that? I see the risk, I'm just trying to understand how likely it is.
When I first read your scenario about hijacking my site via public wifi, it didn't strike me as very important... but after thinking about it for a few minutes, I do see the harm. Even if it's just someone screwing with my resume, I can envision situations where it could do a lot of harm.
And you do make a good point about the Google blacklist, the consequence of a Google blacklist is very bad. Even if unlikely, that alone is probably enough reason to enable HTTPS.
I've set up HTTPS several times on the small sites that I run, and probably spent about 6 hours on the process in my lifetime. Right after heartbleed came out, I switched to HTTP only. Now maybe it's time to redo the process and get it set up again...
I've only had a site blacklisted once. My father ran a WordPress blog on shared hosting and got hacked (probably weak password or vulnerability in one of the plugins or WP itself, who knows). His site was pretty quickly blacklisted, and even after he scrubbed it, leaving just a basic index.html ("we are coming back" type thing), it stayed blacklisted for at least several days. I am sure others have more experience with this, I've just been lucky.
I believe the point OP was trying to make is "if it hurts, do it more often" [1], hence it's worth setting up HTTPS for a personal static site not due to hijackings but to practice the best practices.
I agree, but with only nginx (http only) and sshd public facing services, it's usually a very quick and easy update. Dealing with https vulnerabilities can make it a lot harder to keep up, especially when the fix is not as easy as simply upgrading the software and restarting the service.
Don't back up or copy your private key. Should you lose your private key, or should it get compromised, generate a new one and issue new certificates. And naturally, also revoke your previous certificates.
And before you start pushing out SSL on every page you have, stop using public WiFi. They will always be insecure, no matter what you do. Tether your phone or use a VPN.
> Don't backup or copy your private key. Should you lose your private key or it gets compromised you generate a new one and issue new certificates. And naturally also revoke your previous certificates.
I disagree. While yes, a new key would be ideal, generally while you are dealing with the existing problem you will want to reach for the old key. I am not talking about situations where you misplaced the key or the server got compromised. I am talking about a situation where your current server/VPS suddenly dies and you need to spin up a new one fast. IMO, in this case wasting time on issuing/re-issuing a cert is the wrong move. On top of this, I tend to generate the certs on my laptop (I trust its RNG and physical security more than I trust the server's). The key is already here. Now I can encrypt it with GPG with the full force of my 4096 bit key that only I can decrypt, and store it fairly securely this way. I believe this is good enough for personal and professional sites. Ideally, I'd also keep these only on an encrypted flash drive for even greater physical security.
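A sketch of that workflow with openssl. Filenames, the subject, and the passphrase are all placeholders; the commenter encrypts the key with GPG against a personal public key, but to keep this self-contained the sketch uses openssl's own AES-256 wrapping as a stand-in for that step:

```shell
# Generate the private key on the trusted laptop, not on the server
openssl genpkey -algorithm RSA -pkeyopt rsa_keygen_bits:2048 -out example.com.key

# Create a CSR to send to the CA (subject is a placeholder)
openssl req -new -key example.com.key -subj "/CN=example.com" -out example.com.csr

# Encrypt the key at rest before backing it up
# (stand-in for the commenter's GPG step)
openssl rsa -aes256 -in example.com.key -passout pass:change-me -out example.com.key.enc
```

After the CA returns the signed cert, you copy the cert and key to the server; the encrypted copy stays in your backups and never needs the server to exist.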
That's ridiculous. There are plenty of ways to lose a private key that don't involve or lead to compromise.
I generate and store my private keys in my secure CA environment and copy them to the server. If I ever need to redeploy them or generate a new CSR (SHA-2 anyone?), I can do it without ever logging into the server.
Good point. As pointed out elsewhere in this thread, that kind of attack can be prevented by HSTS preloading, but of course that is an approach that's not scalable in the long run.
What could be the problem using public WiFi while pushing out SSL? I assume one would use SSH to connect to the server to push and configure SSL on it.
Why? I don't trust it for security-critical data, and I don't need it for insecure data. The only time I ever see it being remotely useful is if I were to set up an ecommerce site, at which point you basically only need HTTPS to fend off insignificant adversaries sniffing payment data, and for CYA. For anything else, the benefits most certainly do not outweigh the pain in the ass described in the OP.
Also, if someone manages to "hijack your site... in a widespread fashion", it probably means they've rooted your server, at which point HTTPS does nothing useful anyway, because the attacker has your privkey.
Heh. So if you only see it as useful for ecommerce, would you mind posting all the passwords you use on non-ecommerce sites? Those of your users as well?
In seriousness, I outlined all the reasons why it is a good idea already. If you disagree, that is your prerogative of course, but you provide no arguments to support your point.
Hear, hear. I seem to be in the minority, but I loathe sites that require SSL without good reason. People overlook the massive added complexity at the client end as well, because it's hidden from the user most of the time - but I've lost count of the number of times I've been trying to get something done, usually in extremis and restricted to busybox or somesuch, and been unable to fetch a resource because curl/elinks/whatever wasn't built with SSL support.
By all means set up HTTPS on your little site, but for the love of god don't require it.
Although it's kind of minor, one way the NSA hacks people is by hijacking non-SSL'd connections and feeding them exploit kits. So, the more SSL there is, the harder it is and the longer it takes for them to do that.
The NSA is perfectly capable of hijacking SSL'd connections as well. They don't even need to do anything nefarious; my computer has (included by default) 4 root certificates controlled by the DoD.
9. Make sure to renew your certificate ON TIME. Someone needs to be responsible and this person needs to have it in their calendar. If you're not up to that, because it is in fact your church group and you're not sure you'll be there in a year, don't do this.
Also:
> 4. Use a strong cipher suite such as this one
Check out Mozilla's best practices; they'll give you configs for different levels of client support.
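For illustration, the nginx flavor of such a config looks roughly like this. Treat it as a sketch and pull the current values from Mozilla's generator rather than copying these; the exact protocol and cipher lists depend on which legacy clients you need to support:

```nginx
# In the spirit of Mozilla's "intermediate" profile (illustrative values)
ssl_protocols TLSv1.2;
ssl_prefer_server_ciphers on;
ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384;
```

If you must support very old clients you may need to re-add older protocol versions, which is exactly the trade-off Mozilla's different levels spell out.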
> 5. Use nginx, at least for front-end proxy. Your life will be easier.
Be careful: it's tricky to configure, and if you cut and paste your configuration from the Internet you can open yourself up to arbitrary code execution.
I was specifically thinking of the php matching issue, which I've seen a few too many times to be comfortable with. People shouldn't copy and paste configuration from the Internet, but they do, and I wish nginx wouldn't make it downright dangerous.
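For context, the classic footgun is a bare `location ~ \.php$` block: combined with PHP's default `cgi.fix_pathinfo=1`, a request like `/uploads/avatar.jpg/x.php` can get an uploaded image handed to PHP and executed. A sketch of the commonly recommended guard (the fastcgi socket path is a placeholder):

```nginx
location ~ \.php$ {
    # Without this line, /uploads/avatar.jpg/x.php may cause avatar.jpg
    # to be executed as PHP when cgi.fix_pathinfo=1 (the PHP default)
    try_files $uri =404;
    fastcgi_pass unix:/var/run/php-fpm.sock;  # placeholder socket path
    fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
    include fastcgi_params;
}
```

Setting `cgi.fix_pathinfo=0` on the PHP side is a belt-and-suspenders addition to the same fix.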
Point #3 always brings me back to MITM, and how there's almost no way for a technically-illiterate user to avoid getting tricked into using an HTTP-only site. Nobody ever notices sslstrip. And while many people might counter with 'I don't care about that use case, it's unlikely', they are basically assuming that nobody will ever MITM their connection, which implies that they don't need secure connections. I wonder how often people actually think about these contradictions.
That's assuming a lot. Here are the reasons HSTS will not protect people:
1. Your browser has to support it. IE still does not support it; it is 'expected' in IE 12. Also vulnerable are people with Mac OS older than 10.9, Chrome older than 4.0.211, and Opera older than 12. Most people I know (non-techies) keep their browsers for the life of their computing device. So basically that's a gigantic pool of users who do not have HSTS support.
2. When they do finally get support, websites have to enable it explicitly. Here[1] is a sample graph of how few sites actually enabled it at the end of last year (about 2 out of every 1000 of the top 1M sites, i.e. roughly 0.19%).
3. The 'max-age' is often not set very long, meaning there's an increased chance for a new attack to succeed.
Definitely not a perfect solution -- all your points are definitely gaps in Strict Transport Security.
However, there's still a lot of value in adding HSTS. As for #1, #2 and #3, HSTS is a standard that can and will be more broadly supported (and better implemented) over time, probably more quickly than HTTP/2 will be supported on most servers.
Personally, I'm most concerned about #4. This should be something the IETF should be working on (if they aren't already).
At the end of the day, if you've already mastered transport encryption, you may as well go forward with HSTS as well.
Edit:
4. Use a strong cipher suite such as this one: https://support.cloudflare.com/hc/en-us/articles/200933580-W...
5. Use nginx, at least for front-end proxy. Your life will be easier.
6. Check your setup against https://www.ssllabs.com/ssltest/analyze.html. Fix issues it highlights.
7. Don't lose your private key. Don't have it live only on the live server.
8. Use HSTS (http://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security), but beware that once you have it set, you cannot easily go back to plain HTTP until the max-age expires. For almost everyone this should not be a problem.
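In nginx, HSTS is a single header on the HTTPS server block. The max-age below (one year, in seconds) is a common choice; start with a smaller value if you're unsure, since you can't easily walk it back once browsers have cached it:

```nginx
# Inside the server { listen 443 ssl; ... } block
add_header Strict-Transport-Security "max-age=31536000; includeSubDomains";
```

Only send this over HTTPS, and only add `includeSubDomains` if every subdomain you have actually serves HTTPS.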