
This is something that really needs to be said more in amateur circles (i.e. self-hosters and homelabbers). For these scenarios I think it's even worse, though, because it's a case of insecurity through transparency. People don't realize that all ACME/Let's Encrypt certificates are published in transparency logs that get scanned constantly, giving attackers a shiny target. I saw a reddit post recently (which I won't link for the victims' sakes) where someone had searched for Heimdall (a popular dashboard) in a web-security-oriented search engine and found a bunch of insecure publicly facing instances, some of which contained credentials.
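For anyone who doubts how easy this is: the CT logs are public and queryable. As a sketch (crt.sh is one public log search frontend; the domain is a placeholder), this one-liner lists every name ever logged for a domain:

    # %25 is a URL-encoded "%" wildcard; prints every logged name, deduplicated
    curl -s 'https://crt.sh/?q=%25.example.com&output=json' \
      | jq -r '.[].name_value' | sort -u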

Fixing this would be as simple as using wildcard certs, wildcard DNS, and unique subdomains. Configure your web server to 404 any request without a valid subdomain (especially www.domain.tld or domain.tld itself) and you've avoided nearly every web-based scan, because the attacker doesn't know the host name. This is pure obscurity, but it definitely works.
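With nginx, for example, the whole trick is roughly this (a minimal sketch; cert paths and the subdomain are placeholders):

    # Catch-all: anything that doesn't match a known server_name gets a 404.
    # The wildcard cert means even this default server leaks no real name.
    server {
        listen 443 ssl default_server;
        server_name _;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        return 404;
    }

    # Only the exact (unguessable) subdomain is actually served.
    server {
        listen 443 ssl;
        server_name dashboard-x7k2.example.com;
        ssl_certificate     /etc/letsencrypt/live/example.com/fullchain.pem;
        ssl_certificate_key /etc/letsencrypt/live/example.com/privkey.pem;
        location / { proxy_pass http://127.0.0.1:8080; }
    }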

Yes, host name can get leaked through SNI, but if someone is monitoring your traffic, you probably need something more sophisticated anyways.



If you set up a wildcard cert and configure your server to reject invalid subdomains, you've already done more work than it takes to actually secure your site. If you've learned to do all that, you'd just secure your site. There is no time benefit to the obscurity approach.


Why do people insist that "security" is binary, i.e. that something is either secure or not? A residential building has way less security than a military base, yet the building can be considered secure while the base may not be.

The article points to one of the core equations every business should embrace: `cost_of_risk = unit_probability * unit_impact`.

If some measure can improve security metrics (time to discover, time to break, skill/resources needed to break, etc.), then it can be considered a security measure. An SSH server running on a non-default port is not much more secure than an otherwise identical server, but the probability of a random attack against it is lower.
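For the SSH example, the whole change is a couple of lines in sshd_config (the port number here is arbitrary; key-only auth is doing the real security work):

    # /etc/ssh/sshd_config
    Port 49222                  # non-default port: fewer drive-by scans
    PasswordAuthentication no   # key-only auth: the actual security measure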

> you've already done more work than it takes to actually secure your site

Finding and plugging RCE holes in every dependency you have deployed is way harder than making the host less discoverable. If running security-patched software and having non-default, hard-to-guess passwords are security measures because they lower the chance of exploitation, why can't non-default, hard-to-discover hostnames/ports be security measures too, when they likewise reduce the likelihood of exploitation? Relying solely on `apt upgrade` is about as insecure as relying solely on non-default ports.


Security is not binary. Setting up a firewall and using a 192-bit basic auth password for all your web properties is, however, going to stop maybe 99.9% of the attacks that wildcard cert + subdomains would stop, and takes like, 2 hours. If you can do both, sure, go for it. But there is a certain truth to applying the traditional approach first.


What binary conclusion do you see in the GP?

There is only a cost-benefit analysis there.


Weird, I do not see a cost-benefit analysis in the OP, or maybe I am missing it. The OP discards the obscured-hostname measure, claiming it takes "more work than it takes to actually secure your site".

To me this reads as: there is some inherently secure approach, and then this obscurity approach that only slightly increases security. I cannot agree with such an assessment.


How do you secure your site against a vulnerability which an attacker knows about and you still don't? And for which a fix is not yet available?

No layer of defense is useless. Hiding the host name is a pretty substantial layer.

Running a private server only inside a VPN is, of course, even nicer, when you can afford it.


If you're going to talk about vulnerabilities that you don't know about yet, then you should be looking at how you're running old kernel versions and unpatched software. Hiding your hostname doesn't protect you against anything other than insecure application code. Old httpd or openssl will still be exposed.

There are plenty of ways that your secret hostname can leak. It's just a weak, low-entropy, guessable password that can't really be rotated. Moreover, if you're going to choose one thing to do (or time-box your security efforts), it's the highest-complexity option I can think of for almost no real return. I mean, if you're so concerned about leaking the hostname from your HTTPS cert because your site is that insecure, why are you even using HTTPS? What are you even protecting?

Is that the best way to spend an hour securing your site? Even just slapping six lines of hard-coded HTTP auth credentials onto your nginx virtual host will do more than that.
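For reference, the six-line version (a sketch; the htpasswd file would be created beforehand with `htpasswd -c /etc/nginx/.htpasswd someuser` from apache2-utils):

    location / {
        # Any request without valid credentials gets a 401.
        auth_basic           "Restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;
        proxy_pass           http://127.0.0.1:8080;
    }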


Require TLS Client Certificates at the load balancer level (or even just HTTP basic auth).
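In nginx terms that's roughly (a sketch; CA and cert paths are placeholders):

    server {
        listen 443 ssl;
        ssl_certificate        /etc/nginx/server.pem;
        ssl_certificate_key    /etc/nginx/server.key;
        # Only clients presenting a cert signed by this private CA get in;
        # everyone else is rejected during the TLS handshake.
        ssl_client_certificate /etc/nginx/client-ca.pem;
        ssl_verify_client      on;
    }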


In this particular case, the community I'm talking about is primarily (exclusively) using their HTTP server as a reverse proxy, pointing to various web-based backends. They have to set up subdomains and certs anyways, so doing it with wildcards is actually less work.

Yes, they can and should also set up things like fail2ban or CrowdSec, add geo-based IP blocking, etc., but many don't, because it's not fundamentally required to make their services work. Harder still, or even impossible, is making sure all of the backend services themselves are secure.
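For what it's worth, the fail2ban half of that is only a few lines of jail.local (a sketch with arbitrary thresholds):

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = yes
    maxretry = 5      # ban after 5 failed attempts
    bantime  = 3600   # seconds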


Is a non-public hostname “obscurity” or is it a form of password/credential? Or is a credential actually “obscurity”, only so vast an attacker can’t possibly have enough electricity to shine a light on even a remotely relevant part of it.

It’s all about risk/probabilities in my view. How likely is it to find the hostname? How likely is it to find the password?

The only real difference is that there are best practices ensuring passwords are never logged in clear text anywhere, whereas that's not the case for hostnames.
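Though nothing stops you from giving the hostname password-grade entropy. A sketch in Python (example.com is a placeholder):

    import secrets

    # 20 hex characters = 80 bits of entropy -- far more than any
    # subdomain dictionary or brute-force scan will ever cover.
    label = secrets.token_hex(10)
    print(f"https://{label}.example.com/")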


> Configure your web server to 404 any request without a valid subdomain (esp. www.domain.tld or domain.tld) and you've avoided nearly every web-based scan because the attacker doesn't know the host name.

But you haven't actually avoided it, you kicked the can down the road at best. Sometimes that's useful but it's not a sound general strategy.


> you kicked the can down the road at best

How so?


Because if a name is not cryptographically unguessable, then it's guessable, and subject to dictionary, brute-force, and other forms of attack.


If someone is doing that, you're in the realm of targeted attacks instead of scans, which is outside the scope of my original comment. It's similar to someone monitoring your traffic; as I already said, if that's the case you need more anyways.


Security is not a choose-your-own buffet where you get to think only about scans and ignore other common attack vectors. Botnets are common enough that targeted attacks like the ones I described are just as common as scans, so you always need more anyway.

Configuring your host to return 404 on invalid subdomains is just not a general solution; at best it buys you some time until attackers find the subdomain, i.e. kicking the can down the road, like I originally said.


So you're saying botnets will dictionary attack subdomains on every IP? Any sources?


No, I'm saying that's one of many behaviours. They mine domains and URLs scraped from email addresses, email headers and bodies, online content, and more. Your site is not "secure" when that security can be circumvented by someone pasting a URL into an email.


Reverse DNS lookup.
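i.e. (TEST-NET address as a placeholder):

    # Returns the PTR name for the address, if one is published.
    dig -x 203.0.113.10 +short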


You mean by PTR records? I'm pretty sure most people don't have those set up.


You can actually buy "Passive DNS" records. Big DNS providers collect all the answers they learn while serving (deliberately without recording who asked), and the aggregated answers are available for purchase.

So if Sarah in accounts once went to secret-webserver.internal.example.com from her laptop at home, having forgotten to turn on the VPN, her upstream DNS provider will tell any attacker with some $$$ that secret-webserver.internal.example.com existed, when it existed, what the A or AAAA records said, and so on.

Targeted attacks will know about secret-webserver.internal.example.com even though only *.internal.example.com is listed in the CT logs.


Which is security by obscurity?


The beauty of non-standard ports is much less bloated logs.


Do people using high ports actually check their logs enough for it to matter?


I do. I see no SSH attempts on the active high port. None. It may only be a matter of time, of course. I continue to see French metric tonnes of attempts to subvert MySQL and the web.

We've been on the shifted port for more than 5 years.


I can confirm it makes a huge difference for SIP as well. Toll fraud attempts are not making the logs completely unreadable anymore when using a non-standard port.


> People don't realize that all ACME/Let's Encrypt certificates are published in transparency logs that get scanned constantly, giving attackers a shiny target.

FWIW, this is true of all public certificates right now, regardless of issuance method (ACME, manual, etc.) or CA (ZeroSSL, LE, DigiCert, etc.). I don't point that out to be pedantic, just to emphasize that that information is going to be out there when someone grabs a cert, regardless of how they do it. :)

> I saw a reddit post recently (which I won't link for the victims' sakes) where someone had searched for Heimdall (a popular dashboard) in a web-security-oriented search engine and found a bunch of insecure publicly facing instances, some of which contained credentials.

There were some instances recently of the same thing happening with WordPress installations: the default WordPress installer would go get itself a Let's Encrypt certificate before the user had completed setup and set an admin password on the install. No vulnerability necessary, just hop onto it and set the admin password. I suspect this is going to be a frequently discovered vulnerability as more things bundle "get me an LE/ZeroSSL/etc. cert" into their software/OS installers.


I have an idea I want to run by you, since you seem to understand the importance of this better than some others. (At least as far as my undereducated opinion on network security goes; I'm a hobbyist, self-taught in most things.)

So let's say you are going to run a home server set up as a read-only server to the outside world, but write-capable through a separate port connected only to a laptop that has no (or very restricted) internet access, and which also has the nicety of being so obsolete it doesn't have IME or any other Intel idiocy backdoors attached to it.

Would you still put a hardware firewall between each of these connections? And if so, would you also run it through a VPN on the read side of the server?

I personally don't trust VPNs, since I see them as middlemen you pay to pretend they don't keep logs of anything. Of course there is always the whole argument of 'not having anything to hide, so no worries', but I see it as false, since the whole point of using a VPN is to hide your bits from attackers and snoops, even if it's legitimate/legal data.

So, what would you do to avoid using a VPN, provided you can't own the VPN instance somehow somewhere due to being a bit of a cheapskate? Would some basic OpenWRT firewalled routers be enough for your purposes (and thus mine possibly) or would you go with some more complex setup where a person has to trust yet another company to not be trying to hijack data somehow?

Server intended:

Opteron build, DDR3 tech. 6 cores, hyperthreading (if any) disabled. All forms of speculation turned off. All that jazz.

One NIC port is to be set up for downloaded data only, no upload allowed.

The other NIC port is the access point for SSH via the old laptop, set up for security purposes.

Everything running on Linux, as much as possible. No Windows allowed.


"Write/Read-only" doesn't really make sense to me in this context. What services are you running? Are you just trying to lock SSH behind a single laptop? That seems like overkill to me.

If your laptop and your server are on the same network, which presumably they'd have to be if the laptop has no internet access, you shouldn't need any kind of firewall or VPN.


I would be hosting an FTP service for my files, for my own use in other locations. So that would require some 'read' access from the network, which means that network connection would have to have internet access somehow, thus the firewall and possibly a VPN. These are not incriminating files in any way, mind you. I'm just wary about things like packet injection and other sneaky practices that miscreants use.

I would also be hosting a webpage or two, for blog and possibly web-shop purposes. The blog would again be "read-only", but the web-shop would require some semblance of 'write' permissions available for users. So the blog would share the 'read-only' connection ideally. The Web-shop would share the write capable connection instead.

Finally, the laptop being able to SSH into the server is solely for security purposes, due to not wanting to use any form of IPMI over security concerns with it. I would instead be using a dedicated network card just for that purpose. This laptop would not connect to the internet through anything, even the server. No shared connections between the NICs at all.

And I realize it may seem overkill to some people, but I don't care if it is overkill. It's when people get sloppy and cut corners that backdoors and security vulnerabilities arise. IMHO.

If I had a million dollars, I would have the most secure server in the world, lol.

The firewalls/VPN's are essentially there to act as a stop-gap measure just in case anyone decides to poke their nose in where it doesn't belong. Partially to catch them in the act, partially to stop them in the act. Ideally.

Here is a simple text explanation of the setup I have in mind.

- NIC 1: Blog/FTP, read-only. No copying files to the FTP, just copying files from it. You can only read the blog, not comment or do anything like logging in. The only person who ever needs to 'log in' is me, from my laptop.

- NIC 2: Web-shop and maaaybe a game server for testing purposes. (Considering making a simple game that will need some net code tested in the future.) This will have full read and write capability, since it will need to. This is the network that will require all the extra firewalls and VPN connections, if I use them at all. The other one might be able to get away without them, but this one will need them in my mind. Logging in is definitely a thing on this part of the server.

This server will have (and maybe I should have mentioned this before) a virtualized instance for each service. This way I can sandbox each, and kill each sandbox if ever needed due to whatever malicious actions some dingus decided to do.

The laptop is essentially going to be my monitor, keyboard, and mouse; so I don't need to run multiple of each for yet another machine. (I have 2 desktops, and another laptop. I need to simplify things down a bit, even if this seems more complex, lol.)

All of this is getting its own intranet essentially, completely separated from my main internet connection. It will also be getting its own business connection instead with a static IP address for any sort of connections to the outside world. The only way my two networks will ever talk to each other, is either through the internet itself, or via a firewalled connection between the intranet I have setup, and my other computers. In this way, it will act like a local NAS for my other computers, but also for when I am out and about, and need a certain file suddenly.

I should also mention I tend to live with roommates, so I like having an extra layer of security here and there when doing so, since you never know when your roommate is going to try to do something sneaky. Like my current one who decided to give our password to the neighbors downstairs... and across the wall... Why? Because they lied to him and said they pay for the internet here too.(They don't.) Or so he claims. Quite frankly, I have found out rather recently because of this and some other things that he has a habitual need to lie and deflect. Fun stuff.

Again, this may all seem like overkill to some people, but I have long learned from experience that what one person considers overkill, another considers underkill. I would much rather do things to a point where people go "jeezus" than be the one going "ah damn".

With that note, there will be absolutely zero Windows operating systems on this machine, and any machine that directly connects to it, like my laptop, will also be running non-Windows environments.

The machines that do need to run Windows, due to things like my AVerMedia capture card not supporting Linux basically at all... they are going to be locked behind the firewalls and allowed to connect only to the basic internet connection I already have set up. Everything else is Linux. Everything. Even my 'other' laptop that currently has Windows on it only has it because it came with it. That changes, very soon.

And besides, you wanna see real overkill?

I'll be setting up my own version of Kali essentially on the first laptop for SSH and stuff into my server, so I can also do security audits. But it's either going to be Arch based, or Gentoo based. Why?

Because I don't trust the folk who made Kali, formerly known as BackTrack. Why?

Because they still use torrents, and not magnet links, to start. And while even Arch has a way to be used on Windows now, I can at least install it via Bash on my own without needing some pre-made packaged installation. Hence why I might move on to Gentoo.

And I realize that no OS is perfect, and security flaws exist everywhere.

That's why I am going overkill. Also, this is how I learn things. By doing them. And I basically want to learn how to make some of the most redundantly secure servers, so that people who come to me for my services get something they can trust isn't going to be easily hacked by some script kiddie.


What do you mean by not trusting the VPN with logs, etc.? It's your VPN, you host it; why would you care whether you log yourself or not?


I guess there is a misunderstanding here.

Yes, if I own the VPN in question and no one else has any access to it, or maintains it; then there is no issue.

But I was talking about paid VPN services, the kind you see advertised on YouTube etc. I don't trust those one bit.


On a technical note: if I get a 404 from a domain, I know there is a web server there. Is this really helping all that much?


Yes, because you would still have to know a valid hostname to actually get anywhere. This of course assumes no bugs/vulnerabilities in the web server.
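You can see this from the attacker's side with curl (IP and name are placeholders); without the right name, all you ever get back is the catch-all 404:

    # --resolve forces the guessed name onto the target IP, so both the SNI
    # and Host header carry the guess. A wrong guess hits the 404 vhost.
    curl -k --resolve guess.example.com:443:203.0.113.10 \
         https://guess.example.com/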



