I wouldn't say fail2ban is bad advice per se. The kinds of companies running deployments that audit iptables are going to be very different from the kinds of companies that benefit from dynamic rules created by services like fail2ban.
There is a gulf of difference between hardening a Linux server for an independent web shop and running Linux at Google. And this article very much feels like it's aimed at sysadmins of the former rather than SREs of the latter (the fact that they're not even running configuration management like Puppet is a dead giveaway).
> I wouldn't say fail2ban is bad advice per se. The kind of companies running deployments that do auditing on iptables are going to be very different to the types of companies that benefit from dynamic rules created by services like fail2ban. […] There is a gulf of difference between hardening a Linux server for an independent web shop vs running Linux at Google.
I agree that there is a difference between running Linux at a FAANG and running Linux for an independent web-shop.
However, my advice was targeted at hobbyists like me who like to run their own web server (independently from my employer). And I think it is appropriate for an independent web shop as well.
Auditing is not reserved for big corps; I personally like to log diffs between "nft list ruleset" and "cat /etc/nftables.conf" on my personal servers. If you run fail2ban this becomes impossible.
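For what it's worth, the check itself is a one-liner that can run from cron; something like this (the alert address is a placeholder):

    # diff the live ruleset against the saved config; mail on any drift
    nft list ruleset | diff -u /etc/nftables.conf - \
      || mail -s "nftables drift on $(hostname)" admin@example.com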
Also, IMHO, fail2ban doesn't really solve the problem: a botnet attack could still brute-force your SSH. All it does is prevent one host from trying too often, and it can also lock you out during an emergency. spiped is, IMHO, easier to set up and cheaper to maintain, and it provides a higher degree of protection. (As I explained before, it is a 256-bit combination port knocker.)
Yes everyone should use key-based authentication. What fail2ban and other firewall styled security measures do is to move the point of contact on your network.
1 - You want to limit the number of times that sshd initializes the connection handshake; this initialization period is when/where 0-day exploits can get through.
2 - With active auditing you can add the banned IPs to your edge device. Odds are that a legitimate IP won't be trying to SSH into your systems, so block everything from them. I go one step further and share that banned IP list across all my edge devices.
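In case it's useful, a rough sketch of that sharing step (the jail name, the "blocklist" set, and the "edge-fw" host are placeholders, and fail2ban-client's output format can vary by version):

    # pull current bans from a fail2ban jail and push them to an edge firewall
    fail2ban-client status sshd \
      | sed -n 's/.*Banned IP list:[[:space:]]*//p' \
      | tr ' ' '\n' \
      | ssh edge-fw 'while read ip; do
          [ -n "$ip" ] && nft add element inet filter blocklist "{ $ip }"
        done'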
Since we're talking about two different points in time that are nearly a decade apart, let's be clear about the context here. spiped didn't exist when that article was written (or if it did, it certainly wasn't mature). Back then fail2ban was a pretty good recommendation for the domain in question. Which was my point.
If we shift the focus to 2021 then I'm more inclined to agree with you that it's less important. However, I still disagree that it is "bad" advice and think you're overstating the reasons why it should be considered "bad advice". I'm not going to discount the problems you raise, but I believe them to be less significant than the benefits fail2ban brings. To be clear, I also agree that the benefits it brings are small; however, the risks are smaller.
The problem with the discussion here is that you're focusing just on SSH. There are clearly a number of ways one can protect SSH without fail2ban. Much better ways, in fact. But fail2ban isn't an SSH firewall, it's a log watcher. If you have a machine exposed on the internet with no other firewalls between the internet and your server (eg no AWS security groups, router firewalls, etc), or even if you do have other firewalls in place but run several services off the same host, fail2ban can protect you against some of the more casual directed attacks. eg you can set it up as a WAF-like layer and have all services blocked when your Apache/nginx logs show suspicious activity.
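For example, a minimal jail.local along these lines turns fail2ban into a crude WAF for nginx (the filter names ship with fail2ban, but names and defaults vary between versions, so treat this as a sketch):

    [nginx-http-auth]
    enabled  = true
    port     = http,https
    logpath  = /var/log/nginx/error.log

    [nginx-botsearch]
    enabled  = true
    port     = http,https
    logpath  = /var/log/nginx/access.log
    maxretry = 2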
Also, your point about botnets is true, but the point of security is to slow down attacks in the hope of preventing them. There isn't a silver bullet, only layered hurdles. fail2ban does slow down the smaller-scale botnet attacks.
Let's also remember that there are a fair number of bots out there that aren't part of botnets but are just scripts run by opportunists. fail2ban can cut down on that annoyance as well.
Going back to your point about auditing: most independent hosts wouldn't even think to run auditing. The fact that you do only demonstrates your enterprise background. And I'm not even convinced your example adds any value, since if nftables.conf is on the same host as the live ruleset, it's not unreasonable to assume any successful penetration that rewrote your firewall rules might rewrite your nftables.conf too. In fact, the first thing an attacker would do is check what auditing and configuration-management agents are running, and then kill them. I also think you're overstating the difficulty of working around dynamic deny lists -- it wouldn't take a complex script to filter out the rules added by fail2ban (see the sketch below), and frankly the rules you'd be more interested in auditing are the allow rules anyway.
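Something along these lines would do, assuming fail2ban's usual "f2b-" naming convention for the structures it creates:

    # compare only the table you manage, ignoring anything fail2ban added
    nft list table inet filter | diff -u /etc/nftables.conf -

    # or, with iptables: drop fail2ban's chains before diffing
    iptables-save | grep -v 'f2b-' | diff -u /etc/iptables/rules.v4 -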
That all said, I don't disagree with your point about fail2ban (in 2021) being cargo-cult. I do disagree with your point that it is bad and with your reasons for why it is bad. But I do agree that its benefits are largely marginal in most cases.
> Since we're talking about two different points in time that are nearly a decade apart, let's be clear about the context here. spiped didn't exist when that article was written (or if it did, it certainly wasn't mature). Back then fail2ban was a pretty good recommendation for the domain in question. Which was my point.
That is simply not true. spiped 1.0 was released in 2011, two years before the article was written. And it was used in production on Tarsnap's infrastructure, since it's by the same author. It had 4 minor releases by the time the article was written. [1] And it was in Debian by the end of 2013. [2]
> > spiped didn't exist when that article was written (or if it did, it certainly wasn't mature).
> it was used in production on tarsnap's infrastructure since it is the same author. It had [3 minor] releases+ by the time the article was written. And [the initial release] was in debian [unstable] by the end of the year 2013 [after the article was written].
You're not exactly making a strong case there but fine, I'll concede it did exist and had some undefined degree of maturity that we're never going to prove nor disprove. :)
+ it was 3 minor releases not 4. The 4th was released in April, the article in March. But in fairness to yourself there were also 2 patch releases you didn't count.
fail2ban basically just cleans log files for you - and gives someone else control over your iptables. And in the days of botnet attacks, it doesn’t do as much as someone might think.
Moving the ssh port (even without port knocking) does a lot more to cut out log messages.
Anyway, I'm not here to advocate that everyone should install fail2ban tomorrow. My point was just that fail2ban wasn't bad advice in 2013 and isn't really bad advice even now. Sure, there are better tools out there for hardening services but at least fail2ban doesn't break those other tools. So there's nothing stopping you having a layered approach if you want.
And that's the crux of it for me. "Bad advice" would be something that hinders security whereas fail2ban does add to it. What is in contention is the significance it adds and this is where people have gotten hung up on SSH. For example fail2ban can work really effectively when you have multiple services running off the same host (eg HTTP(S), SMTP, and SSH).
The problem is most people just look at the default config and say "there's better tools for SSH" -- which is true but it also overlooks a lot of what fail2ban offers.
But as I said, I'm not an advocate for fail2ban. I just think some of the comments here against it are overstated. If someone wants to run fail2ban, it won't harm their security. It might even enhance it, depending on how they've set it up.
Somehow, it never occurred to me to have both IPv4 and IPv6 for the regular services and bind the SSH daemon to an IPv6 address only. Thanks for the idea!
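For anyone else wanting to try this, I believe the sshd_config side is just two lines (the address is a placeholder):

    # /etc/ssh/sshd_config: IPv6-only sshd
    AddressFamily inet6
    ListenAddress 2001:db8::1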
> The simplicity of the code — about 6000 lines of C code in total, of which under 2000 are specific to spiped (the rest is library code originating from kivaloo and Tarsnap) — makes it unlikely that spiped has any security vulnerabilities.
spiped might be great, but I found the above on their website. The fact that it has 6k lines of code does not mean that it lacks security vulnerabilities... at all. It does not even make that particularly unlikely. You still have to audit it. Fewer LOC just means the audit will consume less time, but it is no guarantee that security vulnerabilities are less likely.
There are also PAM modules that can dynamically block repeated failed ssh login attempts. pam_shield, for example, defaults to blocking by null-routing the IP, but you can drop in whatever action you want. There are other similar PAM modules as well. I like a PAM-based approach since it isn't trawling log files, but directly hooking into the auth.
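The PAM wiring itself is roughly one line, though module names, options, and config paths vary by distro packaging, so take this as a sketch:

    # /etc/pam.d/sshd (pam_shield's own policy typically lives in /etc/security/shield.conf)
    auth optional pam_shield.so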
Wireguard is also a good alternative. But you need to connect to a VPN every time you want to SSH.
Here are my personal reasons why I use spiped:
* spiped is transparent thanks to ProxyCommand[1] (see the sketch after this list). This allows me to do "ssh host" and, thanks to my ssh_config, it just connects.
* spiped can be run in a very hardened way.[2] It just needs to listen() on a socket, connect to another one, and read a key file. WireGuard needs complex network access: it has to create interfaces and open raw sockets.
* spiped is much simpler to manage: just run a daemon. With WireGuard there are two possibilities:
** Every host runs WireGuard; you might need to connect to multiple hosts at a time, you need to manage internal IP conflicts, etc...
** One central WireGuard server; you have a single point of failure, and can't ssh anywhere if this host is down.
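Here is roughly what that transparency looks like in practice -- the port and key path are examples (spiped's own docs show this same pattern):

    # server: decrypt traffic arriving on 8022 and hand it to the local sshd
    spiped -d -s '[0.0.0.0]:8022' -t '[127.0.0.1]:22' -k /etc/spiped/ssh.key

    # client ~/.ssh/config: "ssh host" transparently goes through spipe
    Host host
        ProxyCommand spipe -t %h:8022 -k ~/.ssh/spiped.key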
Don't get me wrong, I love WireGuard and use it all the time as a VPN, but I don't think it's appropriate as a layer of protection in front of my SSH server.
Both Wireguard and Spiped are written by very smart people.
On my two VPSes I have one of them run a WireGuard server, and both of the VPSes have their sshd bound to the WireGuard interface only.
At home I have a computer running a WireGuard server. Two of the other computers at home are clients of both the WireGuard server at home and the WireGuard server on the VPS. I can connect directly to the WireGuard VPN at home from anywhere and ssh into the other machines on their WireGuard interfaces of that VPN. But if the WireGuard server at home is down, I can connect via the WireGuard VPN that runs on my VPS and still ssh into the other machines at home that way.
I also have WireGuard running on a physical server in a data center that I manage. On that server I use WireGuard only because it makes connections much more stable than connecting via ssh directly.
My three different WireGuard servers all use different private IP subnet ranges, so there is no conflict. I use WireGuard for communication between my own hosts but not for tunneling other traffic.
WireGuard is great and the perfect solution for a small number of machines at least. And I am sure that if you have a lot of machines you could come up with some suitable setup using WireGuard even then.
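For anyone wanting to replicate the VPS part, the shape of it is simple; the addresses and keys below are placeholders:

    # /etc/wireguard/wg0.conf on the VPS
    [Interface]
    Address = 10.8.0.1/24
    ListenPort = 51820
    PrivateKey = <server-private-key>

    [Peer]
    PublicKey = <client-public-key>
    AllowedIPs = 10.8.0.2/32

    # /etc/ssh/sshd_config: bind sshd to the WireGuard address only
    ListenAddress 10.8.0.1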
Not OP, but I was wondering why use spiped instead of simply sshd with passwords disabled. If anyone else was curious, this is what I found on the spiped website:
"You can also use spiped to protect SSH servers from attackers: Since data is authenticated before being forwarded to the target, this can allow you to SSH to a host while protecting you in the event that someone finds an exploitable bug in the SSH daemon -- this serves the same purpose as port knocking or a firewall which restricts source IP addresses which can connect to SSH."
Since that explanation is somewhat terse, and I don't know anything about security, let me ask a few questions.
Am I right that the failure mode spiped protects against is someone finding an exploit that allows them to bypass ssh logins that are set to (for example) public key authentication? So if one is not worried about this, there is no point?
Further, am I correct that what spiped does in this scenario is add a second layer of encryption, so that one must first bypass spiped in order to attempt an exploit against the ssh daemon? Then, in effect, spiped acts as a small, isolated, and auditable "condom" that can be used with any public-facing service?
What I read from that paragraph is: "spiped will block unknown computers from accessing your server's SSH (like a firewall). This provides an extra layer of security (equivalent to such a firewall) in case somebody finds a flaw in ssh."
I didn't dig into it enough to be sure, but it looks to me like spiped uses the same kind of encryption as ssh. So it won't protect you against crypto attacks, just restrict which computers those may come from.
For any other kind of service, spiped will tunnel it under a layer of encryption. Quite like you can do with bare ssh, but spiped is built for it and thus is more usable for that task.
If I'm reading correctly, spiped uses Diffie–Hellman for public key cryptography, while contemporary best practices suggest using elliptic curve crypto with ssh (e.g. see [0]). So, for the truly paranoid, it might provide some protection against crypto attacks too?
Note that the NSA can break 1024-bit DH [1], but spiped uses 2048-bit.
I am also not a security expert. But you seem to be correct in both cases. The spiped website also has an example of encrypting SMTP traffic between two servers in an spiped condom.
I wrote a small tool that's similar to fail2ban. I had it put the IPs into an ipset and kept my firewall rules static -- just one rule to match the ipset.
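For anyone wanting the same shape of setup without writing a tool, the static side is just this (the set name and timeout are examples):

    # one static rule matching a dynamic set; the rules themselves never change
    ipset create blocklist hash:ip timeout 86400
    iptables -I INPUT -m set --match-set blocklist src -j DROP

    # the watcher then only ever does this
    ipset add blocklist 203.0.113.5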
Every time I read something about Linux server hardening, I get more confused. We're lacking a clear, simple, modern guide on how to do things. I know every setup is different, but there should at least be a consensus for a fresh installation.
Also, do I really HAVE to change something for it to be secure? Isn't an Ubuntu server secure out of the box? With a strong, unique root password, of course.
The problem is servers can be provisioned a number of ways:
* manually (like this guide)
* via CI/CD using tools like Packer
* Cloned (eg CloneZilla, or cloud snapshot)
* via configuration management (eg Puppet, Chef, Ansible, etc)
* via other initialisation methods such as CloudInit
Aside from the manual option, there’s no wrong way to do any of these. And some of these approaches complement some of the other approaches too. Many of these approaches will have a multitude of different solutions available that differ significantly in setup.
A lot of the time it boils down to preferences as much as it does best practices.
As for why servers aren’t locked down more from the outset: some distros are. And there are images of popular distros that have been pre-hardened for you too. Ubuntu isn’t the best for secure defaults, but its target audience is more diverse than RHEL’s (Red Hat Enterprise Linux). And as I’m sure you’re aware, security is often a trade-off with convenience. So Ubuntu takes the approach of being slightly more convenient for the average user at the cost of being less secure by default.
It's hard to make a guide because the main action to get security is removing or not adding things. And since you need stuff installed on your system, the security advice becomes specific to what you have installed and what your goals are.
Even your question if it's already secure by default is meaningless if you don't say what you are using the computer for and what kinds of threats you are protecting against.
I think the people working at Red Hat are more competent at moving Linux security forward than Ubuntu is. Ubuntu hardly innovates at all. Its target market seems to be desktop users (or server admins that are only familiar with the Desktop version). Personally I wouldn't put Ubuntu (or any other distribution) on a server without an elaborate playbook to tailor it to my needs (and from my experience that playbook is always more complex on Ubuntu). This is where Ubuntu fails for me: it makes some weird assumptions as to what I want in terms of security (which are absent in Debian). YMMV.
Although I think a distribution's goal should be accessibility and configurability -- and in that regard none of them prioritize security features as much as I'd like to see (but knowing myself, I would probably complain the second these features become too opinionated -- which they most certainly would -- which is why I think Debian does the right thing by not making opinionated assumptions).
Ubuntu's standard install is more bloated than Debian's, interim releases are much buggier, and Ubuntu LTS is less stable than Debian stable. Ubuntu's root certificate store is constantly outdated (though the same issue might also exist on Debian). Their apparmor configuration lags behind, ... whatever is good they usually inherit from Debian.
All distributions could do more to lock down processes with seccomp filters in systemd. It would be interesting to see what lynis⁰ discovers when comparing a fresh server install of Ubuntu against others. In over 20 years I have seen some real shit-shows in production with all distributions except Debian (again, YMMV).
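On the seccomp/systemd point, there is at least a built-in way to see how exposed each unit currently is:

    # prints a per-unit sandboxing/exposure score; most distro defaults score poorly
    systemd-analyze security sshd.service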
Jason Donenfeld, the creator of Wireguard said about Ubuntu on the latest¹ SCW podcast:
> Ubuntu is always, a horrible distribution to work with, ...
> Well, they [Ubuntu] sort of inherit from Debian, but they're like not super tuned in to what's going on and like not really on top of things. And so it was just always, it's still a pain to like make sure Ubuntu is working well. but I don't know, it's not too much interesting to say about the distro story, just open source politics as usual.
While somewhat anecdotal, I trust that Jason knows what he is talking about, having been on the Linux kernel security team for ages and being familiar with the quirks of various downstream vendors. His development cycle for WG is: implement -> decompile -> formal-verification -> rinse/repeat :-/
All of Linux security is a shit show. This is why grsecurity is charging money for its service.
>Its target market seems to be desktop users (or server admins that are only familiar with the Desktop version)
Uhh, what? Isn't its largest target cloud/server deployments?
> Ubuntu's root certificate store is constantly outdated
Uhh, for me ca-certificates updates what, twice a year? Certainly it's a lot easier for me to keep it updated on Ubuntu than on RHEL/CentOS.
>Their apparmor configuration lags behind, ... whatever is good they usually inherit from Debian.
AppArmor and SELinux are objective failures for the most part. The entire point of snaps/Flatpaks is to hide away the nonsense configuration in favor of an actual permission model. I would say snaps are actually enabling AppArmor to be used and enforced, unlike the generic AppArmor profiles people generate.
>Jason Donenfeld, the creator of Wireguard said about Ubuntu on the latest¹ SCW podcast:
What specific aspects is he referring to here? WireGuard has been baked into the kernel. I can understand packaging updates being a mess, and updating universe/LTS, but that is problematic for every Linux OS out there.
This is precisely why snaps were introduced. You now have an AppArmor/seccomp-enforced permission model and an easy way for developers to push directly to multiple Ubuntu versions without having to worry about OS compatibility.
The premise of my reply was security, not market share. Just because something is popular does not imply a good security posture; in fact, most popular things are dumpster fires from an infosec perspective.
What I'm saying is: familiarity with Ubuntu desktop translates easily into "let's install this on a server".
All of AppSec on Linux is hard. SELinux/AppArmor/firejail/systemd-hardening all cost real effort.
If you think snap/Flatpak are better, go for it -- for me they are a major reason to stay away from Ubuntu in production. But I'm not the boss of you.
And yet there is utility in using Ubuntu because it's a shared platform that many tools are developed on. It is mostly Debian, but it is not exactly Debian. Since the latest Ubuntu LTS is the de facto Linux default, its shortcomings fall away for swiss-army development.
De facto Linux for whom? I've sold on-prem apps to enterprises and startups for a while, and the majority weren't Ubuntu. It was a mix of Amazon Linux, CentOS, RHEL, and Ubuntu or Debian.
In containers I most often see Ubuntu-minimal or Alpine.
And while Ubuntu is represented in both those groups, it's not clearly the de facto anything.
The only place I'd personally argue you really need to run Ubuntu (unless you really want to spend time hacking desktop configuration) is on a laptop.
But even then there are a large group of people who do run things other than Ubuntu like PopOS or Mint or Fedora or whatever other new distro there is.
Every time I see Ubuntu listed as the "de facto standard" or similar, I realize that I've never seen an Ubuntu server in production.
They're definitely a popular solution, and I'm not sure if I should be surprised I've never seen one, or if maybe it's region / industry specific.
RHEL? Yes. Lots of CentOS, most now looking at Rocky and Alma. A few Gentoo and Arch boxes at smaller businesses. Been logged into the odd BSD, AIX and HP-UX machine before.
Ubuntu? No... Never seem to stumble upon SUSE either, for what it's worth.
I used to be a SuSE fan until 2003, when I switched to consulting. I became a SuSE Gold partner peddling SuSE Enterprise / OpenXchange (SLES/SLOX), and they sent us out with pre-alpha-grade software to do digital transformation at companies with 5K+ employees.
Most of their tools were cardboard cut-outs with severe bugs and missing functionality. I did this for 6 months, losing 3 key clients that were important for my survival, and almost went bust.
They lost the plot the moment they introduced yast2 (their only real value proposition at the time compared to other distros was yast); everything went downhill from there.
I haven't seen SuSE in the wild since around the same time. SAP / Salesforce seem like a good fit for them: they're equally dependent on consultants like me whose job is to perpetually apologize to customers. I don't think SuSE has much of an impact outside Germany.
Ironically, Android is the Linux distribution with the most security knobs turned on (SELinux, seccomp, HWASan, FORTIFY_SOURCE, userspace drivers, ...) without really being Linux.
Do not lock down to your IP address. Home IPs change all the time, and with proper security there should be no way to access the server other than proper ssh authentication or physical access. There is no good reason to be doing it in this context. If SSH turns out to have a massive vulnerability that bypasses keyauth then every service on the net will be torn down.
Some people recommend running a VPN server and then using SSH over VPN for "improved security", but pretty much every VPN apart from WireGuard has a pretty poor track record there.
SSH is in all likelihood the most secure server software that you can have on a Linux box. Everything else you put in front of it is likely to be a downgrade.
As you essentially say, WireGuard is great. I firewall off direct SSH and first use WireGuard to connect to the server instead.
One advantage is that if your firewall is set up right, the server is completely invisible: unauthenticated UDP packets are dropped, just as on any other unused UDP port.
I still configure SSH to best practices just in case a configuration blunder inadvertently causes the firewall to accept connections.
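A minimal nftables version of that "invisible" setup might look like this (the port and interface name are the common defaults; adjust to taste):

    table inet filter {
      chain input {
        type filter hook input priority 0; policy drop;
        ct state established,related accept
        iif "lo" accept
        udp dport 51820 accept              # WireGuard; unauthenticated packets get no reply
        iifname "wg0" tcp dport 22 accept   # SSH reachable only over the tunnel
      }
    }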
Most people who have tried hosting anything know whether or not they have a static IP address for their home or small-business connection. It's usually a paid extra, even available on some ISPs that otherwise only give out IPv6 addresses with CGNAT.
If the IP address doesn't change very often, it's not a bad idea to set up a dynamic DNS script and base your allow list on that subdomain rather than the raw IP address.
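A cron-driven sketch of that idea, assuming an nft set named ssh_allow already exists and a hypothetical dynamic-DNS name:

    #!/bin/sh
    # refresh the SSH allow-list from a dynamic-DNS record
    ip=$(dig +short home.example.com | tail -n1)
    [ -n "$ip" ] || exit 1
    nft flush set inet filter ssh_allow
    nft add element inet filter ssh_allow "{ $ip }"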
I lock down to my IPv6 prefix just to cut down on the log spam of all those failed attempts. Seconds after standing up a server, it's in the full swing of brute-force SSH login attempts. Maybe once journalctl gets better exclusion filters I won't care so much, since I'll be able to see what's actually going on in the logs.
> If SSH turns out to have a massive vulnerability that bypasses keyauth then every service on the net will be torn down
Those seem to contradict each other.
I agree that black-/whitelisting should not be the center of your security architecture, but it sure helps in the scenario of an authentication-bypass vuln.
If you have multiple systems anyway, then having a light-weight jump box (or 2 for redundancy) may be worth setting up. Then limit 'internal' hosts to only allow connections from the jump boxes / bastion hosts.
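Client-side this is painless nowadays with ProxyJump; the hostnames here are placeholders:

    # ~/.ssh/config
    Host jump
        HostName jump.example.com

    Host internal-*
        ProxyJump jump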
There's truly no fool-proof way to go from 1.2.3.4 to US, UK, CN, etc. IP addresses are constantly changing hands. I'm in the Southern US and yet a residential ISP I had showed up as coming from Montreal, Canada for years in many geo IP databases. A friend's house down the street sometimes gets mistaken as a Brazilian IP address.
Almost every time I've used a WAF's geo-IP blocking tool I've either personally experienced or had customers complain about being blocked incorrectly.
If you're dynamically getting IP addresses and you're allow-listing based on country of origin, expect to get locked out eventually even if you're sitting in the same place.
That's just a PTR record though. While scarlet.be implies the organization controlling it is in Belgium, that's not necessarily a guarantee the actual device using it is in Belgium. scarlet.be could deploy a box in Ghana or Chile and have its PTR record updated to something.dsl.scarlet.be. There's no actual enforcement that the device is in some physical location.
Loads of IP addresses for cloud providers ultimately resolve to things like amazon.com or google.com, does that mean those requests are from the US because it ends in .com?
I am not looking for a fail-proof way to ban all non-relevant IPs.
I'm looking for a method to exclude 99% of traffic based on IP (if possible), where I know how to get in if the IP changes and isn't updated automatically on the server (as a failsafe).
That usually doesn’t matter that much. You can get into the console of the node from whichever cloud company you’re renting it from and change it there.
What has helped me reduce login attempts by many orders of magnitude, more than fail2ban, is changing the default SSH port from 22 to something in the 10000-30000 range.
Additionally it might be a good idea to forbid password logins altogether by adding this line in /etc/ssh/sshd_config:
PasswordAuthentication no
Of course, you should make really sure you actually have a working public key in your user's "~/.ssh/authorized_keys" file and/or in "/root/.ssh/authorized_keys", otherwise you might lock yourself out of the server.
But the point here is: given the choice, you should never regularly log in with an ssh password if you can use a key instead.
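Putting both suggestions together, a sketch of the relevant sshd_config lines (the port number is just an example), plus the usual safety net:

    # /etc/ssh/sshd_config
    Port 23456
    PasswordAuthentication no

    # validate the config before restarting, and keep an existing
    # session open in case something went wrong
    sshd -t && systemctl restart sshd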
Just be aware, if you do this and your cloud provider only offers a direct terminal (e.g. via VNC) as a fail-safe, you'll be unable to use your key there in case of some problem with your private key or a firewall issue blocking SSH. A reasonable middle ground might be to use a key as your daily driver and keep a 100+ character random password as a "break glass" backup.
I think most consoles that cloud providers offer are attached via virtual serial consoles (ttys) and not via SSH. So you can disable passwords for SSH but still use them via the cloud provider remote console.
At least for KVM based virtual servers that I have this is the case.
"direct terminal" access, even via VNC, ipmi, whatnot would still allow one to login locally as root, "PasswordAuthentication No" only affects sshd, not pam.
I'd caution against the 100+ character password for this use case. Some VM/VNC combos don't have clipboard integration. Diceware is sufficient and, IMO, the right choice for any password that might have to be entered by hand.
Newbie question here: how is a private key stored on a device I can easily lose more secure than a (long and sufficiently random) password that I've memorized and type in only when intended?
For one, a key is not transmitted over the network, but the bigger reason is that most people don't use sufficiently long, random, and unique passwords. If you are running a server where only you SSH in, and you use a long, random, unique password, you are probably fine. But for most people it's just easier to use keys at that point, since using long random unique passwords is not much easier than using keys.
One upside of keys is that since the server does not have your private key, you don't need to rotate it if that server is hacked; you can reuse the same key for multiple servers and services. If you reuse the same long random password, it only takes one of those servers/services being hacked for you to be compromised on all of them.
Adding to that, some servers might have a secondary user with a weak password that was created by an installer or an admin for testing purposes. Disallowing password login prevents others from exploiting these accounts.
If you plan to store your private key on a device you can lose, the key itself should have a passphrase too. So the attacker still needs a password to unlock the private key.
This is actually a good idea in general: securing the private key with a passphrase.
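And it's easy to retrofit onto an existing key:

    # add or change the passphrase on an existing private key
    ssh-keygen -p -f ~/.ssh/id_ed25519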
Well, it kind of does. A password is validated server-side, and if you lose it there is usually some recourse -- a reset or whatnot. It may be a hassle but it's always possible somehow.
If you lose a passphrase, no one can help you, even if you hit the HN front page and /r/all with a sob story. So backups and availability have a different cruciality.
Also, if you store a private key on the same medium as a password store that has weak encryption or whose key contains the passphrase, the key can't be considered strong anymore.
There are practical reasons to make a distinction and mistakes can be expensive.
Passwords are a symmetric key; hence if the server is compromised, so is the password. With asymmetric keys, a compromise of the public key is no problem.
But you are right: key files on disk are more vulnerable to theft than secrets in your head. Key files with a passphrase on top are most secure, but also most uncomfortable.
> Passwords are a symmetric key; hence if the server is compromised, so is the password
Pretty sure that’s not how it works; IIRC passwords are stored one-way hashed. And if it were true, then anyone with root access to a box could compromise every other (Unix) user’s key, which seems like a potentially bigger problem…
Passwords are (or rather should be) indeed stored hashed, using crypt. However, at login the provided password needs to be compared to the hashed one, which means the clear-text password needs to be rehashed. I am not sure this happens on the client.
> If it did then the server’s password file would effectively be plaintext.
Send the salt and hashing parameters to the client, then the client does the hashing and sends the hash; the server compares hashes. It's vulnerable to replay attacks, but so is the client sending the plaintext password to the server (assuming you're not using SSH or similar).
A quick google led me to RFC 4252 [0], section 8 of which (as far as I understood) describes ssh auth sending the password as a plaintext UTF-8 string (with the whole packet encrypted at the transport layer). While passwords in /etc/shadow are hashed, someone with access to your server can just install a malicious listener that catches this UTF-8 string.
I'm not an SSH guru, so if I'm mistaken please shout at me ;D
> A Password-Authenticated Key Exchange (PAKE) attempts to address this issue by constructing a cryptographic key exchange that does not result in the password, or password-derived data, being transmitted across an unsecured channel.
I wouldn't worry about storage. Anyone with root access can modify the sshd daemon (along with imap, pop3, and whatever else) to log all the passwords received.
It's a good idea to encrypt devices you could potentially lose anyway. Besides private keys, they probably contain session tokens, API keys, and other things--especially those saved by the web browser (cookies, cache, local storage)
Keep in mind the danger that if the SSH server crashes, other non-privileged users on that box can launch a fake server on that >1024 port to take its place.
But unless the non-privileged users have access to the ssh host key files -- definitely not allowed in any sane setup -- their MITM sshd will throw big, obvious error messages at most users. (That's the same mechanism protecting you against MITMs via all sorts of "intercept the packets" network attacks.)
My servers only have `root` and my own sudo user (plus the default system users). I also run all apps in Docker. I don't think this would be an issue for me.
I do the same simple trick, but I'd suggest 1022 instead. It's barely used, and it's below 1024, meaning no non-root actor on your system can listen on it and start harvesting credentials.
There is a difference between ssh disabling passwords and local accounts being disabled.
Say ssh disallows password login but I know the root password: if I ssh onto the box as another user, I can then su to root. If the root account is locked, I can't do this.
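Locking the account is one command; note that ssh root logins are governed separately:

    # lock root's password so "su -" with a password no longer works
    passwd -l root

    # ssh access is controlled by sshd_config regardless:
    # PermitRootLogin prohibit-password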
Security measures to deploy depend on the risk level involved, e.g., potential costs of being hacked.
Measures like SELinux, grsec, fwknop, snort, IDSes (Tripwire or Samhain), HSMs, hardware entropy sources, split SSH/TLS, and microservices compartmentalized into VMs rather than Docker containers all have their place.
What I'd like to see is a modern guide for setting up and operating a cluster of hosts that does not rely on any provider-specific settings. Say you want to run a cluster of Ubuntu servers, maybe exclusively with a workload scheduler like k8s, maybe with a mixture of nodes: how do you set it up securely and consistently, how do you apply updates, provision users, and deploy applications, and how do you centrally log and alert on events (systemd logs, docker logs, auditd)? Bonus points if there are pointers on how compliant that setup is with modern compliance requirements.
I know it's a lot to ask, but maybe there is such a guide available that does not just fall back to talking about provider-specific features (e.g. IAM).
Ansible -- and similar tools, though I lack experience outside of Ansible. You can see Ansible as a tool to automate the install/configure/update process that you would otherwise do manually on a single server. Then you can apply this "playbook" to any number of servers.
You just need an ssh connection to the target servers, with Python installed on them. Of course, you have to write rules for setting up a server, provisioning users, and monitoring (deploying Prometheus, pushing logs to a central server...). There are various plugins for integrating with providers, but the basic features are provider-independent.
Ansible is far from perfect (dependency on Python, inconsistent syntax, abuse of aliases, missing a strict mode...), but it's rather easy to learn and I've used it successfully (at a small scale).
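As a taste of it, a single hardening task might look like this (the file path and handler name are illustrative):

    # roles/hardening/tasks/main.yml
    - name: Disable SSH password authentication
      ansible.builtin.lineinfile:
        path: /etc/ssh/sshd_config
        regexp: '^#?PasswordAuthentication'
        line: 'PasswordAuthentication no'
        validate: '/usr/sbin/sshd -t -f %s'
      notify: Restart sshd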
fail2ban is unnecessary if a non-standard port is used. Even a sub-1024 SSH port gets extremely little traffic, with spurious login attempts just once per day or every few days, and most of these aren't going anywhere (admin:admin). Similarly, I don't think there is much point in disabling root login for personal servers and the like, though I disable password auth in SSH as a general rule. A firewall on the server itself should not be necessary in most cases, because unneeded "listen everywhere for everything" services should not be running in the first place. If the server is managed by multiple people, the firewall should be external to the server, so that the same person who "just wants to run this service for a test real quick" can't "change firewall policy real quick".
I suppose that depends on what you count as a "login attempt". Is opening a connection a login attempt? I would say it isn't. Is sending some random protocol header a login attempt? Doubtful. Is failing to negotiate a login attempt? Again, I'd say no (most likely a port scanner looking for old/vulnerable servers). Is SSH-1.5-Nmap a login attempt? I don't think so. As we have disabled password authentication, a client can't actually try a user/pass login, so what can't happen, doesn't.
These things show up, but are completely irrelevant to security.
* Use -t ed25519 to generate keys; much more efficient for the same security compared to RSA.
* Don't use ufw. It easily becomes a big mess and is a pain to manage with Ansible. firewalld is a much better high-level firewall, preferably with the nftables backend.
If you have a somewhat bigger fleet and manage a CA, you could look into using signed SSH certificates instead of public keys. That way you can provision access centrally without adding individual keys to individual servers.
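The mechanics, roughly (the CA path, identity, principal, and validity are examples):

    # sign a user's public key with the CA
    ssh-keygen -s /etc/ssh/user_ca -I alice -n alice -V +52w id_ed25519.pub

    # on every server, trust the CA instead of per-user keys
    # /etc/ssh/sshd_config:
    TrustedUserCAKeys /etc/ssh/user_ca.pub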
It’s all still what I’d call “good advice if you refuse to take some better advice”. The caveat at the beginning acknowledges that this is a pragmatic approach rather than the best approach, and I think in the intervening time I’ve become more convinced that the better approach is the only approach: namely to automate a lot more of these things, which is alluded to at the end.
I’d also ditch the use of any shared credential other than the emergency root password, which should be locked away and not actually known by any person. Your mechanism for syncing ssh pubkeys on the shared account (which, btw, isn’t specified in the article, which in my experience means it doesn’t really exist :D) should instead populate per-user key directories, and there should be one login per user.
> Enable Automatic Upgrades
I can't count on my fingers alone how many times things broke because I `apt upgrade`d without thoroughly reading the changelogs. And even when I do read the changelogs, something still gets past me.
Having auto-upgrades on a server is not a good idea.
Who cares about fail2ban? Unless you're worried about DoS, it seems pretty silly. Similarly with a firewall, assuming you have NAT and a firewall on your router.
On the other hand, backups as an afterthought is what leaves you paying ransoms. I prefer to think about backups directly when setting up the machine, since grouping data directories can help a lot with backup strategies later on. Of course, making sure you're the only one on the machine is step 1, but at least I like to set up backups before placing any serious data on the machine, it's a part of the initial setup for me.
> Of course the first-5-minute title is hyperbolic
I don't think it is. I've managed a server directly connected to the internet with a US government IP, and it was being port scanned from a Chinese IP within minutes of being turned on. If you are a target, then there is an adversary out there that is patiently waiting for the opportunity to exploit an unpatched vulnerability in new installs, as if your security is otherwise good it might be how they get their foot in the door on your network.
(In our case I really did have a "5 minute plan" to login as soon as the fresh install was booted, setup a firewall, lockdown the ssh server, and install fail2ban ASAP. I'd then check system logs to see if anyone got in before proceeding. Time was of the essence.)
No one in that scenario would do things manually like in the article.
But if you are doing it, then at minimum you should use a custom install medium with the latest packages bundled and all the configuration already baked in, so you hit the ground with sane defaults and cover the "first 5 minutes" from this article during install time.
Also, for any install I would always do a netinstall to pick up any updates released between media generation and install time, so you always have the latest and greatest at install time.
That would leave the installer exposed for the duration of the installation, though. I typically did installs disconnected from the internet for that reason.
Yeah, in any realistic case that is how you would do it.
But the scenario I was replying to was installing a server and immediately starting it with a public-facing IP before updating.
If I had to do that with no other sane option, that is how I would do it: custom install media with the latest patches bundled and pre-configured as much as possible.
But I agree, I would not install a public-facing server while it is public-facing. I would install it offline or on a private network; update, configure, and secure it; and then expose it.
Because in the most common use case it allows the same functionality as PasswordAuthentication, so if you want to disallow password-based logins, it also has to go. Note that newer OpenSSH versions (I don't remember how new) renamed this to KbdInteractiveAuthentication. So check your documentation,
and double-check everything you read on the internet.
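So, to actually shut the password door, you want both lines (the name of the second depends on your OpenSSH version):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no   # KbdInteractiveAuthentication on newer OpenSSH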
This is what I do in 2021:
* Set up spiped[1] in front of SSH
* Install and set up nftables[2].
* Lock down every service as much as possible in systemd[3]. (If the service ships with the distro, just use drop-in files[4] -- see the sketch after the links below.)
[1] https://www.tarsnap.com/spiped.html
[2] https://wiki.archlinux.org/title/nftables
[3] https://ruderich.org/simon/notes/systemd-service-hardening
[4] https://wiki.archlinux.org/index.php?title=Systemd&oldid=704...
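For [3]/[4], a drop-in I might start from looks like this -- a sketch, not a definitive profile; check `systemd-analyze security <unit>` afterwards and loosen whatever the service actually needs:

    # /etc/systemd/system/example.service.d/harden.conf
    [Service]
    NoNewPrivileges=yes
    ProtectSystem=strict
    ProtectHome=yes
    PrivateTmp=yes
    PrivateDevices=yes
    RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX
    SystemCallFilter=@system-service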