> you can change the ssh port and use a ssh key instead of a password.
I'd advise against changing the ssh port - I don't think the (small) inconvenience is worth the (tiny) benefit from obscurity.
I would always recommend turning off password authentication for ssh, though.
(along with disabling direct root login via ssh - but root-with-key-only is now the default, and if you already enforce key-based login, it's hard to come up with a real-world scenario where requiring su/sudo helps much for such a simple setup).
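For reference, the relevant sshd_config lines are roughly the following (assuming a stock Debian/Ubuntu OpenSSH - adjust for your distro):

    # /etc/ssh/sshd_config
    PasswordAuthentication no
    ChallengeResponseAuthentication no
    PubkeyAuthentication yes
    PermitRootLogin prohibit-password    # key-only root; use "no" if you never log in as root

    # then: systemctl reload ssh   (or sshd, depending on distro)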
I would probably amend your list to include unattended-upgrades (regular, automated security-related updates - but I guess that's starting to be standard, now?).
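On Debian/Ubuntu that's roughly the following - the package ships sane defaults for security-only updates:

    apt install unattended-upgrades
    dpkg-reconfigure --priority=low unattended-upgrades
    # or enable it by hand in /etc/apt/apt.conf.d/20auto-upgrades:
    #   APT::Periodic::Update-Package-Lists "1";
    #   APT::Periodic::Unattended-Upgrade "1";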
You will probably need an SSL cert, possibly from Let's Encrypt.
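With certbot and its nginx plugin that's typically just (the domain is a placeholder, obviously):

    apt install certbot python3-certbot-nginx
    certbot --nginx -d example.com -d www.example.com
    # renewals are handled by the timer/cron job the package installs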
At that point, with only sshd and nginx listening to the network, the avenues of compromise would be a kernel exploit (rare), an sshd exploit (rare) or an nginx exploit (rare) - compromise via apt or Let's Encrypt should also be unlikely.
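Easy enough to verify what's actually listening:

    ss -tlnp    # TCP listeners and the owning process
    ss -ulnp    # same for UDP
    # ideally just sshd on :22 and nginx on :80/:443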
Now, if the site is dynamic, there's likely to be a few bugs in the application, and some kind of compromise seems more likely.
Anecdotally, changing the ssh port on a very low-budget VPS is worth the effort because the CPU time eaten by responding to the ssh bots can be noticeable.
This has been my experience as well. I remember having a VPS with Digital Ocean a long time ago and it was getting hammered badly with bots. Changing the port, switching to pubkey-only authentication and installing fail2ban for future pesky bots did the trick for me.
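That combination is pretty minimal to set up - something like the following (port number and ban settings are just examples):

    # /etc/ssh/sshd_config
    Port 2222
    PasswordAuthentication no

    # /etc/fail2ban/jail.local
    [sshd]
    enabled  = true
    port     = 2222
    maxretry = 5
    bantime  = 1h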
To be honest, I don't think the people controlling those bots want to deal with those of us who make it harder for them to gain access. Instead, why not happily hammer away at everyone else's port 22 left in its bare-minimum configuration? Those who enhance their security were never the target audience to begin with.
> Those who enhance their security were never the target audience to begin with.
This is pretty insightful. Statistically, attackers are probably mostly looking for badly configured machines which are easy to exploit rather than hardened systems that take a long time to penetrate.
State actors and obsessed attackers are different, of course. But statistically, taking even the simplest precautions keeps one out of the reach of the broad majority of such attacks.
I'm more familiar with AWS. There I just firewall SSH to my own IP (with a script to change it for the laptop case, or use mosh), and thus spend no CPU time responding to ssh bots.
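The script is nothing fancy - roughly the following (the security group id is a placeholder):

    MYIP=$(curl -s https://checkip.amazonaws.com)
    aws ec2 authorize-security-group-ingress \
        --group-id sg-0123456789abcdef0 \
        --protocol tcp --port 22 --cidr "${MYIP}/32"
    # plus a revoke-security-group-ingress call for the previous address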
Do VPS providers offer some sort of similar firewall service outside your instance?
I don't think low budget VPS providers typically allow this. That said, fail2ban works OK, as does manual iptables (now nftables) - unfortunately /etc/hosts.allow is deprecated[1].
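The manual nftables rule is basically a one-liner anyway - assuming the usual "inet filter" table with an "input" chain, something like:

    # allow ssh only from a trusted subnet, drop everything else on 22
    nft add rule inet filter input ip saddr 203.0.113.0/24 tcp dport 22 accept
    nft add rule inet filter input tcp dport 22 drop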
If you don't know in advance which IP or subnet you'll be arriving from, another option would be port knocking (eg: knockd). Although I'd try to avoid adding more code and logic to the mix - that goes for both fail2ban and knockd.
[1] edit: Note, the rationale for this is sound: the firewall (pf or nftables) is already very good at filtering on IP - so better to avoid introducing another layer of software that does the same thing.
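For completeness, a knockd setup looks roughly like the stock example config - the knock sequence and the iptables command are illustrative:

    # /etc/knockd.conf
    [openSSH]
        sequence    = 7000,8000,9000
        seq_timeout = 5
        tcpflags    = syn
        command     = /sbin/iptables -I INPUT -s %IP% -p tcp --dport 22 -j ACCEPT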
I'm inexperienced, but relatively confident that if I use an off-the-shelf login module to protect everything but the login page, the handful (literally) of users with credentials are internal to the organization and trusted with the underlying data anyway, and the data itself is essentially worthless to outsiders, then I'm pretty safe.
My thinking is that even if I, for example, fail to sanitize inputs to the database or inputs displayed to other users, that won't lead to an exploit absent a bug in the off-the-shelf login module or someone attacking their colleagues (in which case there are other, weaker links).
The organization I'm building this for has other moderately sensitive systems on an internal network, but the server I'll be managing will be on the public internet. The site I'm building will export CSV files to be opened with Excel, so I suppose if the site I build was compromised it could be used to get an exploit onto a computer in the network. Still, I presume that if they're facing that kind of attack they'll have plenty of other weak links, like documents spearphished to people - and I'm pretty sure the sensitive systems are on a separate internal network.
But I also think that I would trust eg apache/nginx basic auth more than login/session handling at the application level (php/ruby/... with users in a db).
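Eg for nginx, basic auth in front of the whole site is only a couple of directives (paths are illustrative):

    # inside the server block
    location / {
        auth_basic           "restricted";
        auth_basic_user_file /etc/nginx/.htpasswd;   # created with htpasswd from apache2-utils
    }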
Assume at least one user has a dictionary password, and suddenly you'll want to enforce 2fa via otp or similar - for peace of mind.
As a general rule, I tend to assume a targeted attack will succeed (no reason to make that too easy, though) - what I aim to avoid are the bots.
They'll likely be brute forcing passwords, blindly trying sql injection - along with a few off the shelf exploits for various popular applications (eg: php forum software).