You can have X daily backups in rotation, and after X days of infiltration they're all garbage because they've been overwritten with malware-encrypted data.
A backup isn't real until you've restored from it. That's why you should restore from your backups regularly: first, so that you know the process and can see it actually works, and second, so you can confirm you're actually backing up what you think you're backing up.
We've all set up backup scripts and then forgotten to add new directories or files to the configuration as time went on... =)
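Something like the following is what I mean by a restore test. This is only a rough sketch; it assumes the backups are plain .tar.gz archives in a /backups directory and that backup-manifest.txt lists relative paths you expect to find after a restore (all of those names are hypothetical):

```
#!/usr/bin/env bash
# Rough restore-test sketch: restore the newest backup into a scratch
# directory and check that everything we expect is actually in it.
# /backups and backup-manifest.txt are placeholder names.
set -eu

latest=$(ls -1t /backups/*.tar.gz | head -n 1)   # newest backup archive
scratch=$(mktemp -d)                             # throwaway restore target

tar -xzf "$latest" -C "$scratch"                 # actually perform a restore

# Fail loudly if anything we expect to be in the backup is missing.
while read -r path; do
    [ -e "$scratch/$path" ] || { echo "MISSING: $path"; exit 1; }
done < /backups/backup-manifest.txt

echo "Restore test of $latest passed"
rm -rf "$scratch"
```

Even a crude check like that catches the "forgot to add the new directory" problem, because the manifest forces you to say out loud what you think is being backed up.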
The parent comment is reminding people that many things can happen to a backup after it's made. Backups can't be "set and forget"; taking the backup is only the first step, because plenty can still go wrong afterwards:
- Bitrot/bitflips silently corrupt your backups and your filesystem doesn't catch it (see the checksum sketch after this list)
- The storage your backups are on goes bad suddenly before you can recover
- Your storage provider suddenly closes up shop, or its services go down completely, etc.
- Malicious actors intentionally infiltrate and now your data is held hostage
- Some sysadmin accidentally nukes the storage device holding the backups, or makes some other mistake (to summon the classic: I'm betting there are a few people with stories of an admin who, while trying to clean up some leftover .temp files, accidentally hit SHIFT while typing
```rm -rf /somedirectory/.temp```
and instead typed
```rm -rf /somedirectory/>temp```
which the shell reads as `rm -rf /somedirectory/` with output redirected to a new file called `temp`, wiping the whole directory)
- (For image-level backups) The OS was already in a bad state or infected, so even if you do restore the machine, it comes back in an unusable state
- A fault in the backup system results in garbage data being written to the backup "successfully" (if you're a VMware administrator and you've been hit by a CBT corruption bug, you know what I'm talking about; if you haven't, just search for VMware CBT and imagine that system screwing up and returning garbage instead of the actual changed blocks the backup application was expecting)
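For the bitrot point above, here's the checksum sketch I mentioned. It assumes the backups are plain files under a hypothetical /backups directory and that you run it on a schedule (cron or similar); record the checksums right after a backup run while the data is still trusted, then recheck later:

```
#!/usr/bin/env bash
# Catch silent corruption in backups that have already been written.
# /backups and SHA256SUMS are placeholder names.
set -eu

cd /backups

if [ ! -f SHA256SUMS ]; then
    # First run: record checksums while the data is still trusted.
    sha256sum ./*.tar.gz > SHA256SUMS
    echo "Checksum manifest created"
else
    # Later runs: any bit flip in a stored backup shows up as a mismatch
    # and sha256sum -c exits non-zero, which your scheduler can alert on.
    sha256sum -c SHA256SUMS
fi
```

You'd also need to append new archives to the manifest after each backup run; and note this only proves the bits haven't changed since you checksummed them, not that they were restorable in the first place.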
Basically, unless you're regularly testing your backups, there's no real assurance that the data that was successfully written at backup time is still the same. Most modern backup programs have in-flight CRC checks to ensure that the data read from the source is the same data going into the backup, but that only confirms integrity at the moment the backup is taken.
Many backup suites have "backup health checks" which can verify backup file integrity, but again, a successful test only means it was okay at the time you ran the test. Such tests _still_ don't tell you whether the data in the backup file is actually usable or uncompromised; they only tell you that the backup application considers the data in the backup to be the same now as when the backup was first created.
So the parent post is correct; until you have tested your backups properly, you can't really be sure if your backups are worth anything.
Combine this with the fact that many companies handle backups very badly (no redundant copies, storing the backups directly with production data, relying only on snapshots, etc.), and you end up with situations like the one in the article, where a single ransomware attack takes down entire businesses.
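Even the "keep a copy away from production" part doesn't have to be fancy. As a minimal sketch (hostnames and paths are made up, and this is not any particular product's method), a pull-based mirror on a separate box already helps, because a compromised production host has no credentials to destroy the copy:

```
#!/usr/bin/env bash
# Run on the *backup mirror* host, not on production: the mirror pulls,
# so production never holds credentials that could delete the copies.
# prod-host and the paths are placeholder names.
set -eu

rsync -a backup-user@prod-host:/backups/ /srv/backup-mirror/
```

That's still only two copies on two systems; for a proper 3-2-1 setup you'd add a third copy on different media, ideally offline or on append-only/object-locked storage.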
How did the saying go? "You don't have backups until you've successfully restored from them", or something like that. =)
Basically any 3-2-1 system is Schrödinger's backup until you've actually used it.