Yesterday,
All those backups seemed a waste of pay.
Now my database has gone away.
Oh I believe in yesterday.
Suddenly,
There's not half the files there used to be,
And there's a milestone hanging over me
The system crashed so suddenly.
I pushed something wrong
What it was I could not say.
Now all my data's gone
and I long for yesterday-ay-ay-ay.
Yesterday,
The need for back-ups seemed so far away.
I knew my data was all here to stay,
Now I believe in yesterday.
--
From usenet
My comment on the situation: Online mirrors are fine, but calling them backups is a stretch of the imagination, since you must assume that an event can compromise all data within a domain (be it the Internet or a physical location).
A true backup must be physically and logically separate.
Which is why we have the 3-2-1 rule, not only for businesses but also for personal data: https://www.veeam.com/blog/321-backup-rule.html Otherwise I agree: they are not "backups", just maybe a glorified copy.
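For personal data, a minimal sketch of what that can look like, assuming restic; the NAS path and bucket are made up, and credentials plus the repository password come from the environment:
```
# Copy 1 is the live data itself. Copy 2: a restic repository on a
# local NAS (second device / second medium).
restic -r /mnt/nas/restic-repo backup ~/documents

# Copy 3: a second, independent repository off-site (object storage),
# so a single event at home can't take out every copy.
restic -r s3:s3.example.com/my-backups backup ~/documents
```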
there are stronger backups and there are weaker backups, but as long as the intention is for an informational failsafe, they're all still backups. arbitrarily deciding what forms are "true" or "not true" or a "glorified copy" seems a bit silly to me. the world is just a bit more complex than that
what is a backup if not just a form of copy anyway?
An attacker may intrude into your environment and slowly destroy data without you realizing it. If this process takes e.g. 10 days, you need backups going back 11 days to be safe.
This scenario happens often (as far as I know) with ransomware attacks on personal devices: encrypt the least used documents first. Probably no one will notice for weeks that data "is gone".
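The retention side of that is easy to get wrong; a hedged example with restic, where the numbers are purely illustrative and just need to exceed the plausible dwell time:
```
# Keep enough history that a slow, unnoticed corruption (say 10 days)
# still leaves clean snapshots behind it.
restic -r /mnt/nas/restic-repo forget \
    --keep-daily 14 --keep-weekly 8 --keep-monthly 6 --prune
```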
Online mirrors are fine if they have boundaries that make them very certainly append-only.
Opening up scp/rsync and saying "our client only writes new files" is bad. Using a dedicated stream-writing interface over TLS is probably fine.
As for the other attack vector: segregating the admin credentials so that the stream-writing interface cannot be bypassed, yeah, fun. 2FA only gets you so far.
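One concrete way to get such a boundary, assuming restic's rest-server; host, paths, and certificate locations are placeholders, and pruning would happen only from a separate, better-protected admin machine:
```
# Serve repositories in append-only mode: clients can add snapshots,
# but cannot delete or rewrite existing data through this interface.
# (client authentication setup omitted for brevity)
rest-server --path /srv/restic-repos --append-only \
    --tls --tls-cert /etc/ssl/backup.crt --tls-key /etc/ssl/backup.key

# Clients push backups through the rest: backend only.
restic -r rest:https://backup.example.com:8000/myhost backup /etc /home
```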
> A true backup must be physically and logically separate.
That doesn't stop it from being targeted by hackers. No amount of foresight will save your backups unless they are in offline cold storage somewhere, protected by men-at-arms.
I set up append-only storage for a friend recently. His son downloaded some kind of game-related cheat tool online, and it encrypted his hard drive, his backup USB hard drive, his cloud storage and his NAS.
The little restic backup saved him: the malware pushed one snapshot of nonsense, but the repository kept several revisions of the old data.
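For anyone wondering what that recovery looks like in practice, roughly this (repository URL and snapshot ID are placeholders):
```
# The newest snapshot is the encrypted junk; the earlier ones are intact.
restic -r rest:https://backup.example.com:8000/son-pc snapshots

# Restore the last known-good snapshot into a scratch directory.
restic -r rest:https://backup.example.com:8000/son-pc \
    restore 1a2b3c4d --target /mnt/recovery
```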
On a similar note: does anyone have any experience with M-DISC? It seems like the perfect solution for long-term storage to me at the moment.
I use M-DISCs privately as part of the 3-2-1 strategy, also keeping two writer units around and testing them (mostly the writers) about once a year. No issues so far, but the oldest are only about 5 years old...
Still, cloud is better after all. 32GB is enough for every digital device and for shooting 8K video for 1000 centuries. No one should make backups. Storage expansion, especially SD cards, must not be allowed on phones, tablets and laptops. Local storage is not secure, and adding an SD card to a phone will introduce water leaks. Meanwhile the SIM card slot does not introduce water leaks, because obviously there is magic. Also, the SD slot is a waste of internal space. You might ask why a tablet is big as hell but still has no SD card slot. Because that extra space is for storing mana to dispel water while adding the cloud-subscription debuff to the users. Magic protects our phones from water, bullets, bricks, bad OTA updates, damaged USB ports, lack of OTG functionality, USB 2.0 transfer rates, the terrible MTP interface, eavesdropping over wireless, etc.
I don't know why you are being downvoted; maybe people don't like the form, but the fact that device manufacturers are removing useful features, such as the ability to expand your device's storage, is infuriating.
Because this makes life difficult for developers, for no good reason. Developers simply shouldn't have the access necessary to make these attacks succeed.
I find it really hard to have empathy for serious businesses who don’t have backups and are dependent on a single cloud.
Like for example if you are all in on AWS and do all your backups of your AWS systems to AWS then lose your account. Meh… your fault.
If you run a business then you have an absolute obligation to be able to instantly bring your business back up outside your primary hosting provider.
And if you’ve built all your infrastructure in a way that cannot be replicated outside that hosting provider then frankly that’s negligent.
All those AWS Lambda functions that talk to DynamoDB? Guess what… none of that can be brought up elsewhere when you lose your AWS account.
If you are a CTO then this is your primary responsibility and priority above everything else. If you are a CTO who has failed to ensure your business can survive losing your cloud then you are a failed CTO.
Yes, but as a customer your 3-2-1 strategy should include a backup off that cloud. Not the first time, and won't be the last time a cloud provider has a catastrophic data loss incident. Relying solely on your cloud provider for backups is a risk.
You know that after the fire in the OVH datacenter, they asked their customers to start their disaster recovery plans, and people asked where that option was in the OVH admin menu? Not excusing them, but many customers are completely clueless about backups and data security in general.
You can have X daily backups in rotation, and after X days of infiltration they're all garbage because they were overwritten with malware-encrypted data.
A backup isn't real until you've restored from it. That's why you should restore from backups regularly: firstly, so that you know the process and can see it actually works; secondly, so you can confirm you're actually backing up what you think you are backing up.
We've all set up backup scripts and forgotten to include new directories or files in the configuration as time went on... =)
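A rough sketch of such a drill, assuming restic; the repository path and the spot-check list are made up, and in reality you'd tailor the checks to whatever you actually care about:
```
#!/bin/sh
# Periodic restore drill: pull the latest snapshot into a scratch
# directory and spot-check that key paths came back.
set -eu

restic -r /mnt/nas/restic-repo restore latest --target /tmp/restore-test

for f in home/alice/documents home/alice/photos/2023; do
    if [ ! -e "/tmp/restore-test/$f" ]; then
        echo "MISSING FROM RESTORE: $f" >&2
        exit 1
    fi
done

rm -rf /tmp/restore-test
echo "restore drill passed"
```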
The parent comment intends to remind people that many things can happen to a backup after it's made. Backups cannot be "set and forget"; just making the backup isn't enough, since so many things can happen after you've taken it.
- Bitrot/bitflips silently corrupt your backups and your filesystem doesn't catch it
- The storage your backups are on goes bad suddenly before you can recover
- Your storage provider closes up shop suddenly or the services go down completely, etc
- malicious actors intentionally infiltrate and now your data is held hostage
- Some sysadmin accidentally nukes the storage device holding the backups, or makes some other mistake (to summon the classic, I'm betting there are a few people with stories where an admin trying to clean up some leftover .temp files accidentally hit SHIFT while typing
```rm -rf /somedirectory/.temp```
and instead ran
```rm -rf /somedirectory/>temp```
a defensive sketch for this one follows after the list)
- (for image level backups) The OS was actually in a bad state/was infected, so even if you do restore the machine, the machine is in an unusable state
- A fault in the backup system results in garbage data being written to the backup "successfully" (if you're a VMware administrator who got hit by a CBT corruption bug, you know what I'm talking about; if you aren't, just search for VMware CBT and imagine that system screwing up and returning garbage instead of the actual changed blocks the backup application was expecting)
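On the rm mishap above, a small defensive habit (the directory name is the same hypothetical one): quoting the argument means a stray '>' stays part of the file name instead of becoming a redirect, and ${dir:?} makes the shell abort rather than expand an unset variable to nothing.
```
dir=/somedirectory
# A mistyped '>' inside the quotes is just a non-existent path, so rm
# fails harmlessly instead of truncating a file and wiping the parent.
rm -rf -- "${dir:?}/.temp"
```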
Basically, unless you're regularly testing your backups, there isn't really any assurance that the data that was successfully written at the time of backup is still the same. Most modern backup programs have in-flight CRC checks to ensure that at the time of the backup, the data read from source is the same going into the backup, but this only confirms that the data integrity is stable at the time of the backup.
Many backup suites have "backup health checks" which can verify backup file integrity, but again, a successful test only means "at the time you ran the test, it was okay". Such tests _still_ don't tell you whether the data in the backup is actually usable and uncompromised; they only tell you that the backup application confirms the data in the backup right now is the same as when the backup was first created.
So the parent post is correct; until you have tested your backups properly, you can't really be sure if your backups are worth anything.
Combine this with the fact that many companies handle backups very badly (no redundant copies, storing the backups directly with production data, relying only on snapshots, etc), and you end up with situations like in the article where a single ransomware attack takes down entire businesses.
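For the "is the stored backup still intact" part specifically, most tools expose a check you can schedule; a hedged example with restic, where the 5% subset is an arbitrary trade-off between runtime and coverage:
```
# Verify the repository structure and re-read a random 5% of the packed
# data on each run, to catch silent corruption that metadata-only
# checks would miss.
restic -r /mnt/nas/restic-repo check --read-data-subset=5%
```
Even that only proves the repository still matches what was written, not that what was written is worth restoring, which is exactly the parent's point.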
A 3-2-1 backup strategy involves keeping three copies of your data, stored on two different types of media, with one copy kept offsite for disaster recovery.
you are still supposed to have multiple backups =)
I often have people complain after comparing my work's instance pricing to other cloud providers...
Then I try to explain that rotating a few dozen TB of data offsite to cold offline storage every week isn't cheap, because unlike some vendors, we take pride in data integrity and in ensuring that our DR plan is actually... you know, recoverable :P
If backups are append-only (rather than incremental backups that rewrite earlier data), an air gap is not strictly needed; it can be kept as an additional safeguard, performed less frequently because of the manual overhead. The crux of the matter is to assume the main system has been compromised and to prevent existing data from being overwritten.
I would not agree with this. Append-only file systems and storage aren't a bad idea and definitely help against accidental overwrites, but these systems have been punked quite frequently and in many ways, and I've worked with backup companies that home-rolled their own append-only backup implementations.
It didn't stop attackers from punking those systems in extremely common ways, even under the best circumstances: a forgotten password gets leaked, the backup application's or storage system's own encryption scheme gets turned against the victims, entire volumes just get deleted, the OS on the systems gets compromised, the list goes on.
I wouldn't consider append-only an anti-ransomware technique in itself; it just stops one of many common ways of compromising data. That's good, but I wouldn't rely on it to protect against even a run-of-the-mill ransomware scheme.
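Agreed that it only closes one path. If you do lean on append-only semantics, a provider-enforced retention lock (for example S3 Object Lock in compliance mode, shown below as a rough sketch with placeholder names and dates) at least takes "log in with the stolen admin password and delete the volume" off the table; it does nothing about the other vectors above.
```
# Object Lock has to be enabled at bucket creation time.
aws s3api create-bucket --bucket example-backup-bucket \
    --object-lock-enabled-for-bucket

# Each backup object gets a compliance-mode retention date; until that
# date, that object version cannot be deleted by any credential,
# including the account root.
aws s3api put-object --bucket example-backup-bucket \
    --key daily/2023-08-23.tar.zst --body backup.tar.zst \
    --object-lock-mode COMPLIANCE \
    --object-lock-retain-until-date 2023-09-23T00:00:00Z
```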
To utterly destroy an organisation you don't erase or encrypt their data. You change it. Slowly. Little by little. A birthday here, a name there, a number... using the normal ways to change this data. In this way you can go undiscovered for years, employees get blamed for making stupid errors for a LONG time, and there is absolutely no way to fix things, no matter what the backup strategy is.
What happened?
It is our best estimate that, when servers had to be moved from one data center to another, some of the machines were infected before the move, despite being protected by both firewall and antivirus, with an infection that had not been actively used in the previous data center and that we had no knowledge of.
During the work of moving servers from one data center to the other, servers that were previously on separate networks were unfortunately wired to access our internal network that is used to manage all of our servers.
Via the internal network, the attackers gained access to central administration systems and the backup systems.
Because CloudNordic says so on their temporary webpage. https://www.cloudnordic.com/ (in Danish, I used Google Translate to check and the result seems fine)
I don't think we need fearmongering about "shoddy journalism" for something so easy to check.
If not attacked and the page says not attacked: trustworthy.
If not attacked and the page says attacked: not trustworthy.
If attacked and the page says not attacked: not trustworthy.
If attacked and the page says attacked: trustworthy.
As long as the page says "attacked", it seems likely they were attacked? Why would they state it themselves if it wasn't true, losing trust for no reason?
However, there is a thing called "defacing", where the attackers share false information implying that more damage was done than really occurred.
My general rule is to stop trusting a compromised digital system until I hear from a person (journalist, in this case) confirming that the control over the system has been restored.
If journalists do not verify the facts themselves or via trusted (human) sources, it’s not journalism but syndication.
Realistically, the news was published yesterday and the notice is dated a week ago. I doubt that a company of IT experts would have failed to take a fake notice down. But I stand by my assessment of TechCrunch's journalistic standards.
The solution to ransomware? Backups. It's not more complicated than that. It's honestly puzzling that ransomware is the issue it is, crippling entire organizations. It just means they have inept IT teams.
Sucks this Danish cloud host provider didn't back stuff up properly.
More often than not, in my experience, the IT team wants proper backups but management baulks at the price and never authorizes it. Until something bad happens, of course.
Maybe they were backing up their stuff properly, but the backups were wiped as well. Even if you have some fancy append-only storage, someone still has access to it, and that access can be misused.
They could have wiped through other means, e.g. through ipmi. Although I don't think that was the case.
More realistically, it probably boils down to money. I wonder what it would cost to back up everything to a competitor's cloud daily, e.g. one PB of data per day. I have no idea how much it even costs to run a 200 gigabit link to another data center.
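Back-of-envelope: 1 PB/day is roughly 8×10^15 bits spread over 86,400 seconds, so about 93 Gbit/s sustained; a 200 gigabit link would be running at close to half capacity around the clock just for the backup traffic, before you even pay for storing it.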
I believe this is a case of "no true Scotsman". Whatever backup you propose, someone will point out that you could have done it better. You could have disconnected the backup servers from the network when not in use. You could have hired a dedicated person whose responsibility would be to deny or delay any request from management to delete the backups. And so on.
A (national) local hosting company suffered a datacenter fire and lost pretty much all customer data except for billing (which was rebuilt as they were using third party payment processors).
After the fire the company just let people know there were no backups unless bought explicitly, as the terms of service clearly stated.
The company is still operating (I know because I'm a customer) and not much has happened.
Yeah, some customers were screwed, but it was the kind of ignorant customer that gullibly paid €10/year for hosting (with PHP execution), database, domain, traffic, and then also expected the same service level as something way more expensive. There's not much to say about this: you get what you pay for.