
In my particular case, no, sorry if I gave that impression. I have not seen anything yet that would compel me to use remote backup as my primary backup destination. Potential failures aside, it's slow in both directions.

My local backup strategy is a collection of external --and these days inexpensive-- hard drives as well as a large rack-mount NAS RAID array. Each of about a dozen systems has its own local external backup drive right on the desk next to the computer. Some have dual local backup drives. We are talking on the order of $100 for a couple of terabytes today. Then, a number of systems also back up to the NAS. Every so often we rotate drives out for longer-term storage at a fireproof off-site location. It'd take a lot more than Dropbox or any service having a glitch for me to lose any data.

I really don't understand folks who don't, at the very least, have one external USB backup drive on their system. On OSX you have Time Machine, which is ridiculously easy to use. On Windows you can spend a few bucks on Norton Ghost and you are good to go. All-up, it's probably not more than $200 per system and maybe half an hour to set it up.

Do that, plus my recommendation to host your Dropbox location on a dedicated partition in order to force a copy operation during drag-and-drop (both Windows and OSX), and you won't have to care about anything that happens at Dropbox or any other service.

It's about engineering, not hoping for, a system to protect your data.

As far as remote backup is concerned, I'd be interested in a system that might allow me to send them encrypted disk images on physical media for backup while providing some online access to the same.

Even with incremental backup you have to do a full backup every so often. In the case of our Windows systems running Norton Ghost, they are set up to do full backups the first day of the month and incremental backups every day after that. It's dead easy, reliable and works great. Saved my hide a number of times.
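For what it's worth, the schedule is simple enough to express in a few lines. This is only a sketch of the logic I just described, not Norton Ghost's actual configuration:

    # Sketch of the full-plus-incremental schedule described above
    # (illustrative only; not Norton Ghost's actual config format).
    import datetime

    def backup_type_for(day: datetime.date) -> str:
        # Full image on the first day of the month, incremental every other day.
        return "full" if day.day == 1 else "incremental"

    print(backup_type_for(datetime.date(2011, 6, 1)))   # full
    print(backup_type_for(datetime.date(2011, 6, 15)))  # incremental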

A full backup of about 600GB happens in --I think-- about three or four hours. That's the problem with remote backup: the same full image would require a third of a year on a typical DSL connection available in the US today. Actually, it could take twice as long, two thirds of a year, because you would have to interrupt your backup in order to get your bandwidth back for use during business hours. So, if it takes you nearly a whole year to back up this much data, the whole thing is just about useless as implemented. Your incremental backups are likely to take days and you can't even consider the idea of doing full images every thirty or sixty days. That's what's broken about the concept of remote backup, without even looking at the potential software bugs at the various providers that could lead to data loss.
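To put rough numbers on that, here's a quick back-of-envelope calculation. The upstream rates are my assumptions for typical US DSL (roughly 0.5 to 1 Mbit/s up); plug in your own figures:

    # Rough transfer-time estimate for a full remote backup over DSL.
    def transfer_days(size_gb: float, mbit_per_s: float) -> float:
        bits = size_gb * 8e9                 # size in bits (decimal GB)
        seconds = bits / (mbit_per_s * 1e6)  # link rate in bits per second
        return seconds / 86400               # convert to days

    print(round(transfer_days(600, 0.5)))  # ~111 days at 0.5 Mbit/s upstream
    print(round(transfer_days(600, 1.0)))  # ~56 days at 1 Mbit/s upstream

At half a megabit of upstream, 600GB really is about a third of a year of continuous uploading, before you even account for giving the bandwidth back during business hours.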

A more usable system would be one that, as I said, would receive my full images on physical media and absorb them into its storage arrays for both backup and remote-access purposes. If you needed to recover a few files here and there you could easily do so over a decent DSL connection. A full recovery would require physical media being shipped to you at a greater cost. Every x number of days you'd send a new full image set and go incremental after that.

The game changer here will be if we ever get 100Gb/s network connectivity to the home and office. That would change the landscape in amazing ways. You could talk to remote storage probably as fast as you talk to local storage. At that point in time, having multiple redundant and geographically separate remote backup locations might very well be the most sensible approach to an organization's backup strategy. Such a system could even talk to a locally installed "backup server" in order to make sure that, if connectivity is compromised in some way, you still have access to your organization's data during the outage.
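A rough comparison shows why. The local disk figure below is an assumption (around 100 MB/s for a single spinning disk); the point is only that at 100Gb/s the network pipe would no longer be the bottleneck:

    # Rough comparison: a full 600GB image over a 100 Gbit/s link vs. a local disk.
    def seconds_to_move(size_gb: float, bytes_per_s: float) -> float:
        return size_gb * 1e9 / bytes_per_s

    print(seconds_to_move(600, 100e9 / 8))  # 100 Gbit/s link: ~48 seconds
    print(seconds_to_move(600, 100e6))      # 100 MB/s local disk: ~6000 seconds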

The topic of backup is conceptually very simple but becomes really complex when you consider the multiple potential points of failure and how to deal with them.

This is why I don't consider any issues at Dropbox to be serious. I obviously don't think of them as backup, and they can't convince me to think that way no matter what they do or say. This isn't to say that I think the service is bad. Not at all. It's because I've been around and I've seen too many failures (some of my own) that I take a very careful and guarded approach to my data. And that's healthy. I use Dropbox for team communications. I almost think of it as a really neat way to "FTP" stuff around. So, instead of setting up my own FTP server and having to manage my users and storage, I can use Dropbox. No data is ever moved to Dropbox. All data is copied to Dropbox. That means that the data remains locally stored and, more importantly, locally backed up every night. So, through engineering, failures at Dropbox or anywhere between my DSL connection and their data centers are of no consequence whatsoever.



> A full backup of about 600GB happens in --I think-- about three or four hours. That's the problem with remote backup: the same full image would require a third of a year on a typical DSL connection available in the US today. Actually, it could take twice as long, two thirds of a year, because you would have to interrupt your backup in order to get your bandwidth back for use during business hours. So, if it takes you nearly a whole year to back up this much data, the whole thing is just about useless as implemented. Your incremental backups are likely to take days and you can't even consider the idea of doing full images every thirty or sixty days.

The thing is, you don't need independent full backups within a single online backup provider, and they would probably deduplicate your backups anyway. Incremental backups work just fine, and will always get your data backed up that night unless you are the kind of person who generates many gigabytes of content in a single day.
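For context, here's a minimal sketch of the kind of content-addressed deduplication I mean. The chunk size and storage layout are illustrative assumptions, not any particular provider's scheme:

    # Minimal content-addressed dedup sketch: a repeat "full" backup only
    # uploads chunks the provider has not already seen.
    import hashlib

    stored_chunks = {}  # hash -> chunk bytes, standing in for the provider's store

    def upload(data: bytes, chunk_size: int = 4 * 1024 * 1024) -> list:
        manifest = []
        for i in range(0, len(data), chunk_size):
            chunk = data[i:i + chunk_size]
            digest = hashlib.sha256(chunk).hexdigest()
            if digest not in stored_chunks:   # unchanged chunks cost no bandwidth
                stored_chunks[digest] = chunk
            manifest.append(digest)
        return manifest  # a "full" backup is just a new list of chunk hashes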

> I use Dropbox for team communications. I almost think of it as a really neat way to "FTP" stuff around.

Yeah, I can understand that. I was considering replying to one of your other posts with the comparison, but I didn't want to start too many threads at once. The problem is that when you use Dropbox as a better FTP, you lose out on a lot of the benefits of syncing. You can no longer go to a different computer and pick up right where you left off, if you didn't happen to copy the files in again since the last change. If you're sharing files with a coworker, you no longer have any idea where the most recent version is. You don't have a full list of file versions. And your FTP method of using Dropbox doesn't really have anything to do with backups. You could move files into the Dropbox folder and then set it to be backed up every five minutes if you wanted to.


Obviously there are a number of use patterns for Dropbox. I just don't use it as a "live" store. The whole concept scares me. As far as moving to another computer and having access to the latest files, I can always VNC into a machine or VPN into the network if I am outside.

I just couldn't bring myself to use it that way. Our files are our work product. There is no way I could consider having the only copies of the files on a service that could suffer total loss of data. It is, in my world at least, an absolute non-starter.

With regards to the idea of sharing files with a coworker and not knowing where the latest files are, well, that's what Git is for, isn't it?

As a matter of principle I have a bad reaction to the idea of calling a data loss event "Another Dropbox horror story". And this has nothing to do with Dropbox or any other service. Dropbox is not responsible for your data. You are. If the data is important enough that total loss would be catastrophic, what are you doing placing all your eggs in one basket? It isn't a Dropbox horror story, it's a story about someone who didn't care enough about their data to safeguard it, and then the blame gets shifted towards whoever last held the data. That just ain't cool. I own my fuck-ups. It isn't fun, but I stopped blaming others for my crap a long time ago. In the end it works out better that way.

Does Dropbox have issues to fix? Sure. What piece of software doesn't? Knowing that software is imperfect is all the more reason not to, again, place your valuable eggs in one basket.

I have a feeling that there are a lot of young and inexperienced people surfing HN who have never lost anything of significance. They come across someone like me who calls bullshit when he sees it and is willing to stand for what is a solid position and they don't understand it. The only tool they have to express themselves is to mindlessly down-vote cargo-cult style. That's fine, with that approach they'll learn soon enough.

I don't know of anyone --not ONE person-- with, say, twenty or thirty years of successful work in computing who would trust the only copy of their projects to any one service --no matter what they claim or how good they might be. Hell, most of these people, just like me, would not trust the only copy of their projects to any single device, local or not. Shit happens. And some of us have seen it happen and have had to clean up the mess on more than one occasion. Eventually you learn.

I do love Dropbox, but they are not going to come and redo two years worth of work if something goes wrong. And that can happen. And, if that were to happen, I would not blame them. I'd blame myself for being a moron and not having my own backups. Maybe I'll figure out a way to use their active sync technology while still satisfying the requirement for solid redundant backups. Until then, I have enough problems with other stuff to have to worry about having only one copy of our projects stored anywhere, local or not.



