
Anecdotally, this is untrue.

Personally, BTRFS is the only filesystem that has ever caused me any data loss or downtime. I was using a single disk, so it should have been the happy path. At some point the filesystem got into a state where the system would hang when mounting it read/write. I was able to boot off of a USB stick and recover my files, but I was unable to get the filesystem back into a state where it could be mounted read/write.
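For anyone who lands in the same situation, the general shape of the recovery was: mount read-only, copy everything off, and fall back to btrfs restore if even a read-only mount fails. A rough sketch of that approach (device paths are placeholders, and the exact rescue mount option spelling depends on your kernel and btrfs-progs version):

    #!/usr/bin/env python3
    # Sketch only: pull files off a btrfs volume that will no longer
    # mount read/write. Devices and paths below are placeholders.
    import subprocess

    DEV = "/dev/sdb1"      # the damaged btrfs volume (placeholder)
    MNT = "/mnt/rescue"    # temporary read-only mountpoint
    DEST = "/mnt/backup"   # a known-good disk to copy files onto

    def run(cmd):
        print("+", " ".join(cmd))
        return subprocess.run(cmd).returncode

    # First try a read-only mount; newer kernels take rescue=usebackuproot,
    # older ones spelled it plain usebackuproot (or recovery).
    if run(["mount", "-o", "ro,rescue=usebackuproot", DEV, MNT]) == 0:
        run(["cp", "-a", MNT + "/.", DEST])
    else:
        # If it won't mount at all, btrfs restore copies files out
        # without mounting the filesystem.
        run(["btrfs", "restore", "-v", DEV, DEST])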

At work, we used to run BTRFS on our VMs because that was the default. Without fail, every VM would eventually get into a state where a regular maintenance process would completely hang the system and prevent it from doing whatever task it was supposed to be doing. Systems that wrote more to their BTRFS filesystems hit this sooner than ones that didn't write very much, but every VM succumbed eventually. In the end the server team had to rebuild every VM on ext4.

I know that anecdotes aren't data, but my experience with BTRFS will keep me from using it for anything even remotely important.




Unfortunately you got what you paid for! :) No one in the Linux world appears to be seriously investing in engineering a robust and reliable filesystem, with e.g. correctness proofs. We have only hobby projects.


At work, this all happened on a commercial Linux distribution which we do pay for. As far as I recall, their support was unable to resolve the issue, hence rebuilding all those VMs. I’m not on the server team, so I don’t know many details, but I was affected by this issue and it caused a lot of grief across the organization.

So no, I don’t think we got what we paid for.


Are you sure btrfs is supported in production by your commercial Linux distribution? I would be surprised if it were. Red Hat and Ubuntu do not support it.


It was at the time, it may not be now.


Facebook literally uses it in production. There are plenty of insults we can use, but hobby project is not one of them.


Facebook presumably uses xz in production too, and that is a hobby project (as we all recently found out). My understanding is that Btrfs development was not sponsored by any company and was entirely a "community effort". That would certainly explain why it's perpetually unfinished.


I honestly find it weird when I hear about companies like Facebook and Synology using it.

Facebook could easily work around failures; they've surely made every part of their infrastructure easily replaceable, and probably automated at some level. I'm sure they wouldn't tolerate excessive filesystem failures, but they definitely have the ability to deal with some level of it.

But Synology deploys thousands of devices to a wide variety of consumers in a wide variety of environments. What's their secret sauce to make BTRFS reliable that my work's commercial Linux distribution doesn't have? Surely there's more to it than just running it on top of md.
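As far as I can tell (my reading of public write-ups, not anything official from Synology), "on top of md" means md provides the redundancy and btrfs sits above it as a single device, contributing checksums and snapshots rather than its own RAID. Roughly this layering, with placeholder device names:

    #!/usr/bin/env python3
    # Illustrative only: the btrfs-on-md layering, not anyone's actual
    # provisioning code. Device names are placeholders.
    import subprocess

    def run(cmd):
        print("+", " ".join(cmd))
        subprocess.run(cmd, check=True)

    # md handles mirroring and rebuilds...
    run(["mdadm", "--create", "/dev/md0", "--level=1",
         "--raid-devices=2", "/dev/sda2", "/dev/sdb2"])
    # ...and btrfs sees only a single device, so it supplies
    # checksumming and snapshots instead of its own RAID.
    run(["mkfs.btrfs", "-d", "single", "-m", "dup", "/dev/md0"])
    run(["mount", "/dev/md0", "/mnt/volume1"])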

Maybe in the years since I was burned by it things have greatly improved. Once bitten, twice shy though - I don't want to lose my data, so I'm going to stick to things that haven't caused me data loss.



