Machines die. Hardware has bugs, or is broken. Things just bork. It's a fact of life.
Would I build a file storage system around btrfs? No, not without proper redundancy at least. But I'm told Synology does.
I'm pretty sure there are plenty of cases where it's perfectly usable - the feature set it has today is plenty useful and the worst-case scenario is a host reimage.
I can live with that. Applications will generally break production ten billion times before btrfs does.
> Machines die. Hardware has bugs, or is broken. Things just bork. It's a fact of life.
I know, I'm a sysadmin. I care for hardware, mend it, heal it, and sometimes donate, cannibalize, or bury it. I'm used to it.
> worst case scenario is a host reimage...
While hosting PBs of data on it? No, thanks.
> Would I build a file storage system around btrfs? No, not without proper redundancy at least.
Everything is easy for small n. When you store 20TB on 4x5TB drives, everything can be done. When you have >5PB of storage across racks, you need at least a copy of that system running hot-standby. That's not cheap in any sense.
Instead, I'd use ZFS, Lustre, anything, but not BTRFS.
> I can live with that - applications will generally break production ten billion times before btrfs does.
In our case, no. Our systems can't stop just because a daemon died when one server among many fried itself.
I have worked on and around systems with an order of magnitude more data, and a single node failing did not matter. We weren't using btrfs anyway (for the data drives), and it definitely was not cheap. But storage never is.
But again, most systems are not like that. Kubernetes cluster nodes? Reimage at will. Compute nodes for VMs backed by a SAN? Reimage at will. Btrfs can actually make that reimage faster, and it's pretty reliable on a single flash drive, so why not?
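To make that concrete, here's a rough sketch of the kind of thing I mean - rolling a btrfs root back to a known-good snapshot instead of restoring a full image. The mount point and subvolume names are made up for illustration, not any particular distro's layout:

    #!/usr/bin/env python3
    # Sketch only: why a btrfs "reimage" can be fast. Recreating the root
    # from a snapshot is a near-constant-time metadata operation, unlike
    # restoring a disk image over the network. Paths are assumptions.
    import subprocess

    TOP = "/mnt/btrfs-top"             # top-level volume, assumed mounted
    ROOT = TOP + "/@root"              # assumed live root subvolume
    GOLDEN = TOP + "/@snapshots/base"  # assumed read-only golden snapshot

    def run(*cmd):
        print("+", " ".join(cmd))        # echo the command
        subprocess.run(cmd, check=True)  # abort on failure

    run("btrfs", "subvolume", "delete", ROOT)            # drop the broken root
    run("btrfs", "subvolume", "snapshot", GOLDEN, ROOT)  # writable copy of golden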
Well, that was my primary point. BTRFS is not ready for the kind of big installations handled by ZFS or Lustre at this point.
On the other hand, BTRFS’ single-disk performance, especially for small files, is visibly lower than EXT4’s and XFS’, so why bother?
There are many solutions for EXT4 which allow versioning, and if I can reimage a node (or 200) in 5 minutes flat, why should I bother with the overhead of BTRFS?
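For what it's worth, one such solution - just an illustrative sketch, with made-up VG/LV names and sizes - is a block-level LVM snapshot beneath a plain EXT4 filesystem:

    #!/usr/bin/env python3
    # Sketch of one "versioning under EXT4" option: an LVM snapshot beneath
    # the filesystem. VG/LV names and sizes are assumptions; needs root.
    import subprocess

    def run(*cmd):
        print("+", " ".join(cmd))        # echo the command
        subprocess.run(cmd, check=True)  # abort on failure

    # Point-in-time, block-level snapshot; ext4 on top stays plain ext4.
    run("lvcreate", "--snapshot", "--name", "data_snap",
        "--size", "10G", "/dev/vg0/data")
    # Mount read-only to pull files back out (add "noload" if the
    # journal was dirty when the snapshot was taken).
    run("mount", "-o", "ro", "/dev/vg0/data_snap", "/mnt/data_snap")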
It’s not that I haven’t tried BTRFS. Its features are nice, but from my perspective, it’s not ready for prime time, yet. What bothers me is the mental gymnastics pretending that it’s mature at this point.
It’ll be a good file system. An excellent one in fact, but it still needs to cook.
My impression of btrfs is that it's very useful and stable if you stay away from the sharp edges - until you run into some random scenario that leaves you with an unrecoverable file system.
But it has been that way for 14 years now. Sure, there are far fewer sharp edges than there were back then. For a host you can just reimage, it's fine; for a well-tested, fairly restricted system, it's fine. I stay far away from it for personal computers and my home-built NAS, because just about any other fs seems to be more stable.
The thing is, none of the systems I run have the luxury of a filesystem which can randomly explode at any time because I pressed a button the developers didn't account for, yet.
I was bitten by ReiserFS' superblock corruption once, and back then I had plenty of time to rebuild my system leisurely. My current life doesn't allow for that. I need to be able to depend on my systems.
Again, I believe BTRFS will be an excellent filesystem in the long run. It's just not ready for "format, mount and forget" yet, from my perspective. The only thing I'm against is the "it runs on my machine, so yours is a skill issue" take, which is harmful on many levels.