Engineers who have worked on HFS+ believe that it actually is losing data left and right. I'm inclined to believe that they're right and that most people simply don't notice.
I used to work on the largest HFS+ installation in the world, and we saw data corruption all the time: mostly large media files with block-sized sequences of zeroed data. We were lucky in that we had non-HFS+ file systems backing us up, but deeply unlucky in that, given the nature of our media, corrupted random blocks were far more likely to cause media-level problems than container-level ones, and so were much harder to catch.
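To make the failure mode concrete, here's a minimal sketch (Python, not our actual tooling) of the kind of scan you'd run to flag candidates. The 4096-byte block size is an assumption, and an all-zero block can be perfectly legitimate (sparse regions, actual silence in audio), so this can only surface files for manual review:

    import sys

    BLOCK_SIZE = 4096  # assumed allocation block size
    ZERO_BLOCK = bytes(BLOCK_SIZE)

    def find_zeroed_blocks(path):
        """Yield byte offsets of fully zeroed, block-aligned regions."""
        with open(path, "rb") as f:
            offset = 0
            while True:
                block = f.read(BLOCK_SIZE)
                if not block:
                    break
                # A short final read never matches, which is what we want.
                if block == ZERO_BLOCK:
                    yield offset
                offset += len(block)

    if __name__ == "__main__":
        for path in sys.argv[1:]:
            for off in find_zeroed_blocks(path):
                print(f"{path}: zeroed block at offset {off}")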
It doesn't have checksumming yet. I have a feeling it will be added before the final release, since data integrity is one of the tenets they're pushing.
And the author's own conclusion, at the end of the post, was that it was due to bad hardware, not HFS+.
To quote:
>I understand the corruptions were caused by hardware issues. My complaint is that the lack of checksums in HFS+ makes it a silent error when a corrupted file is accessed.
This is not an issue specific to HFS+. Most filesystems do not include checksums either. Sadly…
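Until the filesystem does it for you, you can approximate it at the application layer, much like `shasum -c` does: record a digest per file, then re-verify later so corruption fails loudly instead of silently. A rough sketch, with a made-up manifest format:

    import hashlib
    import sys

    def sha256_of(path, chunk_size=1 << 20):
        """Hash a file in 1 MiB chunks to avoid loading it whole."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def write_manifest(paths, manifest="checksums.txt"):
        with open(manifest, "w") as out:
            for p in paths:
                out.write(f"{sha256_of(p)}  {p}\n")

    def verify_manifest(manifest="checksums.txt"):
        with open(manifest) as f:
            for line in f:
                digest, path = line.rstrip("\n").split("  ", 1)
                status = "OK" if sha256_of(path) == digest else "CORRUPT"
                print(f"{path}: {status}")

    if __name__ == "__main__":
        # usage: script.py write FILE...   then later: script.py verify
        if sys.argv[1] == "write":
            write_manifest(sys.argv[2:])
        else:
            verify_manifest()

The catch is that this only catches corruption at verify time; a filesystem like ZFS checks the checksum on every read.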
A bad filesystem would have corrupt metadata. Plain old corrupt data is the fault of the storage media, which does have its own error correction. Clearly it wasn't good enough here, or the path from the drive back to the system couldn't report the I/O errors.
BTW he didn't lose any data, since he had backups. If he'd had a checksummed filesystem but no backups, he would still have lost data. Checksums, like RAID, aren't backups!