What I found most interesting about the review is that Apple chose not to implement file data checksumming, on the grounds that the underlying hardware is very 'safe' and already employs ECC anyway.
Which is silly, and fails to isolate problems when and where they happen. Pretty much every significant layer should have its own checksum, in many cases ideally an ECC of some form. Hardware has bugs and failures, and so does software. What is particularly evil is when the two collide in a way that causes silent, undetected corruption of important data for long periods of time.
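To make the layering point concrete, here's a minimal sketch (Python; the block size and CRC choice are arbitrary, not anything a real filesystem uses) of what per-block checksumming buys you: corruption gets caught at the layer that owns the data, instead of surfacing later as mystery breakage somewhere else.

    import zlib

    BLOCK_SIZE = 4096  # arbitrary block size, for illustration only

    def write_blocks(data):
        # Split data into blocks and record a CRC32 for each one.
        blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
        checksums = [zlib.crc32(b) for b in blocks]
        return blocks, checksums

    def read_blocks(blocks, checksums):
        # Verify every block against its stored checksum before returning data.
        for i, (block, expected) in enumerate(zip(blocks, checksums)):
            if zlib.crc32(block) != expected:
                raise IOError("silent corruption detected in block %d" % i)
        return b"".join(blocks)

The point isn't the specific checksum; it's that a failure here names the exact block and the exact layer, rather than letting bad data propagate upward.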
That's not the only reason, though. There are other factors going into that decision that make it entirely rational:
APFS isn't designed as a server file system. It's meant for laptops, desktops, and, most importantly (to Apple), mobile devices. Note that most of those devices are battery powered, which means "redundant" error checking by the FS is a meaningful waste of power.
That's not to say they won't add error-checking capability in the future, but it makes total sense to prioritize other things when this file system is mostly going to be used on battery-powered clients and basically never on servers.
Actually, the reason for it is that lower layers already do checksumming, and at that layer you generally don't get scrambled packets. You only lose packets, which happens when there's congestion.
Alternatively, just look at "netstat -s" on any machine on the Internet talking to a bunch of others. Here's the score for the main web host of the Internet Archive Wayback Machine:
3088864840 segments received
2401058 bad segments received.
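Those counters also undercut the "lower layers already checksum it" argument in another way: TCP's checksum is only 16 bits, so, under the rough assumption that corruption looks random, about 1 in 65,536 corrupted segments will pass it anyway. A back-of-envelope sketch using the numbers above (purely illustrative):

    # Numbers from the netstat output quoted above.
    segments_received = 3088864840
    bad_segments = 2401058

    detected_rate = bad_segments / segments_received
    print("detected-bad rate: %.4f%%" % (detected_rate * 100))  # ~0.08% of segments

    # TCP's checksum is 16 bits; as a crude model, a randomly corrupted
    # segment still passes it with probability ~1/65536.
    undetected_estimate = bad_segments / 65536.0
    print("segments likely corrupted but NOT caught: ~%d" % undetected_estimate)

So even with the lower layers doing their job, some corruption slips through, which is exactly the case an end-to-end checksum is for.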
One of the key innovations in ZFS is storing checksums in block pointers, something that cannot be done efficiently outside the file system. Storing checksums elsewhere is far more complex and expensive.
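Roughly, the trick is that the parent's block pointer carries the checksum of the child block it points to, so the tree validates itself top-down (a Merkle-tree structure). A simplified sketch of the idea, not ZFS's actual on-disk format:

    import hashlib
    from collections import namedtuple

    # Parent-held pointer: the checksum lives next to the address in the
    # parent block, not next to the data it describes.
    BlockPointer = namedtuple("BlockPointer", ["address", "checksum"])

    def write_block(storage, address, data):
        storage[address] = data
        # The caller (the parent) keeps this pointer, so a corrupted child
        # block cannot also corrupt its own checksum.
        return BlockPointer(address, hashlib.sha256(data).digest())

    def read_block(storage, ptr):
        data = storage[ptr.address]
        if hashlib.sha256(data).digest() != ptr.checksum:
            raise IOError("checksum mismatch at block %d" % ptr.address)
        return data

A layer outside the file system can't do this, because it doesn't own the pointers; it would have to keep a separate checksum database and keep it in sync with every write.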
It tells you that your file is corrupted. You can then restore from backups, re-download, or take some other corrective action, such as deleting the file, rebooting the machine, re-installing the operating system, or playing Quake 2 to test your RAM and graphics.
Never underestimate the value of a reason to play Quake 2.
The average user might have no redundancy, but they still ought to have a backup. Checksum failure tells them they need to restore.
At the very least, a checksum failure might tell them (or the tech they're consulting) that they have a data problem, rather than, say, an application compatibility problem.
"Why is my machine crashing?" "Well, somelib.so is reporting checksum failures" is a much better experience then "weird, this machine used to be great but now it crashes all the time"
Today you can verify backups on OS X with "tmutil verifychecksums", at least on 10.11. The UI to this could be improved, but user data checksums don't necessarily need to be a filesystem feature. On a single-disk device, the FS doesn't have enough information to do anything useful about corrupt files anyway.
> On a single-disk device, the FS doesn't have enough information to do anything useful about corrupt files anyway.
Some filesystems can be configured to keep two or more copies of certain filesystem/directory/etc. contents. Two copies is enough information to do something useful.
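For example, with two copies plus a checksum held elsewhere, a read can fall back to the surviving copy and rewrite the bad one. A hedged sketch of that repair path (not modeled on any particular filesystem's implementation):

    import hashlib

    def read_with_repair(copies, expected_checksum):
        # Return the first copy whose checksum matches, and overwrite any
        # bad copies with it. `copies` is a mutable list of byte strings.
        good = None
        for data in copies:
            if hashlib.sha256(data).digest() == expected_checksum:
                good = data
                break
        if good is None:
            raise IOError("all copies corrupted; restore from backup")
        for i in range(len(copies)):
            if copies[i] != good:
                copies[i] = good  # self-heal the damaged copy in place
        return good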
Well, Apple is moving in the direction of syncing everything with iCloud: iCloud Drive has been around for a while, and Sierra adds the ability to sync the Desktop and Documents folders, of course on top of long-existing things like photo sync. If the file was previously uploaded to iCloud, there is redundancy, and you definitely don't want to overwrite it with the corrupted version.
How big an issue this is in practice I don't know.