How are they handling this today? You can treat an existing HD as a key-value store where the key is the location on disk and the value is a sector of binary data. Conceptually there's no difference.
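A minimal sketch of that view (Python; assumes a 512-byte sector size and a raw device node or disk image you can open - the path is up to you):

```python
import os

SECTOR_SIZE = 512  # the "value": one sector of binary data

def read_sector(fd, lba):
    """'Get': the key is the logical block address, the value is the sector."""
    return os.pread(fd, SECTOR_SIZE, lba * SECTOR_SIZE)

def write_sector(fd, lba, data):
    """'Put': overwrite the sector stored under that key."""
    assert len(data) == SECTOR_SIZE
    os.pwrite(fd, data, lba * SECTOR_SIZE)
```

The interface is exactly get/put keyed by offset; everything a filesystem adds (names, directories, permissions) is layered on top of this.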
The answer is that if you need those capabilities to offer up a traditional file system, you do what you do today: you layer them on top.
But many systems don't need them at that layer, because they already re-implement reliability measures on top of the hard drives: we want systems that stay reliably available when a whole server fails, too.
E.g. consider something like Sheepdog: https://github.com/sheepdog/sheepdog
Sheepdog is a cluster block device solution with automatic rebalancing and snapshots. It implements this on top of normal filesystems by storing "objects" on any of a number of servers, and uses that abstraction to provide all the services. Currently Sheepdog requires the sheep daemon to run on a set of servers that can mount a filesystem on the disks each server is meant to use. With a system like the one proposed, you could possibly dispense with the filesystem and have the sheep daemons talk directly to a number of disks that are not directly attached.
For Sheepdog, RAID is not really recommended, as Sheepdog implements redundancy itself (you can specify the desired number of copies of each "block device"), and it also provides snapshots, copy-on-write, extensive caching, and support for incremental, snapshot-based backups of the entire cluster in one go.
So in other words, there are applications that can make very good use of this type of arrangement without any support for RAID etc. at the disk level. And for applications that can't, a key-value store can trivially emulate a block device - after all, Sheepdog emulates a block device on top of object storage on top of block devices...
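The emulation in that last sentence is simple in outline. A sketch, using a plain in-memory dict to stand in for the key-value store (Sheepdog uses replicated objects spread across servers instead):

```python
class KVBlockDevice:
    """Presents a block-device-style read/write interface on top of
    any key-value store mapping block number -> block contents."""

    def __init__(self, store, block_size=4096):
        self.store = store          # here: a dict; in practice: a networked KV store
        self.block_size = block_size

    def read(self, block_no):
        # Missing keys read as zeroes, like a sparse/thin-provisioned volume.
        return self.store.get(block_no, b'\x00' * self.block_size)

    def write(self, block_no, data):
        assert len(data) == self.block_size
        self.store[block_no] = data
```

Redundancy, snapshots, and rebalancing then live in the store underneath, not in this shim.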
You could also potentially reduce the amount of rebalancing needed in the case of failures, by having sheep daemons take over the disks of servers that die if the disks are still online and reachable.
The biggest challenge is going to be networking costs - as I mentioned elsewhere, SSDs are already hampered by the 6Gbps limit of SATA III, and 10GE switches are still ludicrously expensive.
(This is in reply to comments along the lines of "Backups? Snapshots? RAID? How are they handling this then?")