Do you mean because they offer poor support, or because there are usually large volumes of data involved, or both?
Cassandra, for instance, provides an easy snapshot API, and you can back up the snapshots using whatever infrastructure you'd normally use to back up TB of data. (If you don't have such an infrastructure, yeah, that's a problem, but not Cassandra's fault. :)
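For the curious, the whole round trip is only a couple of nodetool calls. A minimal sketch; the keyspace and tag names are made up, and flag spelling varies a little between Cassandra versions:

    # Take a named snapshot of one keyspace (hypothetical names):
    nodetool snapshot -t pre_upgrade my_keyspace

    # Snapshots are just hard links to the live SSTables, created under
    # each table's data directory:
    #   <data_dir>/my_keyspace/<table>/snapshots/pre_upgrade/

    # Copy that directory off-node with whatever you normally use
    # (rsync, tar + scp, ...), then reclaim the space:
    nodetool clearsnapshot -t pre_upgrade my_keyspace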
Edit: I should point out that since Cassandra already supports multi-datacenter replication, I'm having trouble picturing a scenario where you'd want to do anything with the snapshots besides leave them on the Cassandra nodes themselves (plan extra HDD capacity as necessary, depending on how long you want to keep them). But some such scenarios probably do exist.
Replication != backup. If you (or someone malicious) screw up your data through the API, the screw-ups are replicated as well, especially with Cassandra, where there is no built-in data versioning.
Backups are essential when you're doing major upgrades or data migrations/mangling that can fail.
OTOH, data export is not an inherent problem for distributed databases either. If enough people want the feature, it can be built without too much fuss.
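For the simple cases, cqlsh's COPY already gets you a portable dump. A sketch, not a recommendation for big tables (COPY is slow at scale); the keyspace, table, and paths are hypothetical, and the -e/--execute flag may not exist in older cqlsh builds:

    # Export one table to CSV (hypothetical names):
    cqlsh -e "COPY my_keyspace.users TO '/backups/users.csv' WITH HEADER = TRUE;"

    # ...and load it back later:
    cqlsh -e "COPY my_keyspace.users FROM '/backups/users.csv' WITH HEADER = TRUE;"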
Obviously. The context was using snapshots as backups -- you can keep them around indefinitely, space permitting, and if you're using Cassandra's multi-datacenter features they're automatically "remote" as well.
> especially with Cassandra, where there is no built-in data versioning
At the risk of belaboring the obvious, versioning != backup, either. :)