>> I don't know what the intended use case actually is,
>Generally speaking, if you've got a database system of some sort and want to write data to a file.
But database systems have been around for years without such an interface, and can't they basically saturate a storage subsystem?
You can always saturate a storage subsystem -- just add more clients (assuming you don't saturate the CPU, the CPU's memory bandwidth, or the network interface first -- any of which can happen if you put a high-end storage device on otherwise typical hardware). But what you get then is latency that is higher than the minimum possible.
For example, suppose you send a bunch of write operations to the disk and then send an fsync. If those writes complete one after another (roughly speaking, since there are many of them), the average write actually finishes about halfway through the batch, yet every caller has to wait until the fsync returns after the last one -- so on average each operation waits about twice as long as its own data took to reach the disk.
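As a concrete illustration, here is a minimal sketch of that pattern using plain POSIX calls (the flush_batch name, buffers, and offsets are made up for the example):

    /* Batched writes followed by one fsync: nothing is acknowledged as
     * durable until the fsync at the end returns, even though most of the
     * individual writes hit the disk well before that point. */
    #include <sys/types.h>
    #include <unistd.h>

    int flush_batch(int fd, const void *bufs[], const size_t lens[],
                    const off_t offs[], int n)
    {
        for (int i = 0; i < n; i++) {
            if (pwrite(fd, bufs[i], lens[i], offs[i]) != (ssize_t)lens[i])
                return -1;              /* short write or I/O error */
        }
        /* One barrier for the whole batch: the caller of write 0 waits
         * just as long as the caller of write n-1. */
        return fsync(fd);
    }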
Now suppose you've got the ability to do fine-grained fsyncs on particular write operations, efficiently. It would still be useful, and would still improve latency, if the disk or OS knew that getting block A on disk didn't matter to the process until block B was also on disk, and could take advantage of that fact. And it would be even more useful if the disk or OS knew that block B had to be written after block A, because then you could save a round trip, or save the CPU work of marking or checksumming blocks thoroughly enough that you can determine at startup whether they were completely and correctly written.
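To make the round-trip point concrete, here is a sketch of how "B must land after A" has to be expressed today with plain POSIX calls (write_ordered and its arguments are made-up names): the only portable barrier is a full fdatasync between the two writes, which is exactly the wait an ordering-aware interface would let you skip.

    /* Enforcing "block B is written only after block A is durable" with
     * today's interface: the fdatasync in the middle is the barrier, and
     * it costs a full round trip to the device before B can even be issued. */
    #include <sys/types.h>
    #include <unistd.h>

    int write_ordered(int fd, const void *a, size_t alen, off_t aoff,
                      const void *b, size_t blen, off_t boff)
    {
        if (pwrite(fd, a, alen, aoff) != (ssize_t)alen)
            return -1;
        if (fdatasync(fd) != 0)         /* wait for A before issuing B */
            return -1;
        if (pwrite(fd, b, blen, boff) != (ssize_t)blen)
            return -1;
        return fdatasync(fd);           /* and wait again for B */
    }

The alternative described above -- marking or checksumming blocks so that an incomplete B can be detected and discarded at startup -- avoids that middle wait, but spends CPU on every block instead.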