Does anyone actually do #27, "Over-provisioning is useful for wear leveling and performance"? It's been my experience mucking with the innards of flash devices that they already have ~10% more NAND in them than their labeled capacity exactly for this purpose. Seems like over-provisioning is bad advice unless you have a very special situation.


Overprovisioning definitely has a big impact on the performance of all but the most recent SSD controller architectures, but only once the disk is mostly full.

Check out any of AnandTech's benchmarks from the past year or so. They now include graphs showing the consistency of I/O completion times; reserving more than the default ~7.5% makes the GC pauses a lot less severe and hugely improves worst-case performance. Under sustained load, having more spare area often makes the difference between always being stuck at worst-case performance and always being near best-case.

For example, under a sustained random write workload a full Samsung 850 Pro will complete around 7-8k IOPS, but with 25% spare area it will hover around 40k IOPS. That's a very enticing space/speed tradeoff, especially if you've already decided that SSDs are to be preferred over hard drives for your workload.

The default amount of overprovisioning in most drives is chosen to be roughly enough to allow for reasonable performance and lifespan (affected by write amplification), and in MLC drives usually corresponds exactly to the discrepancy between a binary/memory gigabyte and a decimal/hard drive gigabyte, which simplifies marketing. Drives intended for high lifespan often have odd sizes due to their higher default overprovisioning.
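
To put rough numbers on that discrepancy, here's a back-of-the-envelope sketch (plain Python; the 256 GB capacity is just an example, not tied to any particular drive), which is where the ~7.5% default mentioned above comes from:

    # Default spare area implied by the GiB/GB discrepancy: a "256 GB"
    # MLC drive typically has 256 GiB of raw NAND but exposes only
    # 256 * 10^9 bytes; the difference is the factory spare area.
    GIB = 2**30   # binary gigabyte (how NAND is built)
    GB = 10**9    # decimal gigabyte (what the label advertises)

    raw = 256 * GIB     # physical NAND on the drive
    usable = 256 * GB   # capacity exposed to the user

    spare = raw - usable
    print(f"spare: {spare / GB:.1f} GB ({spare / usable:.1%} of usable)")
    # -> spare: 18.9 GB (7.4% of usable)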


but only once the disk is mostly full

...because it's basically changing what the definition of "full" is.


Sure, in a few cases; it's well known and a straightforward cost/performance tradeoff. Manufacturer-specified internal over-provisioning is mainly a matter of economics. Within the technical limits of their particular NAND and controller, they pick a number that will yield the desired performance and longevity for the drive's general target audience within the right price budget, but there's nothing wrong with tweaking that a bit if someone has different needs. More spare area can improve performance in some respects, in particular performance consistency and IOPS, and increase overall drive longevity, but that extra NAND is of course not available for user data. One of the general differences in "enterprise" drives (beyond features like power-loss caps) is just plain much higher factory over-provisioning.

It's not bad to know about; it's just another tradeoff in storage. For scenarios that can benefit from IOPS/consistency or have huge, random loads it may be a very simple way to get a nice bump, particularly out of a "consumer" drive. For simpler loads it's a total waste versus more available storage, or even a negative if it would result in data getting pushed off the SSD onto something slower. The value can also vary from drive to drive, so it should always be tested.

I agree with you though that the article should have mentioned that, like all tuning, there are no universals (or the manufacturer would have done it already), and that in general for modern SSDs the defaults are just fine unless you've got a specific reason otherwise (and can quantify the result). I suppose many programmers will be generating loads far higher than the typical consumer's, but even so I suspect the default will usually be the right choice.


The manufacturer may have its own reasons[1] for doing some over-provisioning; as a user, I intentionally leave some SSD space unpartitioned (see the sketch below) to avoid the terrible performance drop when the SSD gets full.

[1] Remember this microSD article? http://www.bunniestudios.com/blog/?p=918 I wouldn't be surprised to learn that manufacturers combine the controller's remapping with over-provisioning as a cheaper and more flexible alternative to fully testing the SSD.
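
FWIW, a minimal sketch of the arithmetic behind leaving space unpartitioned (the capacities and the 10% figure are hypothetical; this only acts as extra spare area if the reserved LBAs have never been written, e.g. on a fresh or secure-erased drive):

    # Effective spare area when part of an SSD is left unpartitioned.
    # Assumes the unpartitioned LBAs are untouched/trimmed, so the
    # controller is free to use them for garbage collection.
    GIB, GB = 2**30, 10**9

    raw = 256 * GIB                  # physical NAND (hypothetical drive)
    advertised = 256 * GB            # labeled capacity
    partitioned = advertised * 0.90  # leave 10% unpartitioned

    spare = raw - partitioned
    print(f"effective spare area: {spare / partitioned:.1%}")
    # -> effective spare area: 19.3% (vs ~7.4% with the whole drive in use)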


I used to overprovision hard drives years ago when doing video streaming, by not formatting the inner tracks, which have slower streaming rates. If I were going to put serious high write loads on consumer SSDs I would consider leaving 10% blank.


It's my understanding that most manufacturers recommend over-provisioning of 8% for consumer drives and 28% for enterprise drives.
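
Assuming those figures, a quick sketch of what they mean in absolute terms (the 1 TB capacity is arbitrary):

    # What 8% vs 28% over-provisioning costs on a hypothetical 1 TB drive.
    capacity_gb = 1000
    for label, op in [("consumer", 0.08), ("enterprise", 0.28)]:
        reserved = capacity_gb * op
        print(f"{label}: reserve {reserved:.0f} GB, "
              f"{capacity_gb - reserved:.0f} GB usable")
    # consumer: reserve 80 GB, 920 GB usable
    # enterprise: reserve 280 GB, 720 GB usable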



