Should You Care About Over-Provisioning On A SandForce-Based SSD?
Now that SandForce lets its partners disable over-provisioning (both Adata and Transcend choose to), you're probably wondering why an SSD vendor would or wouldn't. After all, the Iometer results you've already seen are barely affected. It'd be easy to conclude that setting aside capacity on a SandForce-based drive has no benefit. Might as well make that space available for user data, right?
Not necessarily. SSDs based on SandForce's hardware perform garbage collection in the foreground, as you're writing to them. On an over-provisioned drive, the reserved capacity gives the controller scratch space for shuffling data around.
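How much reserved space are we talking about? Here's a back-of-the-envelope sketch in Python. It assumes 128 GiB of raw NAND on board, a common SF-2281 configuration, not a spec-sheet figure for the specific drives reviewed here: exposing all 128 GB to the user leaves only the binary-versus-decimal gap as spare area, while exposing 120 GB roughly doubles it.

```python
# Rough arithmetic only: 128 GiB of raw NAND is an assumption for
# illustration, not a figure quoted from these drives' spec sheets.
raw_nand = 128 * 2**30                         # 137,438,953,472 bytes of flash

for user_bytes in (128 * 10**9, 120 * 10**9):  # OP disabled vs. factory OP
    spare = raw_nand - user_bytes
    print(f"{user_bytes // 10**9} GB exposed -> {spare / 10**9:.1f} GB spare "
          f"({spare / user_bytes:.1%} of user capacity)")
```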
In the screenshot above, we filled Intel's SSD 520 with incompressible data, then wrote 128 KB blocks back in a sequential access pattern. Performance starts out slow, but picks up as the drive leverages its over-provisioned capacity to achieve higher transfer rates.
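For the curious, the sketch below approximates that write pass in Python. It's only a rough stand-in for our Iometer configuration: it writes through the file system rather than to the raw device, and the file name and pass size are invented for the example.

```python
import os, time

# A rough sketch of the write pass described above, not our Iometer setup.
# TARGET is a made-up file name; a real test would hit the raw device after
# filling the whole drive with incompressible data first.
TARGET = "testfile.bin"
BLOCK = 128 * 1024                        # 128 KB sequential writes
TOTAL = 4 * 1024**3                       # 4 GB pass for this sketch
REPORT = 256 * 1024**2                    # print throughput every 256 MB

# Pre-generate distinct random blocks: random bytes don't compress, so
# SandForce's DuraWrite can't shrink them the way it can ordinary data.
bufs = [os.urandom(BLOCK) for _ in range(64)]

with open(TARGET, "wb", buffering=0) as f:
    written, t0 = 0, time.perf_counter()
    while written < TOTAL:
        f.write(bufs[(written // BLOCK) % len(bufs)])
        written += BLOCK
        if written % REPORT == 0:
            os.fsync(f.fileno())          # flush so we time the drive, not RAM
            mbps = written / (time.perf_counter() - t0) / 1e6
            print(f"{written // 1024**2:5d} MB written, avg {mbps:6.1f} MB/s")
```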
On the drive without over-provisioning, performance never picks up because there is no scratch space for shuffling data around. Once you've written to every memory cell, a second pass of writes can only proceed as fast as the controller frees up capacity.
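A toy model helps show why. The Python sketch below implements a simple greedy garbage collector, our own simplification rather than anything resembling SandForce's actual firmware, and reports write amplification (flash pages programmed per host page written) under random overwrites. The block count, pages per block, and spare fractions are all illustrative assumptions. Less spare area means each reclaimed block still holds more valid data, so every erase costs more relocation copies, and sustained write speed sinks accordingly.

```python
import random

def write_amp(spare_fraction, blocks=256, ppb=64, host_writes=100_000, seed=1):
    """Toy greedy-GC flash model: returns write amplification, i.e. flash
    pages programmed per host page written (1.0 would be ideal)."""
    rng = random.Random(seed)
    user_pages = int(blocks * ppb * (1 - spare_fraction))
    pages = [set() for _ in range(blocks)]   # block -> its valid logical pages
    home = [-1] * user_pages                 # logical page -> physical block
    erased = list(range(blocks))             # pool of erased blocks
    frontier, fill = erased.pop(), 0         # block currently being programmed
    flash_writes = 0

    def collect():
        # Greedy GC: erase the closed block holding the fewest valid pages,
        # relocating whatever is still valid. More spare area means emptier
        # victims, so each erase costs fewer copies -- this relocation work
        # is exactly what over-provisioned capacity absorbs.
        victim = min((b for b in range(blocks) if b != frontier),
                     key=lambda b: len(pages[b]))
        movers = list(pages[victim])
        pages[victim].clear()
        for lp in movers:
            home[lp] = -1
        erased.append(victim)                # erase: back into the free pool
        for lp in movers:
            write_page(lp)                   # relocation = extra flash writes

    def write_page(lp):
        nonlocal frontier, fill, flash_writes
        if home[lp] >= 0:                    # overwrite invalidates old copy
            pages[home[lp]].discard(lp)
        while fill == ppb:                   # frontier full: grab a new block
            if erased:
                frontier, fill = erased.pop(), 0
            else:
                collect()
        pages[frontier].add(lp)
        home[lp] = frontier
        fill += 1
        flash_writes += 1

    for lp in range(user_pages):             # precondition: fill drive once
        write_page(lp)
    flash_writes = 0                         # then measure steady state only
    for _ in range(host_writes):
        write_page(rng.randrange(user_pages))
    return flash_writes / host_writes

# Illustrative spare fractions: ~7% (just the GiB/GB gap on a "no-OP" drive)
# versus ~15% (factory over-provisioning stacked on top of that gap).
for spare in (0.07, 0.15):
    print(f"{spare:.0%} spare -> write amplification {write_amp(spare):.2f}")
```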
This chart shows Transcend's SSD720, though the same principle applies to Adata's XPG SX900. The SSD320 and SP900 would exhibit similar behavior, but because they employ slower asynchronous NAND, their curves would sit roughly 40 MB/s lower.
The other thing to keep in mind is that we're hitting these drives with punishing workloads. In the real world, you shouldn't see anything like this unless your SSD is nearly full.