The Dirty Secrets Of SSD Benchmarking
Where is Steady State?
There are a couple of dirty secrets that most Tom's Hardware readers know, but that few mainstream buyers have ever had occasion to hear. The first has to do with performance fresh out of the box versus steady-state performance. SSD manufacturers prefer that we benchmark drives the way they behave as soon as you buy them, because solid-state drives slow down once you start using them. Give an SSD enough time, though, and it reaches a steady-state level. At that point, its benchmark results reflect the more consistent behavior you'd see in long-term use. In general, reads are a little faster, writes are slower, and erase cycles take place as slowly as you'll ever see from the drive.
There's been a movement within the flash storage community to standardize performance analysis around steady-state results, and plenty of conference presenters kept stressing this point. However, they left out one important detail: there isn't one single steady state. Instead, you might see 4 KB random writes stabilize at one point and 128 KB sequential writes stabilize somewhere else. You can get even more granular and figure out where 128 KB sequential writes hit their steady-state level in bursty versus sustained transfers.
The main takeaway was that there isn't a single moment at which we can declare, "this drive just reached its steady-state performance level; we're ready to test it." You could benchmark a single SSD for an entire month and still not have its complete performance profile worked out. What you write, how fast you write it, how much you write, the I/O workload from the previous days and weeks, and so on all change how the SSD performs.
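To make that concrete, here's a minimal sketch of how you might decide that one particular workload has settled down. This isn't our lab methodology or any vendor's; the function name, window size, and thresholds (loosely inspired by SNIA-style steady-state checks) are placeholders, and the sample IOPS log is made up for illustration. The point is simply that the check has to run per workload; a drive that has "settled" for 4 KB random writes may still be drifting for 128 KB sequential writes.

```python
"""A minimal, hypothetical steady-state check for one workload's log."""

def is_steady(samples, window=5, range_tol=0.20, slope_tol=0.10):
    """Return True if the last `window` samples look stable.

    range_tol: how far any single round may stray from the window average.
    slope_tol: how much the trend line may drift across the window,
               relative to the window average.
    """
    if len(samples) < window:
        return False
    w = samples[-window:]
    avg = sum(w) / window

    # 1) No single round strays too far from the window average.
    if max(abs(y - avg) for y in w) > range_tol * avg:
        return False

    # 2) The least-squares trend across the window is nearly flat.
    x_mean = (window - 1) / 2
    slope = sum((x - x_mean) * (y - avg) for x, y in enumerate(w)) / \
            sum((x - x_mean) ** 2 for x in range(window))
    return abs(slope * (window - 1)) <= slope_tol * avg


# Hypothetical 4 KB random-write IOPS, one value per benchmark round:
log = [78000, 52000, 31000, 22000, 18500, 18200, 18400, 18100, 18300]
first_steady_round = next(
    (i + 1 for i in range(len(log)) if is_steady(log[:i + 1])), None)
print("steady after round:", first_steady_round)  # prints 9 for this log
```

Run the same check against a 128 KB sequential-write log from the same drive and you'll typically get a different answer, which is exactly why "steady state" isn't one number.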
We prefer to focus on the overall picture, including the main points that help you make an informed purchase. It’s not that other benchmarks don’t matter, but we’re trying to present SSD performance in manageable bites that don’t overwhelm. That’s why we focus on a consumer-oriented steady state reached within a reasonable period, which you can read more about on page four of Second-Gen SandForce: Seven 120 GB SSDs Rounded Up.
Consumer vs. Enterprise Benchmarking
Almost every SSD's specification sheet cites performance measured when the drive is brand new, which we've already established is only representative of what you get for a brief period. However, manufacturers blur the performance picture even more by generating all of their data at high queue depths. We don't really take issue with that in the enterprise space, because SSDs may very well encounter high queue depths when they're getting hammered by online transaction processing or some other, similar workload. Solid-state drives on the desktop simply don't encounter those same queue depths, though. As a result, most of the performance guidance you see is overstated.
We can't blame the SSD industry for representing performance this way. After all, vendors are responsible for shining the best possible light on their products, and high queue depths are, in fact, the best way to saturate the multi-channel architectures employed by SSD controllers, yielding the highest numbers. What's more important is that end users realize they won't see those same aggressive figures on their own desktops.
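To put numbers on that gap, here's a rough, hypothetical sketch of the kind of comparison we're describing: 4 KB random reads measured at several queue depths. It's not the test suite behind this article; it assumes a Linux machine with fio installed, a throwaway scratch file, and JSON field names that match recent fio releases, so adjust as needed and never point it at a disk holding data you care about.

```python
"""Rough illustration: random-read IOPS at desktop vs. spec-sheet queue depths."""
import json
import subprocess

TEST_FILE = "/tmp/ssd_qd_test.bin"   # hypothetical scratch file on the SSD
RUNTIME_S = 60                       # seconds per queue depth

def random_read_iops(iodepth):
    """Run a 4 KB random-read fio job at the given queue depth, return IOPS."""
    cmd = [
        "fio",
        "--name=qd%d" % iodepth,
        "--filename=%s" % TEST_FILE,
        "--size=1G",
        "--rw=randread",
        "--bs=4k",
        "--iodepth=%d" % iodepth,
        "--ioengine=libaio",
        "--direct=1",                # bypass the page cache
        "--time_based",
        "--runtime=%d" % RUNTIME_S,
        "--output-format=json",
    ]
    result = subprocess.run(cmd, capture_output=True, text=True, check=True)
    report = json.loads(result.stdout)
    return report["jobs"][0]["read"]["iops"]

if __name__ == "__main__":
    # Spec sheets tend to quote something like QD32; desktops mostly live at QD1-2.
    for qd in (1, 2, 4, 32):
        print("QD%-2d -> %.0f IOPS" % (qd, random_read_iops(qd)))
```

On most modern SSDs, the QD32 result dwarfs the QD1 figure, and QD1 or QD2 is far closer to what a desktop actually generates.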
This is something we've tried to emphasize lately, giving you a more realistic perspective on everyday performance. And we can promise you that more than a couple of SSD vendors are unhappy with us for running our tests at queue depths derived from real-world traces. But we answer to you, not to them. As a result, the differences between SSDs are smaller in our results than you might otherwise expect from comparing spec sheets. The benefit is that you don't actually have to buy the latest or most expensive SSD to enjoy a substantial performance boost.