We've seen SSD vendors spend tons of R&D dollars over the past few years to improve every quantifiable aspect of solid-state storage. Sequential throughput, random 4 KB performance, pricing, and overall quality have all improved dramatically. But there's one aspect of SSDs that still looks a lot like the Wild West: performance consistency.
As we've noted in previous drive reviews, averages are great on spec sheets, but they don't always tell the whole story (like frame rates in graphics card evaluations). When it comes to dealing with time-critical, deterministic systems like enterprise video, it's more important to design for worst-case scenarios. Averages just aren't good enough.
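To see why averages can mislead, consider a toy illustration (the numbers below are fabricated, not measurements from any drive): two latency traces with identical averages can have wildly different worst cases.

```python
# Two hypothetical drives, 100 I/Os each. "steady" always responds in 1.0 ms;
# "spiky" is usually faster but suffers one long stall (think garbage collection).
steady = [1.0] * 100
spiky = [0.5] * 99 + [50.5]

for name, trace in (("steady", steady), ("spiky", spiky)):
    avg = sum(trace) / len(trace)      # both traces average exactly 1.0 ms
    worst = max(trace)                 # worst cases differ by a factor of 50
    print(f"{name}: average {avg:.2f} ms, worst case {worst:.1f} ms")
```

Both traces average 1.0 ms, so a spec-sheet average can't distinguish them; the deterministic system only cares about the 50.5 ms stall.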
Nearly every SATA-based drive we've tested, even the ones from Intel, has exhibited consistency issues at some point. Normally, hiccups are caused by the controller firmware. In some cases (like garbage collection), you're looking at the drive's inherent behavior. In other cases, poor implementation of certain algorithms is to blame. Fortunately, Intel put a special emphasis on the consistency of its SSD DC S3700, completely redesigning the firmware to prioritize even performance over hitting peak throughput numbers. The company even adds consistency and QoS specifications and definitions to its datasheets.
Let's take a look at the performance consistency spec, according to Intel.
| Performance Consistency Specification | 100 GB | 200/400/800 GB |
|---------------------------------------|--------|----------------|
| Random 4/8 KB Read                    | 90%    | 90%            |
| Random 4/8 KB Write                   | 85%    | 90%            |
And here's the company's definition of performance consistency from its product specification:
"Performance consistency measured using Iometer based on Random 4 KB QD=32 workload, measured as the (IOPS in the 99.9th percentile slowest 1-second interval)/(average IOPS during the test). Measurements are performed on a full Logical Block Address (LBA) span of the drive once the workload has reached steady state but including all background activities required for normal operation and data reliability"
First, we have to credit Intel for going so far as to put out a spec that even attempts to quantify consistency. The company doesn't try to cherry-pick an easy test for its specification, either. It's using 4 KB random reads and writes across the entire LBA, with all background activities active during the measurement. Even with those parameters, the SSD DC S3700 is able to achieve 90% consistency across all capacities, other than random writes on the 100 GB model.
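Intel's ratio is easy to sketch in a few lines of Python. Assuming we've collected per-second IOPS samples at steady state (the sample values below are made up for illustration), the metric is the IOPS of the 99.9th-percentile slowest one-second interval divided by the average IOPS:

```python
def consistency(iops_samples):
    """Intel's metric: IOPS of the 99.9th-percentile slowest 1-second
    interval, divided by the average IOPS over the whole test."""
    ordered = sorted(iops_samples)        # slowest intervals first
    idx = int(len(ordered) * 0.001)       # 99.9% of intervals are faster than this one
    return ordered[idx] / (sum(iops_samples) / len(iops_samples))

# Fabricated example: 990 one-second intervals near 35,000 IOPS,
# plus 10 slower intervals at 30,000 IOPS.
samples = [35000] * 990 + [30000] * 10
print(f"consistency = {consistency(samples):.2%}")
```

A perfectly flat trace would score 100%; the deeper and more frequent the slow intervals, the further the ratio drops below the 85–90% figures Intel specifies.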
The other new specification is Quality of Service. While performance consistency takes a one-second average, QoS shows us the maximum latency for a given percentage of commands.
| QoS Specification  | QD=1: 100 GB | QD=1: 200/400/800 GB | QD=32: 100 GB | QD=32: 200/400/800 GB |
|--------------------|--------------|----------------------|---------------|-----------------------|
| Reads (99.9%)      | 0.5 ms       | 0.5 ms               | 1 ms          | 1 ms                  |
| Writes (99.9%)     | 0.5 ms       | 0.5 ms               | 15 ms         | 10 ms                 |
| Reads (99.9999%)   | 10 ms        | 5 ms                 | 10 ms         | 5 ms                  |
| Writes (99.9999%)  | 10 ms        | 5 ms                 | 20 ms         | 20 ms                 |
This specification is derived using 4 KB transfer sizes in Iometer, measuring the maximum time it takes for 99.9% or 99.9999% of commands to travel round-trip from host to drive and back. Once again, this is a big improvement over the average or typical latencies that most vendors specify.
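Computing a QoS figure like this from raw data is straightforward. Here's a hedged sketch, assuming we have a list of per-command round-trip latencies (the values below are invented for illustration): sort the latencies and read off the bound that the requested fraction of commands stays under.

```python
import math

def qos_latency(latencies_ms, fraction):
    """Latency bound met by `fraction` of commands (e.g. 0.999 or 0.999999)."""
    ordered = sorted(latencies_ms)
    idx = min(len(ordered) - 1, math.ceil(fraction * len(ordered)) - 1)
    return ordered[idx]

# 10,000 fake commands: almost all complete in 0.2 ms, a few take longer,
# and one outlier stalls for 12 ms.
lat = [0.2] * 9990 + [0.4] * 9 + [12.0]
print("99.9% of commands complete within", qos_latency(lat, 0.999), "ms")
```

Note how one outlier in 10,000 commands never shows up at the 99.9% level but would dominate a 99.9999% spec, which is exactly why Intel quotes both percentiles.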
There is one drawback to this type of devotion to consistency: maximum performance takes a hit. You won't see the extreme high-end numbers proffered by the quickest desktop SSDs. Intel is betting that the trade-off for consistency is worthwhile in most enterprise environments, though.