Results: Tom's Storage Bench v1.0
Storage Bench v1.0 (Background Info)
Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. Replaying this sequence to capture performance yields numbers that aren't intuitive at first glance. Most idle time gets expunged, leaving only the time each benchmarked drive is actually busy working on host commands. So, by dividing the amount of data exchanged during the trace by that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
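For illustration only, the calculation boils down to bytes transferred divided by cumulative busy time. The sketch below is not our actual replay harness; the per-command record format is a hypothetical stand-in used to show the ratio.

```python
# Minimal sketch of the average-data-rate metric described above.
# The (bytes_transferred, busy_seconds) record format is hypothetical;
# it only illustrates the arithmetic, not the real trace-replay tool.

def average_data_rate_mbps(commands):
    """commands: iterable of (bytes_transferred, busy_seconds) tuples."""
    total_bytes = sum(b for b, _ in commands)
    total_busy = sum(t for _, t in commands)  # idle time already expunged
    return (total_bytes / 1_000_000) / total_busy  # MB/s

# Example: 142 GB exchanged over 1,200 seconds of cumulative busy time
print(average_data_rate_mbps([(142_000_000_000, 1200)]))  # ~118 MB/s
```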
It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is replayed on a drive without a file system, TRIM wouldn't work even if it were sent during the replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, and a solid companion to synthetic testing like Iometer.
Incompressible Data and Storage Bench v1.0
Also worth noting: our trace testing pushes incompressible data through the system's buffers to the drive being benchmarked. So, when the trace replay plays back write activity, it's writing largely incompressible data. Running our Storage Bench on a SandForce-based SSD and monitoring its SMART attributes gives us a bit more insight.
| Mushkin Chronos Deluxe 120 GB SMART Attribute | Raw Value Increase |
|---|---|
| #242 Host Reads (in GB) | 84 GB |
| #241 Host Writes (in GB) | 142 GB |
| #233 Compressed NAND Writes (in GB) | 149 GB |
Host reads are greatly outstripped by host writes, to be sure. That's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect the amount of information written to flash to be less than the host writes (unless the data is mostly incompressible, of course). Instead, for every 1 GB the host asks to write, Mushkin's drive ends up committing 1.05 GB to flash.
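That 1.05 figure falls straight out of the raw-value increases in the table above; here's the arithmetic as a quick sanity check.

```python
# Deriving the flash-write multiplier from the SMART deltas in the table.
host_writes_gb = 142   # attribute #241 increase during the trace
nand_writes_gb = 149   # attribute #233 increase during the trace

write_multiplier = nand_writes_gb / host_writes_gb
print(f"{write_multiplier:.2f} GB written to NAND per 1 GB of host writes")
# -> 1.05, i.e. effectively no benefit from SandForce's compression here
```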
If our trace replay were just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND as a fraction of host writes. Using incompressible data instead puts the tested drives on a more equal footing, regardless of each controller's ability to compress data on the fly.
Average Data Rate
The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity. Speaking of which, it's not a good idea to take a trace recorded on a 240 GB disk and wrap it around a much smaller one, say, a 40 GB drive. Using a trace from a smaller drive on a larger SSD is no problem, but going from larger to significantly smaller leads to radically different trace timing, and thus, different results.
The M550s show up above the M500s, but still appear mid-pack. Most of the drives ahead of Crucial's new SSD family probably need asterisks next to their model names, though. OCZ, SanDisk, and Samsung all employ technologies to boost performance with the equivalent of emulated SLC flash. Crucial, however, does not.
It's also interesting that only one drive ranked higher than the M550s uses IMFT-manufactured flash. That SSD owes its speed to OCZ's unique firmware. Otherwise, Toggle-mode DDR is the more prevalent performance-oriented interface.