- Extreme II, The Sequel From SanDisk
- A Guided Tour Of SanDisk's Extreme II
- Test Setup And Benchmarks
- Results: Sequential Performance
- Results: Random Performance
- Results: Tom's Storage Bench
- Results: PCMark Vantage And PCMark 7
- Results: Power Consumption
- Not Extreme To The Second Power, But Close Enough
Results: Tom's Storage Bench
Storage Bench v1.0 (Background Info)
Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. Replaying that sequence to capture performance produces a bunch of numbers that aren't intuitive at first glance. Most idle time gets expunged, leaving only the time that each benchmarked drive was actually busy working on host commands. So, by dividing the amount of data exchanged during the trace by that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
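The calculation boils down to a one-line formula. Here's a minimal sketch of it in Python; the function name and the 226 GB / 1,100 s figures are hypothetical, chosen only to illustrate the arithmetic:

```python
# Hypothetical sketch of the average-data-rate metric described above.
# The function name and example numbers are illustrative, not the
# article's actual tooling or results.

def average_data_rate(total_bytes_transferred, busy_seconds):
    """MB/s over the time the drive was actually servicing host commands."""
    return (total_bytes_transferred / 1e6) / busy_seconds

# e.g. 226 GB of combined reads/writes serviced in 1,100 s of busy time
rate = average_data_rate(226e9, 1100)
print(f"{rate:.1f} MB/s")  # ~205.5 MB/s
```

Because idle time is stripped out first, a drive that finishes the same work in less busy time scores a higher rate, even if both drives saw identical wall-clock test durations.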
It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is replayed on a drive without a file system, TRIM wouldn't work even if it were sent during the replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, making it a great companion to synthetic tools like Iometer.
Incompressible Data and Storage Bench v1.0
Also worth noting is the fact that our trace testing pushes incompressible data through the system's buffers to the drive getting benchmarked. So, when the trace replay plays back write activity, it's writing largely incompressible data. If we run our storage bench on a SandForce-based SSD, we can monitor the SMART attributes for a bit more insight.
| Mushkin Chronos Deluxe 120 GB | RAW Value Increase |
|---|---|
| #242 Host Reads (in GB) | 84 GB |
| #241 Host Writes (in GB) | 142 GB |
| #233 Compressed NAND Writes (in GB) | 149 GB |
Host reads are greatly outstripped by host writes to be sure. That's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect that the amount of information written to flash would be less than the host writes (unless the data is mostly incompressible, of course). For every 1 GB the host asked to be written, Mushkin's drive is forced to write 1.05 GB.
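That 1.05 figure falls straight out of the SMART deltas in the table above. A quick sketch, using the attribute values from the trace run:

```python
# Deriving the flash-write ratio from the SMART attribute deltas above.
host_writes_gb = 142   # attribute #241, host writes during the trace
nand_writes_gb = 149   # attribute #233, compressed NAND writes

# NAND bytes actually written per byte the host asked to write.
# A value above 1.0 means the data resisted SandForce's compression.
write_ratio = nand_writes_gb / host_writes_gb
print(f"NAND bytes written per host byte: {write_ratio:.2f}")  # 1.05
```

A compressible workload would drive this ratio well below 1.0; seeing it land slightly above 1.0 confirms the trace's write payload is effectively incompressible.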
If our trace replay were just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND as a fraction of host writes. Using incompressible data puts the tested drives on a more equal footing, regardless of a controller's ability to compress data on the fly.
Average Data Rate
The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity. Further, the average data rate is based on total busy time: divide the amount of data read and written by the busy time, and you have an MB/s figure. Busy time is simply the time during which the drive was performing an operation.
Most of the time, host I/O activity is a constant, low-level background drone, punctuated by spikes of more demanding I/O at higher queue depths. The average data rate is heavily weighted in favor of light I/O activity, with only a small portion reflecting higher demand.
SanDisk's Marvell-powered drives show up near the top of our chart, though they fall short of first place.
The 120 GB version lands in fifth place, but that's a super-impressive showing for a modestly-sized SSD. It holds a 70 MB/s advantage over the 120 GB Intel SSD 525.
Service Times and Standard Deviation
There is a wealth of information we can collect with Tom's Storage Bench above and beyond the average data rate. Mean (average) service times show what responsiveness is like on an average I/O during the trace. It would be difficult to plot the 10 million I/Os that make up our test, so looking at the average time to service an I/O makes more sense. We can also plot the standard deviation against mean service time. That way, drives with quicker and more consistent service plot toward the origin (lower numbers are better here).
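Computing those two statistics is straightforward. A minimal sketch with Python's standard `statistics` module; the millisecond values below are made up purely for illustration:

```python
# Sketch: mean service time and its standard deviation for a set of I/Os.
# The service times below are hypothetical values in milliseconds.
import statistics

service_times_ms = [0.4, 0.6, 0.5, 3.2, 0.5, 0.4, 2.8, 0.5]

mean_ms = statistics.mean(service_times_ms)    # average responsiveness
stdev_ms = statistics.stdev(service_times_ms)  # consistency of service

print(f"mean: {mean_ms:.2f} ms, stdev: {stdev_ms:.2f} ms")
```

Plotting (mean, stdev) pairs per drive puts the quickest and most consistent drives nearest the origin; a drive with a low mean but a few long outliers drifts up the standard-deviation axis.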
More important, these service time metrics are heavily weighted in favor of intense drive activity, where higher queue depths are observed. Busy time is simply the time a tested disk was performing any host-initiated activity. Consider a period of one second during which five I/O operations are simultaneously executed. If each operation took one second, five seconds of service time would accrue during that period, while only one second of busy time is incurred.
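The five-I/Os-in-one-second example above can be sketched directly: service time is the sum of each operation's duration, while busy time is the union of the intervals during which anything was in flight. This is an illustrative sketch, not the benchmark's actual code:

```python
# Illustrating service time vs. busy time for overlapping I/Os.
# Intervals are (start, end) tuples in seconds; values mirror the
# five-simultaneous-one-second-I/Os example in the text.

ios = [(0.0, 1.0)] * 5  # five I/Os issued together, each taking 1 s

# Service time accrues per operation, even when they overlap.
service_time = sum(end - start for start, end in ios)

def busy_time(intervals):
    """Total length of the union of intervals (drive busy whenever
    at least one I/O is in flight)."""
    total, cur_start, cur_end = 0.0, None, None
    for start, end in sorted(intervals):
        if cur_end is None or start > cur_end:
            if cur_end is not None:
                total += cur_end - cur_start
            cur_start, cur_end = start, end
        else:
            cur_end = max(cur_end, end)
    if cur_end is not None:
        total += cur_end - cur_start
    return total

print(service_time)    # 5.0 s of service time
print(busy_time(ios))  # 1.0 s of busy time
```

This is why service time weights demanding, high-queue-depth bursts so much more heavily than the busy-time-based average data rate does.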
Service time is arguably a more important metric, since periods of rapid activity are more difficult for slower SSDs to accommodate.
The above screen shot shows the cumulative I/O of our trace. Writes are consistent, picking up at a slow rate during this time slice. Reads spike quickly over a short period of time. That initial spike, in red, is a demanding period during which large amounts of data are transferred rapidly.
The SanDisk SSDs aren't quite the fastest, but they're not far behind. OCZ's Vertex 450 and Vector serve up I/O more quickly, while the two larger Extreme IIs show up in third and fourth place. The 120 GB variant is nestled between Seagate's 600 and the SSD 335s.