Storage Bench v1.0 (Background Info)
Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. Replaying that sequence and measuring performance yields numbers that aren't intuitive at first glance. Most idle time is expunged, leaving only the time each benchmarked drive was actually busy servicing host commands. So, by dividing the amount of data exchanged during the trace by that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
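The math behind that metric is simple. Here's a minimal sketch; the 226 GB figure matches the host reads plus host writes reported later in this piece, while the 20-minute busy time is an assumed value purely for illustration:

```python
# Sketch of the Storage Bench metric: total bytes transferred during the
# trace divided by the drive's busy time (idle time already removed).
def average_data_rate_mbps(bytes_transferred: float, busy_seconds: float) -> float:
    """Average data rate in MB/s over the busy portion of a trace."""
    return (bytes_transferred / 1_000_000) / busy_seconds

# Assumed example: 226 GB moved (84 GB reads + 142 GB writes) in
# 20 minutes of cumulative busy time.
rate = average_data_rate_mbps(226e9, 20 * 60)
print(f"{rate:.0f} MB/s")  # prints "188 MB/s" for these assumed inputs
```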
It's not quite a perfect system. The original trace captures TRIM commands in transit, but since the trace is replayed on a drive without a file system, TRIM wouldn't work even if it were sent during replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of real-world storage activity, making it a solid companion to synthetic tools like Iometer.
Incompressible Data and Storage Bench v1.0
Also worth noting: our trace testing pushes incompressible data through the system's buffers to the drive being benchmarked. So, when the trace replays write activity, it's writing largely incompressible data. Running our Storage Bench on a SandForce-based SSD and monitoring its SMART attributes gives us a bit more insight.
| Mushkin Chronos Deluxe 120 GB SMART Attributes | RAW Value Increase |
|---|---|
| #242 Host Reads (in GB) | 84 GB |
| #241 Host Writes (in GB) | 142 GB |
| #233 Compressed NAND Writes (in GB) | 149 GB |
Host reads are greatly outstripped by host writes, to be sure; that's baked into the trace. But with SandForce's inline deduplication/compression, you'd expect the amount of data written to flash to be less than the host writes (unless the data is mostly incompressible, of course). Instead, for every 1 GB the host asks to be written, Mushkin's drive ends up writing 1.05 GB to NAND.
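That 1.05 figure falls straight out of the SMART deltas in the table above; plain arithmetic, shown here just to make the relationship explicit:

```python
# NAND-to-host write ratio from the SMART attribute deltas above.
host_writes_gb = 142  # attribute #241 (Host Writes) increase
nand_writes_gb = 149  # attribute #233 (Compressed NAND Writes) increase

ratio = nand_writes_gb / host_writes_gb
print(f"{ratio:.2f} GB written to NAND per host GB")  # prints "1.05 ..."
```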
If our trace replay were just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND amount to a fraction of host writes. Using incompressible data instead puts the tested drives on more equal footing, regardless of the controller's ability to compress data on the fly.
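That compressibility gap is easy to demonstrate. Here's a small sketch using Python's zlib (DEFLATE) as a stand-in for a controller's compression engine; the exact byte counts will vary, but the orders of magnitude won't:

```python
import os
import zlib

# A megabyte of zeros compresses to almost nothing; a megabyte of random
# bytes barely compresses at all, which is why incompressible test data
# keeps compression-capable controllers like SandForce's honest.
zeros = bytes(1_000_000)
random_data = os.urandom(1_000_000)

print(len(zlib.compress(zeros)))        # on the order of 1 KB
print(len(zlib.compress(random_data)))  # roughly 1 MB, i.e. barely smaller
```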
Average Data Rate
The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity.

The 840 EVOs crush our average data rate chart, which represents read and write performance combined. Despite the perception that this is a value-oriented product, we've already shown the 840 EVO's read speeds to be as good as any other SSD's, while Samsung's TurboWrite feature augments write performance. The 1 TB and 500 GB 840 EVOs take second and fourth place, while the other two capacities turn in respectable performance as well.
While the 1 TB drive coming down to ~65 cents/GB is nice, it would be good to see the 120 GB drives get near that too, especially since this is meant to be the value king.
I got them on a sale on Newegg for around $500 for both of them.
A 1TB would be cool if I find it on sale....
or maybe I should try out writing a letter to someone fat in some weird red costume...
Samsung: I need this drive with two SATA connectors, making it possible to create a virtual RAID and squeeze out the drive's full performance.
It's clear that newer drives are bottlenecked by even the fastest SATA links, so outside of PCIe drives, virtual RAIDs are necessary.
Regards,
C. Ryan