SSD Deathmatch: Crucial's M500 Vs. Samsung's 840 EVO

Results: Tom's Storage Bench v1.0

Storage Bench v1.0 (Background Info)

Our Storage Bench incorporates all of the I/O from a trace recorded over two weeks. Replaying this sequence to capture performance gives us a bunch of numbers that aren't intuitive at first glance. Most idle time gets expunged, leaving only the time each benchmarked drive was actually busy working on host commands. So, by taking the ratio of the amount of data exchanged during the trace to that busy time, we arrive at an average data rate (in MB/s) we can use to compare drives.
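To make that math concrete, here's a minimal Python sketch of the calculation; the byte count and busy time below are hypothetical placeholders, not measured results from any drive in this round-up.

```python
# Minimal sketch of the average-data-rate math described above.
# Both input values are hypothetical, not measured results.

def average_data_rate(bytes_transferred: int, busy_time_s: float) -> float:
    """Data moved during the trace divided by the time the drive was
    actually busy servicing host commands, in MB/s."""
    return (bytes_transferred / 1_000_000) / busy_time_s

# Example: 226 GB of trace I/O against 1,500 seconds of accumulated
# busy time works out to roughly 150.7 MB/s.
print(f"{average_data_rate(226_000_000_000, 1_500):.1f} MB/s")
```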

It's not quite a perfect system. The original trace captures the TRIM command in transit, but since the trace is replayed on a drive without a file system, TRIM wouldn't work even if it were sent during the replay (which, sadly, it isn't). Still, trace testing is a great way to capture periods of actual storage activity, and it makes a natural companion to synthetic testing like Iometer.

Incompressible Data and Storage Bench v1.0

Also worth noting: our trace testing pushes incompressible data through the system's buffers to the drive being benchmarked. So, when the trace replays write activity, it's writing largely incompressible data. If we run our Storage Bench on a SandForce-based SSD, we can monitor the SMART attributes for a bit more insight.
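For a rough illustration of why the data pattern matters, here's a short Python sketch that uses zlib as a crude stand-in for SandForce's compression engine (the controller's actual algorithm is proprietary, so treat this as an analogy only):

```python
import os
import zlib

# zlib stands in here for a controller's on-the-fly compression;
# the 1 MiB buffer sizes are arbitrary illustration values.
compressible = bytes(1_048_576)          # 1 MiB of zeros
incompressible = os.urandom(1_048_576)   # 1 MiB of random bytes

for label, buf in (("zeros", compressible), ("random", incompressible)):
    ratio = len(zlib.compress(buf)) / len(buf)
    print(f"{label}: compresses to {ratio:.1%} of original size")

# Zeros shrink to a tiny fraction of their size; random bytes barely
# shrink at all (they can even grow slightly), which is what the SMART
# numbers below reflect.
```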

Mushkin Chronos Deluxe 120 GB SMART Attribute       RAW Value Increase
#242 Host Reads (in GB)                             84 GB
#241 Host Writes (in GB)                            142 GB
#233 Compressed NAND Writes (in GB)                 149 GB

Host writes greatly outstrip host reads, to be sure; that's all baked into the trace. But with SandForce's inline deduplication/compression, you'd expect the amount of information written to flash to be less than the host writes (unless the data is mostly incompressible, of course). Here, the opposite happens: for every 1 GB the host asked to be written, Mushkin's drive was forced to write 1.05 GB to flash.
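The arithmetic behind that figure is just the ratio of the two write counters from the table above; a quick sketch:

```python
# Write amplification from the SMART attributes above:
# compressed NAND writes divided by host writes.
host_writes_gb = 142   # attribute #241, Host Writes
nand_writes_gb = 149   # attribute #233, Compressed NAND Writes

print(f"Write amplification: {nand_writes_gb / host_writes_gb:.2f}")  # ~1.05
```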

If our trace replay were just writing easy-to-compress zeros out of the buffer, we'd see writes to NAND come in at a fraction of host writes. Pushing incompressible data instead puts the tested drives on a more equal footing, regardless of each controller's ability to compress data on the fly.

Average Data Rate

The Storage Bench trace generates more than 140 GB worth of writes during testing. Obviously, this tends to penalize drives smaller than 180 GB and reward those with more than 256 GB of capacity.

The M500s handle their business in this metric, but hardly turn in spectacular performances. The 120 GB M500 gets dinged right out of the gate because of its capacity, though not as severely as, say, the 30 GB Intel SSD 525. It's even slower than the three-bit-per-cell-based Samsung 840 120 GB.

All three larger M500s mix it up in the middle of the pack. For instance, the 240 GB drive reports back that it's 2.55 MB/s slower than the previous-gen m4. The 960 GB model is next-highest, just behind Plextor's 256 GB M5 Pro. We really want to know why the M500s fall in line the way they do, though.

Busy time accumulates whenever the SSD performs a task initiated by the host. So, when the operating system asks a drive to read or write, measured busy time increases. Divide the amount of data read and written by the trace by that busy time, and you get a far easier-to-understand MB/s figure. Unfortunately, busy time and the MB/s number derived from it aren't very good at measuring higher-queue-depth performance.

Our corner-case testing tells us that the M500s really pull away from each other as queue depth increases. However, all of these SSDs look basically the same under the background I/O generated by Windows, your Web browser, or most other mainstream applications. It's only when lots of reads and writes arrive in a short window that the mid-range and high-end drives stand apart. To test that, we need another metric.