SanDisk Extreme II SSD Review: Striking At The Heavy-Hitters

Results: Random Performance

Iometer is still our synthetic metric of choice for testing 4 KB random performance. Technically, "random" means that each access lands more than one sector away from the previous one. On a mechanical hard disk, this can lead to significant latencies that hammer performance. Spinning media simply handles sequential accesses much better than random ones, since the heads don't have to be physically repositioned. With SSDs, the random/sequential distinction is much less relevant. Data can be placed wherever the controller wants it, so the idea that the operating system sees one piece of information next to another is mostly just an illusion.
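To make that pattern concrete, here's a minimal sketch of a QD1 4 KB random-read loop, similar in spirit to what Iometer generates. It is not Iometer: the target file name is our own placeholder, and a real run would hit a raw volume with direct I/O and deeper queues.

```python
# Minimal sketch of a 4 KB random-read pattern over a 16 GB span.
# Illustrative only: page caching and the single-threaded QD1 loop
# mean the printed figure is not a real benchmark number.
import os
import random
import time

PATH = "testfile.bin"    # hypothetical target; Iometer would hit a raw volume
BLOCK = 4096             # 4 KB transfer size
SPAN = 16 * 1024**3      # 16 GB LBA space, matching our test setup
IOS = 100_000            # number of random reads to issue

fd = os.open(PATH, os.O_RDONLY)
start = time.perf_counter()
for _ in range(IOS):
    # "Random" here means each access lands on an arbitrary 4 KB-aligned
    # offset, almost never adjacent to the previous one.
    offset = random.randrange(SPAN // BLOCK) * BLOCK
    os.pread(fd, BLOCK, offset)
elapsed = time.perf_counter() - start
os.close(fd)
print(f"{IOS / elapsed:,.0f} IOPS (QD1, illustrative)")
```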

4 KB Random Read

Plextor's M5 Pro and the SanDisk drives offer similar performance. Throughout the capacity range, the Extreme IIs are competitive. The 120 GB model isn't as strong, but it's almost exactly as fast as the 240 GB Seagate 600.

The 240 GB and 480 GB Extreme IIs don't quite hit 100,000 IOPS, but there's no shame in 94,000 and 91,000 IOPS, either.

4 KB Random Write

And then things seem to go pear-shaped. A glance at the above chart makes it clear that SanDisk's drives aren't living up to their specifications. Shouldn't they be hitting 80,000 IOPS or so?

The explanation is relatively simple. We test with random data over a 16 GB LBA space. Industry-wide, most consumer-oriented SSD tests are limited to 8 GB. Now, this doesn't matter most of the time. Hard drives are especially sensitive to LBA active ranges, since spinning platters and floating heads need more time to move when the data you request is physically farther away. Solid-state storage obviously isn't subject to the same limitation, though some SSDs are more sensitive to changes in LBA ranges than others. The difference just usually isn't so profound.
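As a rough sketch of what "LBA active range" means in practice, the loop below confines an identical 4 KB random-write load first to an 8 GB slice of the target, then to 16 GB. The plain file and synchronous loop are simplifications on our part; the point is only that the span is the sole difference between the two runs.

```python
# Sketch: identical 4 KB random-write loads over 8 GB and 16 GB active ranges.
# A real test would precondition and target a raw device; this only shows how
# the active LBA range is confined.
import os
import random
import time

PATH = "testfile.bin"     # hypothetical target file
BLOCK = 4096
WRITES = 50_000
buf = os.urandom(BLOCK)   # incompressible payload

fd = os.open(PATH, os.O_WRONLY)
for span_gb in (8, 16):               # industry-typical range, then ours
    span = span_gb * 1024**3
    start = time.perf_counter()
    for _ in range(WRITES):
        offset = random.randrange(span // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)
    os.fsync(fd)
    elapsed = time.perf_counter() - start
    print(f"{span_gb:>2} GB active range: {WRITES / elapsed:,.0f} IOPS")
os.close(fd)
```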

Using the 240 GB Extreme II, we can demonstrate this idiosyncrasy. Starting with 1 GB of sectors and graduating to the entire LBA range, performance drops substantially by the time we reach 16 GB. There are technical reasons why this might happen to a lesser degree with other SSDs, but it looks like SanDisk's nCache implementation can slow random writes at high queue depths spread across a large number of LBAs. A tradeoff may be at work: nCache accelerates small random writes so long as they fit in the cache, but once the working set outgrows its capacity, writes spill over and performance suffers.
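If that reading is right, the effect is easy to model. The toy below assumes a fixed-size write cache with fast service times that spills to slower flash once it overflows. Every number in it is invented for illustration, and it reflects our interpretation of nCache, not SanDisk's documented design.

```python
# Toy model of a cache-overflow tradeoff: small random-write bursts fit in a
# fast cache, while larger footprints spill to slower flash. All figures are
# invented; this is our hypothesis about nCache, not its actual design.
CACHE_WRITES = 1_000      # writes the cache can absorb (assumed)
FAST_US = 12              # service time for a cached write, in µs (assumed)
SLOW_US = 110             # service time once the cache is full (assumed)

def mean_iops(n_writes: int) -> float:
    cached = min(n_writes, CACHE_WRITES)
    spilled = n_writes - cached
    total_us = cached * FAST_US + spilled * SLOW_US
    return n_writes / (total_us / 1e6)

for n in (500, 1_000, 5_000, 50_000):
    print(f"{n:>6} writes -> {mean_iops(n):>9,.0f} IOPS")
```

Small bursts complete at cache speed; once the footprint dwarfs the cache, throughput converges on the slow path. That is the shape we see as the LBA range grows.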

Is this a problem? In a word, no.

Typically, random workloads bombarding the entire drive are considered enterprise-oriented. Consumer usage just doesn't match that profile. Random writes are more typically limited to smaller areas, and the amount of writing is exceptionally light. The fact of the matter is that SanDisk's Extreme II was designed for desktop workloads. Wringing the last few drops of performance from an interface-limited SSD means taking steps to improve one area at the expense of others.

The trade-off seems fair. The Extreme II is less useful for some enterprise applications, but better as a boot and desktop application drive. We can live with that.

Besides, the Extreme IIs aren't even as badly off as they might appear. Consider the above 4 KB write saturation test at a queue depth of 32. Sure, the drives start well off their highs. But after the SSDs are filled and garbage collection is in full swing, SanDisk's solutions aren't any worse than competing models. In some cases, they're even better. Performance levels off between 7,000 and 10,000 IOPS, depending on capacity. If nothing else, that's competitive. Regardless, there's just no reason you'd ever write like this on a gaming rig.
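For reference, a saturation pass like that can be approximated with 32 concurrent workers, each issuing 4 KB random writes, which is roughly what a queue depth of 32 amounts to. The threads and the plain file are, again, our own simplifications.

```python
# Rough QD32 write-saturation sketch: 32 workers issue 4 KB random writes
# across the full span. os.pwrite releases the GIL during the syscall, so
# the threads genuinely overlap their I/O. Illustrative only.
import os
import random
import threading
import time

PATH = "testfile.bin"         # hypothetical target file
BLOCK = 4096
SPAN = 16 * 1024**3
QD = 32                       # outstanding I/Os, matching the test above
WRITES_PER_WORKER = 25_000
buf = os.urandom(BLOCK)

fd = os.open(PATH, os.O_WRONLY)

def worker() -> None:
    for _ in range(WRITES_PER_WORKER):
        offset = random.randrange(SPAN // BLOCK) * BLOCK
        os.pwrite(fd, buf, offset)

threads = [threading.Thread(target=worker) for _ in range(QD)]
start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start
os.close(fd)
print(f"{QD * WRITES_PER_WORKER / elapsed:,.0f} IOPS at QD{QD}")
```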
