As we have pointed out in the past (and as you have likely concluded on your own), an enterprise storage workload is quite different from a desktop or client workload. The differences between them affect how we test, analyze, and evaluate enterprise-oriented devices. The slide below, from last year’s Flash Memory Summit, gives a great overview of the differences.
SSDs are not easy to evaluate. Unlike traditional rotating disks, whose performance is largely a function of mechanics, solid-state drives are affected by many factors that are difficult to control, such as their write history and background garbage collection activity.
The Storage Networking Industry Association (SNIA), through a technical working group made up of SSD, flash, and controller vendors, has produced a testing procedure that attempts to control as many of the variables inherent to SSDs as possible. SNIA’s Solid State Storage Performance Test Specification (SSS PTS) is a great resource for enterprise SSD testing. The procedure does not define which tests should be run, but rather the way in which they are run. This workflow is broken down into four parts:
- Purge: Purging puts the drive at a known starting point. For SSDs, this normally means Secure Erase.
- Workload-Independent Preconditioning: A prescribed workload that is unrelated to the test workload.
- Workload-Based Preconditioning: The actual test workload (4 KB random, 128 KB sequential, and so on), which pushes the drive towards a steady state.
- Steady State: The point at which the drive’s performance is no longer changing for the variable being tracked.
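The steady-state condition in the final step can be made concrete. Roughly, the PTS examines a five-round measurement window and requires that the tracked metric stay within a fixed excursion of its window average, and that the slope of a linear best fit across the window stay within a tighter bound. The sketch below captures that idea; the function name and the exact thresholds are our own simplification, so consult the PTS itself for the normative definitions.

```python
# Simplified sketch of a SNIA-PTS-style steady-state check. The 20%/10%
# thresholds and five-round window approximate the spec; they are not the
# normative definition.

def is_steady_state(samples, window=5):
    """samples: per-round performance measurements (e.g. IOPS), oldest first."""
    if len(samples) < window:
        return False
    w = samples[-window:]
    avg = sum(w) / window
    # Criterion 1: total excursion within the window is at most 20% of the average.
    if max(w) - min(w) > 0.20 * avg:
        return False
    # Criterion 2: least-squares slope, scaled to the window width,
    # is within 10% of the average (i.e., no sustained drift).
    n = window
    x_mean = (n - 1) / 2
    slope = (sum((x - x_mean) * (y - avg) for x, y in zip(range(n), w))
             / sum((x - x_mean) ** 2 for x in range(n)))
    return abs(slope * (n - 1)) <= 0.10 * avg

# A flat series passes; a series still settling after a purge does not.
print(is_steady_state([1000, 1005, 995, 1002, 998]))
print(is_steady_state([5000, 4000, 3000, 2500, 2200]))
```

The two-part test matters: a window can have a small excursion yet still be sliding steadily downward, which is exactly the fresh-out-of-box decay the preconditioning steps are meant to flush out.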
These steps are critical when testing SSDs. It is remarkably easy to under-condition a drive, observe fresh-out-of-box performance, and mistake it for steady state. The steps are equally important when moving between random and sequential writes, since each workload leaves the drive in a different internal state.
The graph below demonstrates the rationale behind SNIA's guidelines using Intel's SSD 910. We first performed a Secure Erase (Purge), followed by five full-disk writes of random 4 KB data (Workload-Independent Preconditioning). Then, we wrote the full capacity of the disk four times in a row with 8 MB sequential writes (Workload-Based Preconditioning). It wasn’t until the fourth full-disk write that the drive reached Steady State.
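For readers who want to replicate this kind of conditioning, the sequence maps naturally onto hdparm and fio invocations. The sketch below builds the commands without running them; the device path, job names, and queue depths are illustrative placeholders, not the exact commands used in this review, and executing them against a real device destroys its contents.

```python
# Hedged sketch of a purge + preconditioning sequence using hdparm and fio.
# /dev/sdX is a placeholder; all parameters are illustrative, not prescriptive.
import shlex

DEV = "/dev/sdX"  # placeholder device path -- never point this at a live disk

steps = [
    # Purge: ATA Secure Erase (a security password must be set beforehand).
    f"hdparm --user-master u --security-erase p {DEV}",
    # Workload-Independent Preconditioning: five full-device 4 KB random writes.
    f"fio --name=wipc --filename={DEV} --rw=randwrite --bs=4k "
    f"--direct=1 --ioengine=libaio --iodepth=32 --loops=5",
    # Workload-Based Preconditioning: four full-device 8 MB sequential writes.
    f"fio --name=wbpc --filename={DEV} --rw=write --bs=8m "
    f"--direct=1 --ioengine=libaio --iodepth=32 --loops=4",
]

for cmd in steps:
    argv = shlex.split(cmd)
    print(argv[0], "->", len(argv), "arguments")
    # subprocess.run(argv, check=True)  # uncomment to actually execute
```

fio's `--loops` option repeats the job, so with `--filename` pointing at the whole device each loop amounts to one full-capacity write, mirroring the five-pass and four-pass fills described above.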
For all performance tests in this review, the SSS PTS was followed to ensure accurate and repeatable results.
Finally, the SSS PTS mandates that all data patterns be random. This normalizes results across SSDs whose controllers optimize performance for compressible data. Because the compressibility of real-world data is highly case-dependent, random data is used where applicable in the performance tests to represent the worst case. It should be noted that Intel's SSD 910 does not perform any data compression, so its results with compressible data are identical.
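The gap between the two cases is easy to demonstrate. In the sketch below, zlib stands in for whatever a compressing controller (such as a SandForce design) does internally; the 1 MiB buffer size is arbitrary.

```python
# Why random data is the worst case for compressing controllers: an
# all-zero buffer shrinks to almost nothing, while random data does not
# compress at all. zlib is only a stand-in for controller-side compression.
import os
import zlib

SIZE = 1 << 20  # 1 MiB test buffer

compressible = b"\x00" * SIZE   # best case: trivially compressible
random_data = os.urandom(SIZE)  # worst case: incompressible

print(len(zlib.compress(compressible)))  # a tiny fraction of SIZE
print(len(zlib.compress(random_data)))   # roughly SIZE (can even exceed it)
```

A controller that compresses in-line writes far less flash in the first case, which is exactly why a drive like the Z-Drive R4 posts very different numbers depending on the data pattern, while a non-compressing drive like the SSD 910 does not.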
Intel sent us an 800 GB sample of its SSD 910 for evaluation. We ran tests in both Maximum Performance mode and its Default mode. To simulate the performance of the 400 GB model, we only configured two of the four NAND modules, per Intel’s instructions. The evaluation unit did not come with a full-height PCIe bracket, so testing was performed without one installed.
For comparison purposes, we're putting Intel's SSD 910 up against OCZ's Z-Drive R4 RM88 1.6 TB. Is this a fair fight? No, it isn’t. The R4 sports twice the capacity, twice as many controllers, requires a ¾-height, full-length PCIe slot, and sells for around $7/GB. Since the R4 uses SandForce-based controllers, though, we wanted to see how much of a fight the SSD 910 can put up, especially since most of our testing is performed with incompressible data, a known weakness of SandForce's technology.