Page 1:A Lot More SandForce
Page 2:How Can Seven SandForce-Based SSDs Differ?
Page 3:Test Setup And Firmware Notes
Page 4:What's Important: Steady-State Performance
Page 5:Benchmark Results: Storage Bench v1.0 And Real-World Analysis
Page 6:Benchmark Results: 4 KB Random Performance (Throughput)
Page 7:Benchmark Results: 4 KB Random Performance (Response Time)
Page 8:Benchmark Results: 128 KB Sequential Performance
Page 9:Sequential Performance Versus Transfer Size
Page 10:PCMark 7: Storage Suite
Page 11:Final Words
Benchmark Results: 4 KB Random Performance (Throughput)
Our Storage Bench v1.0 mixes random and sequential operations. However, it's still important to isolate 4 KB random performance because it represents such a large portion of day-to-day desktop activity. Right after Storage Bench v1.0, we subject the drives to Iometer to test random 4 KB performance. But why 4 KB specifically?
When you open Firefox, browse multiple Web pages, and write a few documents, you're mostly performing small random read and write operations. The chart above comes from analyzing Storage Bench v1.0, but it epitomizes what you'll see when you analyze any trace from a desktop computer. Notice that close to 70% of all of our accesses are eight sectors in size (512 bytes per sector, thus 4 KB).
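The sector math behind that figure is simple, but worth spelling out. A minimal sketch in Python (the function name is ours, purely for illustration):

```python
SECTOR_SIZE = 512  # bytes per sector on these drives

def transfer_size_kb(sectors):
    """Convert an access length in sectors to kilobytes."""
    return sectors * SECTOR_SIZE / 1024

# An eight-sector access works out to 4 KB:
print(transfer_size_kb(8))  # 4.0
```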
We're restricting Iometer to test an LBA space of 16 GB because a fresh install of a 64-bit version of Windows 7 takes up nearly that amount of space. In a way, this examines the performance that you would see from accessing various scattered file dependencies, caches, and temporary files.
If you're a typical PC user, it's important to examine performance at a queue depth of one, because this is where the majority of your accesses are going to fall on a machine that isn't being hammered by I/O commands.
Before we get to the numbers, note that we're presenting random performance in MB/s instead of IOPS. There is a direct relationship between the two units: average transfer size * IOPS = MB/s. Most workloads tend to be a mixture of different transfer sizes, which is why the networking ninjas in IT prefer IOPS; it reflects the number of transactions that occur per second. Since we're only testing with a single transfer size, it's more relevant to look at MB/s (it's also more intuitive for "the rest of us"). If you want to convert back to IOPS, just take the MB/s figure and divide by 0.004096 MB (remember your units) for the 4 KB transfer size.
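That conversion can be sketched in a few lines of Python (the function names are our own, for illustration only):

```python
TRANSFER_SIZE_MB = 0.004096  # 4 KB = 4096 bytes = 0.004096 MB

def iops_from_mbps(mbps, transfer_mb=TRANSFER_SIZE_MB):
    """Convert throughput in MB/s back to IOPS for a fixed transfer size."""
    return mbps / transfer_mb

def mbps_from_iops(iops, transfer_mb=TRANSFER_SIZE_MB):
    """Convert IOPS to throughput in MB/s for a fixed transfer size."""
    return iops * transfer_mb

# A 200 MB/s random read result at 4 KB works out to:
print(round(iops_from_mbps(200)))  # 48828 IOPS
```

Note that converting in one direction and back is lossless, since both functions use the same fixed transfer size.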
At a queue depth of one, the 512 GB and 256 GB m4s reign supreme in random reads; both push past 200 MB/s. The closest contender is OCZ's 120 GB Vertex 3, but it falls behind by 25% with a random read rate of 153 MB/s. The Agility 3 follows closely at 138 MB/s, but all of the other drives fall behind by a noticeable margin. The S511, Force 3, Solid 3, Wildfire, and Chronos Deluxe all run about 50% slower, with speeds hovering around 90 MB/s.
In random writes, the story changes. This time Crucial's 256 GB and 512 GB m4s drop behind the OCZ 120 GB Vertex 3 and Agility 3, albeit by a smaller margin than the random read test (the 120 GB Agility 3 only runs 13% faster than the 256 GB m4). The Force 3, Wildfire, Chronos Deluxe, and S511 perform much better here as well with speeds around 210 MB/s.
Notice how far back the Solid 3 slides, though. With a random write speed of 86 MB/s, the 120 GB Solid 3 is outperformed even by Crucial's 64 GB m4. The explanation relates back to firmware. The Solid 3 is a more budget-oriented version of the Agility 3. While both SSDs use the same 25 nm ONFi 1.0-based flash, the Solid 3's firmware is more performance-restricted. The company eventually plans to use less expensive NAND configurations to help drop cost (and then price), while maintaining the same lower-rated performance spec.
Perhaps you're also wondering why the 240 GB Vertex 3 runs slower than the 120 GB version at a queue depth of one. When you're not hammering the drive with higher queue depths, accesses are limited by the speed at which cache hits and misses occur in the metadata lookup table. Thus, a large-capacity SSD with a larger lookup table experiences lower throughput, as it must search through more metadata.
As we start looking at queue depths higher than four, we finally see the 240 GB Vertex 3 overtake its 120 GB counterpart because performance is no longer bound by the same lookup table bottleneck. When you have outstanding I/Os stacking up, there are enough accesses that the SSD controller can fully saturate the table with multiple queries. It's a clear lead that the drive maintains in random writes, where the 240 GB Vertex 3 finishes in first place.
However, in random reads, that honor goes to the 256 GB and 512 GB m4s.