- Page 1: A Lot More SandForce
- Page 2: How Can Seven SandForce-Based SSDs Differ?
- Page 3: Test Setup And Firmware Notes
- Page 4: What's Important: Steady-State Performance
- Page 5: Benchmark Results: Storage Bench v1.0 And Real-World Analysis
- Page 6: Benchmark Results: 4 KB Random Performance (Throughput)
- Page 7: Benchmark Results: 4 KB Random Performance (Response Time)
- Page 8: Benchmark Results: 128 KB Sequential Performance
- Page 9: Sequential Performance Versus Transfer Size
- Page 10: PCMark 7: Storage Suite
- Page 11: Final Words
Benchmark Results: 128 KB Sequential Performance
SSD manufacturers like to stress random performance because it's a clear case where they decimate conventional hard drives. Sequential performance is a different story, but it's still an important aspect of performance to examine.

But how prevalent is sequential I/O for the average user? Take a look at the graph below; it shows the distribution of all the seek distances from one of our traces.
The first thing you'll notice is the preponderance of activity zero sectors away, which means that our trace consists mostly of back-to-back requests, or sequential I/O. If the trace were 100% random, none of the accesses would land zero sectors away. But that's the opposite of what we see here. Why?
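A seek-distance distribution like the one above can be derived from any block-level trace by measuring how far each request starts from where the previous one ended. A minimal sketch, assuming requests arrive as (start LBA, length in sectors) pairs (the field names and toy trace are illustrative, not our actual trace format):

```python
# Toy seek-distance histogram from a block-level I/O trace.
# Each request is (start_lba, length_in_sectors); names are illustrative.
from collections import Counter

def seek_distances(requests):
    """Distance from the end of one request to the start of the next.
    Zero means the next request is perfectly sequential."""
    distances = []
    prev_end = None
    for start, length in requests:
        if prev_end is not None:
            distances.append(start - prev_end)
        prev_end = start + length
    return distances

# Mostly sequential trace with one random jump in the middle:
trace = [(0, 8), (8, 8), (16, 8), (5000, 8), (5008, 8)]
hist = Counter(seek_distances(trace))
# hist[0] dominates: most requests land zero sectors away.
```

In a trace dominated by sequential I/O, the bin at distance zero towers over everything else, which is exactly the shape of the graph above.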
Much of what you read and write to your storage device on a day-to-day basis is random in nature. But over the course of days and weeks, the read-modify-erase-write cycle has a significant effect on the balance of sequential and random I/O. When you write random data, the block containing that data accumulates invalid pages as you delete information (blocks are made up of multiple pages). Once a block holds enough invalid pages, garbage collection copies the remaining valid pages into a fresh block, writing them back to back. So, when you later read that information, you do so sequentially, even though it was originally written randomly. Over time, then, random reads turn into sequential reads. This isn't a wholesale transition; how far it goes depends on the firmware and the SSD controller's architecture.
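That consolidation step can be sketched as a toy model (a deliberate simplification; real controllers also juggle wear leveling, over-provisioning, and mapping tables):

```python
# Toy garbage collection: random writes scatter logical pages across
# blocks; GC copies the still-valid pages into a fresh block back to
# back, so a later read of that data is sequential. Purely illustrative.

def garbage_collect(block, valid_pages, free_block):
    """Copy valid pages from a mostly-invalid block into a fresh block,
    writing them sequentially, then erase the old block."""
    moved = [p for p in block if p in valid_pages]
    free_block.extend(moved)          # sequential rewrite of survivors
    block.clear()                     # erase the old block for reuse
    return free_block

# Logical pages A..D were written randomly into one block; B and D have
# since been overwritten elsewhere, so only A and C remain valid.
old_block = ["A", "B", "C", "D"]
new_block = garbage_collect(old_block, valid_pages={"A", "C"}, free_block=[])
# new_block now holds A and C contiguously: randomly written data
# that will read back sequentially.
```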
Intel's 250 GB SSD 510 leads the pack in reads, while the 240 GB Vertex 3 tops the chart in writes. Crucial's m4 drives perform admirably, but the 512 GB m4 falls 27% behind the 240 GB Vertex 3.
As we look at the 120 GB SF-2200-based drives, there's a clear split between SSDs armed with synchronous and asynchronous flash. In sequential reads, drives with asynchronous memory (Solid 3, Agility 3, and Force 3) fall behind the pricier synchronous-equipped SSDs (Vertex 3, S511, Wildfire, and Chronos Deluxe) by at least 65%. In sequential writes, the delta is smaller, ranging from 15% to 35%.
Don't be completely floored by those lower numbers. These aren't fresh-out-of-box results; they represent steady-state performance, which changes how the SSD behaves. It's a particularly punishing scenario because we test after each drive is filled with incompressible data, but before idle garbage collection has a chance to recover performance.
As most of you know, SandForce's architecture is most efficient when it's operating on compressible data. In the real world, compressible data is exactly what these drives would be working with most of the time, making these incompressible-data results, again, a worst-case situation for the SF-2200-based drives.
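The principle is easy to demonstrate. SandForce compresses data on the fly before writing it to flash; zlib here is only a stand-in for that proprietary engine, but it shows why an incompressible fill is the worst case:

```python
# zlib as a stand-in for on-the-fly compression: typical, repetitive
# user data shrinks dramatically, while random bytes (a proxy for
# already-compressed or encrypted data) do not shrink at all.
import os
import zlib

text = b"user data is often repetitive " * 1000   # compressible payload
noise = os.urandom(30000)                          # incompressible payload

ratio_text = len(zlib.compress(text)) / len(text)
ratio_noise = len(zlib.compress(noise)) / len(noise)
# ratio_text is a tiny fraction of 1.0; ratio_noise stays at (or above)
# 1.0. A controller that writes less physical data than it receives has
# less work to do per host write; with incompressible data, that
# advantage disappears entirely.
```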