More Background On Our Benchmarks

OCZ Octane 512 GB SSD Review: Meet Indilinx's Everest Controller

4 KB Random

Our Storage Bench v1.0 mixes random and sequential operations. However, it's still important to isolate 4 KB random performance because that's such a large portion of what you're doing on a day-to-day basis. Right after Storage Bench v1.0, we subject the drives to Iometer to test random 4 KB performance. But why specifically 4 KB?

When you open Firefox, browse multiple Web pages, and write a few documents, you're mostly performing small random read and write operations. The chart above comes from analyzing Storage Bench v1.0, but it epitomizes what you'll see when you analyze any trace from a desktop computer. Notice that close to 70% of all of our accesses are eight sectors in size (512 bytes per sector, thus 4 KB).
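To make that kind of analysis concrete, here's a minimal sketch of tallying the transfer-size distribution of a trace. The request list and function names are illustrative only, not our actual trace format or tooling:

```python
from collections import Counter

# Hedged sketch: tallying the transfer-size distribution of an I/O trace.
# Request sizes are in 512-byte sectors; the sample data is illustrative.

def size_distribution(sizes_in_sectors):
    """Return {size: fraction of all requests} for a list of request sizes."""
    counts = Counter(sizes_in_sectors)
    total = len(sizes_in_sectors)
    return {size: count / total for size, count in counts.items()}

# Sample: seven 8-sector (4 KB) requests, two 16-sector, one 256-sector
trace_sizes = [8] * 7 + [16] * 2 + [256]
dist = size_distribution(trace_sizes)
print(dist[8])  # → 0.7, i.e. 70% of accesses are 4 KB
```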

We're restricting Iometer to test an LBA space of 16 GB because a fresh install of a 64-bit version of Windows 7 takes up nearly that amount of space. In a way, this examines the performance that you would see from accessing various scattered file dependencies, caches, and temporary files.

If you're a typical PC user, it's important to examine performance at a queue depth of one, because this is where the majority of your accesses are going to fall on a machine that isn't being hammered by I/O commands.

Before we get to the numbers, note that we're presenting random performance in MB/s instead of IOPS. There is a direct relationship between the two units: average transfer size * IOPS = MB/s. Most workloads tend to be a mixture of different transfer sizes, which is why the networking ninjas in IT prefer IOPS; it reflects the number of transactions that occur per second. Since we're only testing with a single transfer size, it's more relevant to look at MB/s (it's also more intuitive for "the rest of us"). If you want to convert back to IOPS, just take the MB/s figure and divide by 0.004096 MB (remember your units) for the 4 KB transfer size.
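The conversion above can be sketched in a few lines. This assumes the article's units, with 4 KB = 4096 bytes and 1 MB = 1,000,000 bytes; the function names are ours, for illustration only:

```python
# Hedged sketch: converting between MB/s and IOPS for a fixed transfer size.
# Assumes 4 KB = 4096 bytes and decimal megabytes (1 MB = 1,000,000 bytes).

TRANSFER_SIZE_MB = 4096 / 1_000_000  # the 4 KB transfer size, in MB

def iops_from_mbps(mbps: float) -> float:
    """IOPS = throughput / transfer size."""
    return mbps / TRANSFER_SIZE_MB

def mbps_from_iops(iops: float) -> float:
    """MB/s = transfer size * IOPS."""
    return iops * TRANSFER_SIZE_MB

# Example: a drive sustaining 80 MB/s of 4 KB random transfers
print(iops_from_mbps(80))        # → 19531.25
print(mbps_from_iops(19531.25))  # → 80.0
```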

128 KB Sequential

SSD manufacturers often want to stress random performance because it's a clear case where they decimate conventional hard drives. Sequential performance is a little different, but still represents an important aspect of performance to examine.

But how prevalent is sequential access for the average user? Take a look at the graph below; it shows the distribution of all the seek distances from one of our traces.

The first thing you'll notice is that there's a preponderance of activity zero sectors away, which means that our trace is made up mostly of back-to-back requests, or sequential I/O. If the trace were 100% random, none of the accesses would be zero sectors away.
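As a rough illustration of how a trace can be classified this way, here's a hedged sketch that computes the fraction of back-to-back requests. The (start LBA, length-in-sectors) request format and the function name are our own, not taken from any real trace tooling:

```python
# Hedged sketch: measuring how sequential an I/O trace is.
# Each request is (start_lba, length_in_sectors); both illustrative only.

def sequential_fraction(requests):
    """Fraction of requests beginning zero sectors after the previous one ended."""
    back_to_back = 0
    for prev, curr in zip(requests, requests[1:]):
        prev_start, prev_len = prev
        curr_start, _ = curr
        seek_distance = curr_start - (prev_start + prev_len)
        if seek_distance == 0:
            back_to_back += 1
    return back_to_back / (len(requests) - 1)

# A mostly sequential trace: three back-to-back 8-sector reads, one jump
trace = [(0, 8), (8, 8), (16, 8), (5000, 8), (5008, 8)]
print(sequential_fraction(trace))  # → 0.75
```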

More and more of your data is becoming sequential in nature, especially if you're watching movies and listening to music. Consider that most webpages contain less than 1 MB worth of data and most emails less than 16 KB. Office productivity isn't particularly disk-intensive, but those workloads pale in comparison to multimedia; a two-minute movie transfer can easily exceed 200 MB.

Of course, this doesn't even touch the subject of gaming. We've traced six games now, and except in the case of MMORPGs, we've found gameplay-related data to be mostly sequential. First-person shooters like Crysis 2 are particularly data-heavy; just 20 minutes of gameplay can involve reading and writing over 1 GB of data.
