RAID Scaling Charts, Part 3: 4-128 kB Stripes Compared

Conclusion

Although the stripe size of a RAID array sounds like a negligible detail, the difference between a small and a large stripe size can be greater than the performance impact of adding a hard drive to your array! Hence, spending some time selecting the right stripe size for your particular server application makes a lot of sense. For desktop users, it is safe to say that you may very well stay at the default stripe size of 32 or 64 kB; the differences you can achieve by changing it are usually not worth the effort.

Our dual core Opteron test system with Areca's ARC-1220 controller provides a great platform to cross-test RAID 5 and RAID 6 setups with three to eight hard drives. As we mentioned in the initial RAID Scaling articles, the controller reaches its limit at just below 500 MB/s, which provides sufficient bandwidth for most RAID 5 and all RAID 6 scenarios, but not necessarily for RAID 0. However, the controller is great for I/O testing, and we found significant I/O performance differences when running tests at all possible RAID stripe sizes (4-128 kB).

In the best case, I/O performance more than doubles on the way from a 4 kB to a 128 kB stripe size when command queues are involved. A RAID 0 setup with eight drives provides 350-800 I/O operations per second at a 4 kB stripe size across 1-64 pending commands, but 250-1900 I/Os per second at 128 kB stripes - more than double the peak performance. In a RAID 5 array, the relative increase is the same, while the absolute results are 300-500 I/Os per second at 4 kB stripes and 220-1100 I/Os at 128 kB. These results are slightly lower in RAID 6.

Throughput also benefits from larger stripe sizes, although the performance differences are rather small compared to the tremendous differences in I/O performance. Still, it is possible to get a 10% improvement in data transfer performance just by increasing the stripe size. The disadvantages of large stripe sizes are reduced I/O performance when no command queue is involved, as well as a rather annoying waste of storage capacity and potential performance if the files you store are smaller than the stripe size.

Please also have a look at our initial RAID Scaling Chart articles.

Join our discussion on this article!

  • alanmeck
    I've found conflicting opinions regarding stripe size, so I did my own tests (though I don't have precision measuring tools like you guys). My RAID 0 is for gaming only, so all I cared about was loading time. So I used a stopwatch to measure the difference in loading times in Left 4 Dead when using 64 kB and 128 kB stripe sizes. Results, by map:
    Loading time (seconds)   64 kB   128 kB
    No Mercy, Level 1         9.15     9.08
    No Mercy, Level 2         8.31     8.38
    No Mercy, Level 3         8.24     8.31
    No Mercy, Level 4         8.45     8.45
    No Mercy, Level 5         6.56     6.63
    Death Toll, Level 1       7.75     7.89
    Death Toll, Level 2       7.19     7.26
    Death Toll, Level 3       9.01     8.94
    Death Toll, Level 4       9.36     9.36
    Death Toll, Level 5       9.50     9.64
    Dead Air, Level 1         7.68     7.47
    Dead Air, Level 2         7.96     8.03
    Dead Air, Level 3         9.08     8.87
    Dead Air, Level 4         8.17     8.17
    Dead Air, Level 5         6.98     6.84
    Blood Harvest, Level 1    8.24     8.17
    Blood Harvest, Level 2    7.33     7.33
    Blood Harvest, Level 3    7.68     7.68
    Blood Harvest, Level 4    8.45     8.31
    Blood Harvest, Level 5    7.89     8.10

    I'm using software RAID 0 on my GA-870A-UD3 mobo. The results for me were almost identical (128 kB was faster by .07 seconds total). That being the case, I erred on the side of 128 kB in order to reduce the potential for write amplification (I'm using 3x OCZ Vertex 2s). What's remarkable is that, despite using the stopwatch to measure times manually, the results were always either identical or separated by intervals of .07 seconds. Weird, huh? Btw, thanks to Tom's Hardware - you guys give a lot of helpful info.
    Reply
  • Does anyone want a slower system? Why do we have to choose? Why do we not just get the fastest option without having to do this? Or is that too simple!
    Reply
  • I wish we could see what 256 does. Or even 1024, but that just sounds like a waste of space unless you're doing video or music. Maybe gaming, but RAM and bandwidth will always give you an edge if no one has hacked the game.
    Reply
  • Shomare
    I agree... Can you please look into getting one of the new Areca ARC-1882 controllers with 1+ GB of memory on it and a dual core 800 MHz processor? We would like to see the results with the larger stripe sizes, the faster processor, and the larger memory footprint! :)
    Reply
  • dermoth
    There is a misconception in this article. The point about capacity used: "For example, if you selected a 64 kB stripe size and you store a 2 kB text file, this file will occupy 64 kB." This is totally wrong.

    The only thing that affects the size used is the FILESYSTEM block size, as all files stored are rounded up to the next whole block (the last block being only partially filled). To the OS, the RAID array still looks like any other storage device, and multiple filesystem blocks can be stored within a single stripe element.
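
    A quick illustration of that rounding (a minimal Python sketch; the 4 kB block size and the file sizes are assumptions for the example, not figures from the article):

      import math

      def allocated_size(file_size: int, fs_block_size: int = 4096) -> int:
          """Space a file actually occupies: its size rounded up to whole
          filesystem blocks. The RAID stripe size plays no role here."""
          return math.ceil(file_size / fs_block_size) * fs_block_size

      # A 2 kB text file on a filesystem with 4 kB blocks occupies 4 kB,
      # regardless of whether the array uses 64 kB stripe elements.
      print(allocated_size(2 * 1024))             # 4096
      print(allocated_size(2 * 1024, 64 * 1024))  # 65536, only if the FS block were 64 kB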

    Also note that on RAID 5 & 6, the stripe size is the stripe element size multiplied by the number of data disks, and writes are fastest when full stripes are written at once. If you write only 4 kB into a 384 kB stripe (e.g. a 64 kB stripe element on an 8-disk RAID 6), then all the other sectors have to be read from disk before the controller can write out the 4 kB block and the updated parity data.
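
    Putting numbers on that (an illustrative Python sketch of the same 8-disk RAID 6 example: six data disks plus two parity disks, 64 kB stripe elements; the "read the rest of the stripe" model follows the description above, though a controller may instead read only the old data and old parity):

      # Read-modify-write cost for a partial-stripe write (illustrative figures).
      STRIPE_ELEMENT = 64 * 1024                    # per-disk chunk size
      DATA_DISKS     = 6                            # 8-disk RAID 6 = 6 data + 2 parity
      FULL_STRIPE    = STRIPE_ELEMENT * DATA_DISKS  # 384 kB of data per stripe

      write_size = 4 * 1024                         # the 4 kB write from the example
      if write_size < FULL_STRIPE:
          # The controller must read the rest of the stripe (or old data
          # plus old parity) before it can compute and write new parity.
          extra_reads = FULL_STRIPE - write_size
          print(f"4 kB write touches a {FULL_STRIPE // 1024} kB stripe; "
                f"up to {extra_reads // 1024} kB must be read first.")
      else:
          print("Full-stripe write: parity is computed from the new data alone.")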

    You will get better performance if you manage to match the filesystem block size to the FULL RAID stripe size, and only in that case is the statement above true. Many filesystems offer other means of tuning filesystem I/O access patterns to match the underlying RAID geometry without having to use excessively large block sizes, and most filesystems default to 4 kB blocks, which is also what most standalone rotational media have used internally for many years (even when they expose 512-byte sectors for compatibility).
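
    For ext4, for instance, that kind of tuning is commonly done through the stride and stripe-width extended options of mkfs.ext4. A small Python sketch to compute them (the geometry repeats the 8-disk RAID 6 example above, and /dev/sdX is a placeholder, both assumptions for illustration):

      # Compute ext4 stride/stripe-width for: mkfs.ext4 -E stride=N,stripe-width=M
      fs_block   = 4096        # default ext4 block size
      element    = 64 * 1024   # RAID stripe element (chunk) size
      data_disks = 6           # 8-disk RAID 6 example: 6 data disks

      stride       = element // fs_block  # filesystem blocks per stripe element
      stripe_width = stride * data_disks  # filesystem blocks per full data stripe

      print(f"mkfs.ext4 -E stride={stride},stripe-width={stripe_width} /dev/sdX")
      # -> mkfs.ext4 -E stride=16,stripe-width=96 /dev/sdX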
    Reply