
RAID0 benchmarks showing random I/O performance

April 9, 2010 2:31:33 AM

People often say that RAID0 improves only throughput, so I decided to put together a benchmarking article showing the random I/O performance when striping two disks. It shows the relation between stripe size and queue depth.

It also shows a healthy increase in IOps when properly configured. It seems that throughput actually prefers smaller stripe sizes, while random I/O prefers larger stripe sizes, the opposite of what many people think.

It should be noted, however, that these tests were conducted on FreeBSD, not on Windows. Windows onboard RAID drivers often employ low-level optimizations, such as always reading the whole stripe block even when only part of it was requested, which would lead to lower random I/O performance at higher stripe sizes. On FreeBSD, everything works as expected. In some cases the performance increase is close to the theoretical 100%. Very remarkable.

Alright, here is the URL with all the nice graphs:

http://submesa.com/data/raid/geom_stripe

I've added my own comments in the article, but I'd love to discuss the interpretation of the results. They appear to confirm my theory that a lower stripe size hurts random I/O performance instead of helping it. Interestingly, a stripe size of 1 MiB gave me higher results than a 128 KiB stripe. I would like to know why. :)
April 9, 2010 1:36:10 PM

Thanks for the article.
October 6, 2010 9:20:05 PM

Nice article; it actually confirmed my thoughts: real-life performance depends on stripe size and on what you actually do with your system. As for the 1 MB stripe size out-performing the 128 KB size: 1 MB is probably the right amount of data for a hard drive to service while the next I/O request is coming in. Since a medium-performance HDD (not SSD) delivers 50-90 MB/s on reads, 1 MB / 50 MB/s to 1 MB / 90 MB/s translates to a range of 11-20 ms, which is the time taken by the drive to seek and deliver. Does that make sense?
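A quick sketch of that back-of-the-envelope calculation (the 50-90 MB/s transfer figures are the assumption above, not measured values):

```python
def service_time_ms(request_bytes, throughput_mb_s):
    """Time (ms) to transfer request_bytes at a given MB/s rate,
    ignoring seek latency."""
    return request_bytes / (throughput_mb_s * 1_000_000) * 1000

# 1 MB request at the assumed 50-90 MB/s drive transfer rate
for rate in (50, 90):
    t = service_time_ms(1_000_000, rate)
    print(f"1 MB at {rate} MB/s: {t:.1f} ms")
```

That reproduces the 11-20 ms range quoted above.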
October 6, 2010 11:04:37 PM

Even with a 128 KiB or higher stripe size, it can still happen that a 128 KiB I/O request covers multiple disks (multiple stripes), when the request does not begin at a stripe boundary. The tests I've done use TRUE random I/O, aligned on 512 bytes. As such, higher stripe sizes would still yield higher random IOps.
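To illustrate why, here is a small sketch (my own model, not from the article) of the chance that a 512-byte-aligned request starting at a uniformly random offset within a stripe crosses a stripe boundary and therefore touches two disks:

```python
def split_probability(request_bytes, stripe_bytes, align=512):
    """Probability that a request of request_bytes, starting at a uniformly
    random align-byte offset within a stripe, crosses a stripe boundary.
    Assumes request_bytes <= stripe_bytes."""
    assert request_bytes <= stripe_bytes
    positions = stripe_bytes // align          # possible aligned start offsets
    crossing = (request_bytes - align) // align  # offsets where start + size > stripe
    return crossing / positions

for stripe_kib in (128, 1024):
    p = split_probability(128 * 1024, stripe_kib * 1024)
    print(f"128 KiB request on {stripe_kib} KiB stripe: {p:.1%} split chance")
```

Under this model a 128 KiB request on a 128 KiB stripe almost always spans two disks (~99.6%), while on a 1 MiB stripe it only does so ~12.5% of the time, which fits the observation that the larger stripe size gave higher random IOps.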

An HDD manages about 0.5 MB/s of true 4K random reads. For 512-byte random reads, that is 50-100 IOps for 5400/7200 rpm disks, which is less than 50 kilobytes per second. The Intel X25-M is capable of 50,000 random read IOps (512-byte). This scenario shows the largest performance difference between HDD and SSD.

Random writes are a tad faster, due to buffering: about 1.5 MB/s of 4K random write performance for HDDs. SSDs do up to 70 MB/s, and the new Intel G3 should do up to 150 MB/s of 4K random writes.
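Converting those throughput figures back to IOps at a 4 KiB block size (using the numbers quoted above; the conversion itself is just arithmetic):

```python
def iops_from_throughput(mb_s, block_bytes=4096):
    """Convert a random-I/O throughput figure (MB/s) to IOps
    at a given block size."""
    return mb_s * 1_000_000 / block_bytes

for name, rate in [("HDD 4K random write", 1.5),
                   ("SSD 4K random write", 70),
                   ("Intel G3 4K random write (claimed)", 150)]:
    print(f"{name}: ~{iops_from_throughput(rate):,.0f} IOps")
```

So 1.5 MB/s works out to roughly 366 IOps for an HDD, versus tens of thousands for the SSDs.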