Since people often say RAID0 improves only throughput, I decided to write a benchmarking article showing random I/O performance when striping two disks. It shows the relation between stripe size and queue depth.
It also shows a healthy increase in IOps when properly configured. It seems that throughput actually prefers smaller stripe sizes, while random I/O prefers larger stripe sizes, the opposite of what many people think.
It should be noted, however, that these tests were conducted on FreeBSD, not Windows. Windows onboard RAID drivers often employ low-level optimizations such as always reading the whole stripe block even when only part of it was requested, which would lower random I/O performance at larger stripe sizes. On FreeBSD everything works as expected; in some cases the performance increase is close to the theoretical 100%. Quite remarkable.
Alright, here is the URL with all the nice graphs:
http://submesa.com/data/raid/geom_stripe
I've added my own comments in the article, but I'd love to discuss the interpretation of the results. They appear to confirm my theory that a smaller stripe size hurts random I/O performance rather than helping it. Interestingly, a stripe size of 1 MiB gave me higher results than a 128 KiB stripe; I would like to know why.
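One possible explanation, sketched below (my own illustration, not from the article): with RAID0, a random request only leaves the other disk free for parallel work if it lands entirely inside one stripe. The larger the stripe relative to the request, the smaller the chance a request straddles a stripe boundary and occupies both disks. The helper names here are hypothetical:

```python
# Sketch of RAID0 stripe mapping and the boundary-crossing probability.
# Illustrative only; function names are my own, not from geom_stripe.

def stripe_map(offset, stripe_size, ndisks=2):
    """Map a logical byte offset to (disk index, offset on that disk)."""
    stripe_no = offset // stripe_size
    disk = stripe_no % ndisks
    disk_offset = (stripe_no // ndisks) * stripe_size + offset % stripe_size
    return disk, disk_offset

def split_fraction(request_size, stripe_size):
    """Fraction of uniformly placed requests that straddle a stripe
    boundary, forcing both disks to service one request."""
    if request_size >= stripe_size:
        return 1.0
    return (request_size - 1) / stripe_size

# A 4 KiB random read splits across two disks far more often on a
# small stripe than on a large one.
for ss in (128 * 1024, 1024 * 1024):
    print(f"{ss // 1024} KiB stripe: {split_fraction(4096, ss):.2%} of reads split")
```

For a 4 KiB random read this gives roughly 3% split requests at 128 KiB stripes versus about 0.4% at 1 MiB, which would fit the higher IOps I measured at the larger stripe size, assuming split requests are what costs the parallelism.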