In the first two parts of this series, we looked at setups running RAID 0, 1, 0+1, 5 and 6, with three to eight hard drives. In this final article, we analyze performance comparisons at different stripe sizes on RAID levels 0, 5 and 6.
But when you say that the controller is limited to 500 MB/s... if we use the new Samsung Spinpoint F1 disk, which can deliver 91 MB/s (about 50% faster than the Samsung HD321 disk), do we saturate the controller with only 4 disks?
So instead of using 8 drives in RAID 0, could we achieve the same results with 8 Spinpoints in RAID 0+1?
And what happens if we spread the disks across 2 controllers instead of 1?
Especially with 2 Opteron CPUs, where each one communicates with its own controller...
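As a rough sanity check on the question above, here is a back-of-the-envelope calculation (a sketch only: it assumes purely sequential transfers at the quoted per-drive rates and ignores bus and protocol overhead):

```python
import math

# Controller ceiling discussed in the article (Areca, ~500 MB/s).
CONTROLLER_LIMIT_MBPS = 500

def drives_to_saturate(per_drive_mbps, limit=CONTROLLER_LIMIT_MBPS):
    """Smallest number of drives whose combined streaming rate
    reaches the controller's throughput limit."""
    return math.ceil(limit / per_drive_mbps)

print(drives_to_saturate(91))  # Spinpoint F1 at ~91 MB/s -> 6 drives
print(drives_to_saturate(60))  # slower drive at ~60 MB/s -> 9 drives
```

At ~91 MB/s per drive it actually takes six drives, not four, to hit 500 MB/s under these idealized assumptions; real-world overhead would shift the numbers somewhat.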
Sort of reaffirms what I already knew about stripe sizes, hence why I chose 64 kB for my RAID 5. I'm a little concerned about the Areca's limitation of 500 MB/s, as my card is an Adaptec and I would expect it to top out at less than that.
I believe this statement: "The stripe size also defines the amount of storage capacity that will at least be occupied on a RAID partition when you write a file." is incorrect with regard to Windows. The cluster size sets the minimum on-disk allocation for a file, not the stripe size. My two-drive RAID 0 setup occupies the same space as it does when I clone it off to a single drive. The same goes for all the Windows servers at work, where we set the data drive cluster size to 64 kB and the stripe size under RAID 5 is also 64 kB. We don't see sizes vary due to stripe size, just performance.

As I understand it, the stripe size is mostly a logical block of clusters controlled by the XOR parity engine. The drive still reads and writes clusters of data, but the stripe size controls the fetch size for the vertical block of data the XOR engine pulls. It might be interesting to see what happens when the cluster size is varied within various stripe sizes. Most controller performance is limited by the XOR engine's ability to maintain parity lock on a certain size of data (4 kB to 256 kB, for example). I believe there is more overhead in cluster size management/decoding than in the stripe.

I haven't changed my desktop C: drive cluster size because that would waste a lot of space for little gain, but on servers the larger cluster increases performance. Just a thought.
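The cluster-size point above can be illustrated with a small sketch (my own helper, assuming NTFS-style allocation with a 4 kB default cluster): on-disk space is consumed in whole clusters, and the RAID stripe size underneath plays no role in the result.

```python
import math

def allocated_size(file_bytes, cluster_bytes=4096):
    """On-disk space consumed by a file: allocation happens in whole
    clusters, regardless of the RAID stripe size beneath the filesystem."""
    return math.ceil(file_bytes / cluster_bytes) * cluster_bytes

# A 2 kB file costs one 4 kB cluster whether the stripe is 16 kB or 128 kB:
print(allocated_size(2048))         # -> 4096
# With 64 kB clusters (as on the servers described above) it costs far more:
print(allocated_size(2048, 65536))  # -> 65536
```

This matches the observation that changing the stripe size changes performance but not occupied space, while changing the cluster size changes both.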
The article falsely states that a smaller stripe size conserves disk space when storing many small files. This is a common misconception, and Tom's Hardware should not disseminate nonsense like that.
Since the RAID stripe size is in no way known to the OS (it is entirely transparent to it), there is no such thing as a 2 kB file occupying an entire 64 kB stripe. As the earlier parts of this series correctly point out, RAID distributes blocks of data across several disks, so it fills the stripes entirely with whatever data the OS hands it, regardless of file sizes.
This statement would only be true if someone changed the NTFS cluster size to a value larger than 4 kB.
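A minimal sketch of why the stripe size is invisible to file allocation: mapping logical byte addresses onto disks in a simple RAID 0 layout (illustrative only, not any real controller's logic; sector size, stripe size, and disk count are assumed values).

```python
def stripe_location(lba, stripe_size=65536, sector=512, ndisks=4):
    """Map a logical block address to (disk, byte offset on that disk)
    in a plain RAID 0 layout. Consecutive stripes rotate across disks;
    every byte of every stripe is usable, so a small file never
    'reserves' a whole stripe for itself."""
    byte_offset = lba * sector
    stripe_index = byte_offset // stripe_size
    disk = stripe_index % ndisks
    offset_in_stripe = byte_offset % stripe_size
    return disk, (stripe_index // ndisks) * stripe_size + offset_in_stripe

# Two adjacent 4 kB clusters land inside the same 64 kB stripe:
print(stripe_location(0))    # (0, 0)
print(stripe_location(8))    # (0, 4096) -- same disk, same stripe
print(stripe_location(128))  # (1, 0)    -- next stripe, next disk
```

The array simply exposes one linear block device; the filesystem packs its clusters into it end to end, and the stripe boundaries never create gaps.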
The rule of thumb for stripe size in 2007 is: the bigger the better.