Very poor write performance on RAID 5 Array
I currently have a RAID 5 array consisting of five 500 GB Maxtor drives on an EVGA motherboard with the nForce 680i chipset. I have read around and tried to find information on this problem without any definitive answer, beyond the fact that write speed on this chipset is inherently slow. I also have an article on another computer that lists several "sweet spot" values for block/stripe size etc., but to my knowledge applying those would require a complete wipe of the array. I wanted to see if I could instead troubleshoot without reformatting and possibly find the problem. I ran a utility called ATTO Disk Benchmark and the results are below. Please let me know if you require additional information.
RAID 5 will always be slow to write, especially for writes smaller than a single stripe. This comes from the way RAID 5 works: in your case, with 5 drives, each stripe spreads data across 4 drives, with a parity sector on the 5th. The parity sector is the XOR of the four data portions, so if a drive is lost, the parity combined with the 3 surviving data portions can reconstruct the missing one. Because the parity is calculated over the whole stripe, the entire stripe needs to be known in order to write the parity sector.
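To make that concrete, here is a minimal sketch of XOR parity (the data sizes and drive layout are just illustrative, not specific to your array):

```python
from functools import reduce

def xor_parity(portions):
    """Byte-wise XOR of the data portions of one stripe."""
    def xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))
    return reduce(xor, portions)

def reconstruct(surviving, parity):
    """Rebuild a lost data portion from the 3 survivors plus parity:
    XOR-ing everything that remains cancels out the known portions."""
    return xor_parity(surviving + [parity])

# One stripe on a 5-drive array: 4 data portions + 1 parity.
data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]
parity = xor_parity(data)

# Simulate losing drive 3: the survivors plus parity recover it.
assert reconstruct([data[0], data[1], data[3]], parity) == data[2]
```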
If your write is smaller than one stripe, the controller has to read the old data from the portions of the stripe you aren't overwriting, calculate a new parity sector from the old and new data, and then write both the new data and the new parity. So every small write becomes read, calculate, write, write. If the write covers a full stripe, the controller is rewriting the entire stripe anyway, so it can calculate the parity from the new data alone and skip the extra read (that's why the write speeds climb at the larger transfer sizes in your benchmark). In short, slow small writes are in the nature of RAID 5 and can't really be helped much, though they could be sped up to some degree by a good RAID controller with its own cache and a processor to do the parity calculations, instead of offloading them to the chipset and CPU. Of course, a good hardware RAID controller is quite expensive...
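As a sketch of why the small write still needs those reads: because parity is XOR, updating one data portion lets the controller compute the new parity as old_parity XOR old_data XOR new_data, but it still has to read the old values first. The byte values below are made up purely for illustration:

```python
from functools import reduce

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

# One stripe: the portion being overwritten, plus the rest of the stripe.
old_data   = b"\x01\x02"
new_data   = b"\xff\x00"
other_data = [b"\x10\x20", b"\x30\x40", b"\x50\x60"]

old_parity = reduce(xor, [old_data] + other_data)

# Read-modify-write: read old data + old parity, then compute the
# new parity without touching the other three drives.
new_parity_rmw = xor(xor(old_parity, old_data), new_data)

# Full-stripe recomputation gives the same answer, confirming the
# shortcut, but either way the old values had to be read first.
new_parity_full = reduce(xor, [new_data] + other_data)
assert new_parity_rmw == new_parity_full
```

A hardware controller with battery-backed cache can hide much of this by coalescing small writes into full stripes before they hit the disks, which is part of why such controllers perform so much better than chipset RAID here.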