So I recently built a new box and decided to go with RAID. I used the RAID setup on the motherboard (ASUS P6T) vs going with a dedicated RAID card because of cost. Right now I have it in a RAID 5 with three 1TB drives (2 Samsung Spinpoint F1s, and a Seagate Barracuda) and I'm getting really slow speeds.
When I test with bst5 I get like 21.6 MB/s. Is this normal? Are RAID 5 setups generally this slow?
Thanks for the benches; I'm not yet convinced you have a major performance problem.
Could you do the "File Benchmark" too? This will test sequential read/write performance on the filesystem; this is the actual performance you will get when applications read from or write to your filesystem in sequential order, sequential meaning it's handled like reading/writing one big file.
The reads should exceed 100MB/s and the writes should exceed 30MB/s, or 80MB/s+ if you enable the 'write caching' option in Intel's ICHxR drivers. Be aware, however, that this option can seriously corrupt your array in case of a crash or power failure. For benchmarking purposes, though, it would be nice to compare the "File Benchmark" results with this option both turned off (the default) and turned on.
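If you want a rough sanity check outside any benchmark tool, a few lines of script can time a sequential write and read. This is just a sketch (the function name and sizes are my own choices, not part of any tool mentioned here), and note that the read pass may be served partly from the OS page cache, so the read figure can look inflated:

```python
import os
import time

def sequential_benchmark(path, total_mb=256, block_kb=64):
    """Write then read `total_mb` MiB sequentially in `block_kb` KiB blocks.
    Returns (write_MBps, read_MBps)."""
    block = b"\x00" * (block_kb * 1024)
    blocks = total_mb * 1024 // block_kb

    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(blocks):
            f.write(block)
        f.flush()
        os.fsync(f.fileno())  # force data to disk, not just the OS buffers
    write_mbps = total_mb / (time.perf_counter() - start)

    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(block_kb * 1024):
            pass
    read_mbps = total_mb / (time.perf_counter() - start)
    return write_mbps, read_mbps
```

Run it against a file on the array with a size well beyond your RAM to keep caching effects down.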
I discovered write caching was already turned on in the driver. I also have the option to turn off Windows write-cache buffer flushing on the device; that was not checked. I did the file benchmark with all the settings and here are my results:
With no write cache:
With write cache on:
With write cache on and Windows write-cache buffer flushing turned off on the device:
It looks like the reads are exceeding 100 MB/s, but without write cache my writes are really slow. Would you say that's accurate?
Yes, write-through performance on parity RAID is slow for any software or hardware RAID engine, not just the Intel one. This is because one write request is actually a multi-phase process: first the engine needs to issue several read requests, then perform an XOR calculation, then write an entire stripe block. In other words, writing to a RAID5 or RAID6 is always going to be slow EXCEPT when you can buffer the writes.
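The read-XOR-write cycle for a small write can be sketched like this. It's a toy model of the parity update, not the actual driver logic; the function names are mine:

```python
def xor_blocks(a, b):
    """XOR two equal-sized byte blocks."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write(stripe, parity, idx, new_block):
    """Update one data block in a RAID5 stripe, write-through style.
    Phases: (1) read old data block, (2) read old parity,
            (3) XOR out the old data and XOR in the new,
            (4) write the new data block and the new parity."""
    old = stripe[idx]                                  # phase 1: read old data
    new_parity = xor_blocks(xor_blocks(parity, old),   # phases 2-3: read parity, XOR
                            new_block)
    stripe[idx] = new_block                            # phase 4: write data...
    return new_parity                                  # ...and return new parity to write
```

The point is that a single logical write costs two reads and two writes, which is why write-through parity RAID crawls.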
This buffering makes sure the engine writes in exactly some 'magical' size; for example, a RAID5 of 4 disks with a 128KiB stripe size has a magical size of (4-1) * 128KiB = 384KiB, called the "full stripe block". If you issue write requests of exactly this size, it'll be very fast. The 'write caching' option in the Intel driver does just that: it buffers the writes so you write to RAM first, then it splits the writes into chunks of this magical size, so your RAID5 write speeds are actually quite decent. It doesn't have to read anything before it can write, as was the case in write-through mode. This buffering mode is called 'write-back' mode: you're not writing directly to disk but to a buffer first.
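The full-stripe arithmetic, plus a toy write-back buffer that only releases full-stripe-sized chunks to the engine, might look like this (the class and names are illustrative, not how the Intel driver is actually implemented):

```python
class WriteBackBuffer:
    """Toy write-back cache: accumulate incoming writes in RAM and hand them
    to the RAID engine only in multiples of the full stripe block, so the
    engine never has to read before it writes."""

    def __init__(self, n_disks, stripe_kib, flush_fn):
        # RAID5 spends one disk's worth of each stripe on parity,
        # so the full stripe block is (n_disks - 1) * stripe size.
        self.full_stripe = (n_disks - 1) * stripe_kib * 1024
        self.buf = bytearray()
        self.flush_fn = flush_fn  # callback standing in for the RAID engine

    def write(self, data):
        self.buf += data
        while len(self.buf) >= self.full_stripe:
            chunk = bytes(self.buf[:self.full_stripe])
            del self.buf[:self.full_stripe]
            self.flush_fn(chunk)  # one fast, read-free full-stripe write
```

With 4 disks and a 128KiB stripe, every chunk handed to the engine is exactly 384KiB, matching the example above.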
As you can see, the read speeds are very decent. In your last benchmark, though, there is some 'contamination': during a second benchmark run it is still processing data from the previous one. You can cope with this by increasing the 'delay' to a high number so the results are more consistent. Without this the results will fluctuate heavily, which gives misleading results: one time it's faster, one time it's slower, just because it's still busy processing data from previous benchmark runs (it tests 0.5KiB - 8KiB block sizes, both read and write, so 30 benchmark runs in total).
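The idea of separating runs so one run's leftover flushing doesn't contaminate the next could be sketched as follows (the function name, run count, and delay are illustrative, not taken from any particular tool):

```python
import time
import statistics

def timed_runs(fn, runs=5, settle_s=10):
    """Time `fn` several times with a settle delay between runs, so writes
    buffered by the previous run can drain before the next one starts.
    Returns the median elapsed time, which resists outlier runs."""
    results = []
    for _ in range(runs):
        t0 = time.perf_counter()
        fn()
        results.append(time.perf_counter() - t0)
        time.sleep(settle_s)  # let write-back caches flush out
    return statistics.median(results)
```

A generous settle delay plus taking the median is what keeps back-to-back runs from bleeding into each other.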