
RAID 5 setup really slow

November 9, 2009 3:47:23 AM

So I recently built a new box and decided to go with RAID. I used the RAID setup on the motherboard (ASUS P6T) instead of a dedicated RAID card because of cost. Right now I have it in a RAID 5 with three 1TB drives (two Samsung Spinpoint F1s and a Seagate Barracuda) and I'm getting really slow speeds.

When I test with bst5 I get like 21.6 MB/s. Is this normal? Are RAID 5 setups generally this slow?

November 9, 2009 7:20:56 AM

You're using ICH10R onboard RAID; you should have mentioned that in your post.

Post some benchmarks to demonstrate your issue. HDTune Pro's normal surface benchmark plus the random access benchmark would do.
November 9, 2009 4:23:20 PM

Well, first you're mixing two different hard drive models with different firmware, etc., and second you're using onboard RAID for RAID 5, which is probably terribly slow, as you're seeing.

If you have the ability, use RAID 10.
November 9, 2009 5:07:48 PM

The problem is the Seagate HDD.
November 9, 2009 6:19:50 PM

Cool, thanks for the responses! So if I put the two Samsung drives in RAID 0, would that be considerably faster? Also, I didn't know RAID 10 was an option with three drives?

Here are the benchmarks:
November 9, 2009 7:26:02 PM

Thanks for the benches; I'm not convinced yet that you have a major performance problem.

Could you do the "File Benchmark" too? This tests sequential read/write performance on the filesystem; this is the actual performance you will get when applications read or write to your filesystem in sequential order, sequential meaning it's handled like reading/writing one big file.

The reads should exceed 100 MB/s; the writes should exceed 30 MB/s, or 80 MB/s+ if you enable the 'write caching' option in Intel's ICHxR drivers. Be aware, however, that this option can seriously corrupt your array in the event of a crash or power failure. But for the purpose of benchmarking it may be nice to compare the "File Benchmark" results with this option turned off (default) and on.
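If you want a quick sanity check outside HDTune, a minimal Python sketch like this can time a sequential write and read. The file path and sizes are arbitrary choices, and the read pass may be inflated by the OS cache, so treat it as a rough number only:

```python
import os
import time

PATH = "testfile.bin"            # put this on the array under test (assumption)
SIZE = 512 * 1024 * 1024         # 512 MiB total
BLOCK = 1024 * 1024              # 1 MiB chunks

buf = os.urandom(BLOCK)

# Sequential write, with fsync so the timing includes the flush to disk.
start = time.time()
with open(PATH, "wb") as f:
    for _ in range(SIZE // BLOCK):
        f.write(buf)
    f.flush()
    os.fsync(f.fileno())
write_mbs = SIZE / (time.time() - start) / (1024 * 1024)

# Sequential read. This may be served partly from the OS page cache,
# so the read figure is optimistic.
start = time.time()
with open(PATH, "rb") as f:
    while f.read(BLOCK):
        pass
read_mbs = SIZE / (time.time() - start) / (1024 * 1024)

os.remove(PATH)
print(f"sequential write: {write_mbs:.1f} MB/s, read: {read_mbs:.1f} MB/s")
```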

Regards, sub
November 11, 2009 5:46:16 AM

Awesome, thank you for your help.

I discovered write caching was already turned on in the driver. I also have the option to turn off Windows write-cache buffer flushing on the device. That was not checked. I did the file benchmark with all the settings and here are my results:

With no write cache:


With write cache on:


With write cache on and windows not buffer flushing the device:


It looks like the reads are exceeding 100 MB/s, but without write cache my writes are really slow. Would you say that's accurate?
November 11, 2009 8:14:16 PM

Yes, write-through performance on parity RAID is slow for any software or hardware RAID engine, not just the Intel one. This is because one write request is actually a multi-phase process: first the engine needs to issue several read requests, then do an XOR calculation, then write an entire stripe block. In other words, writing to a RAID 5 or RAID 6 is always going to be slow EXCEPT when you can buffer the writes.
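To make that multi-phase process concrete, here's a toy Python sketch of the classic read-modify-write path for one small write on RAID 5. This illustrates the general technique, not Intel's actual driver logic:

```python
def xor_blocks(a: bytes, b: bytes) -> bytes:
    """XOR two equally sized blocks byte by byte."""
    return bytes(x ^ y for x, y in zip(a, b))

def small_write_raid5(disks, data_disk, parity_disk, stripe, new_data):
    """Read-modify-write: updating ONE data block costs two reads,
    two XORs, and two writes -- that's the parity write penalty."""
    old_data = disks[data_disk][stripe]        # read 1
    old_parity = disks[parity_disk][stripe]    # read 2
    # new parity = old parity XOR old data XOR new data
    new_parity = xor_blocks(xor_blocks(old_parity, old_data), new_data)
    disks[data_disk][stripe] = new_data        # write 1
    disks[parity_disk][stripe] = new_parity    # write 2

# Toy 3-disk array, one 4-byte stripe block per disk, parity on disk 2.
disks = [[b"\x00" * 4], [b"\xff" * 4], [b"\xff" * 4]]   # parity = d0 XOR d1
small_write_raid5(disks, data_disk=0, parity_disk=2, stripe=0,
                  new_data=b"\x01\x02\x03\x04")
# Parity still equals data0 XOR data1 after the update:
assert disks[2][0] == xor_blocks(disks[0][0], disks[1][0])
print(disks)
```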

This buffering makes sure the engine writes in exactly some 'magical' size; for example, a RAID 5 of 4 disks with a 128 KiB stripe size has a magical size of (4-1) * 128 KiB = 384 KiB, called the "full stripe block". If you issue write requests of exactly this size, it'll be very fast. The 'write caching' option in the Intel driver does just that: it buffers the writes so you write to RAM first, then it splits the writes into chunks of this magical size, so your RAID 5 write speeds are actually quite decent. It doesn't have to read anything before it can write, as is the case in write-through mode. This buffering mode is called 'write-back' mode: you're not writing directly to disk but to a buffer first.
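A small sketch of that full-stripe arithmetic and the write-back coalescing idea, using the example values from above (the function names are illustrative, not a real driver API):

```python
KIB = 1024
DISKS = 4                        # example from the post
STRIPE = 128 * KIB               # 128 KiB stripe size

# The "magical" full stripe block: data disks only, parity excluded.
FULL_STRIPE = (DISKS - 1) * STRIPE          # (4-1) * 128 KiB = 384 KiB
print(FULL_STRIPE // KIB, "KiB")            # -> 384

def flush_full_stripe(block: bytes) -> None:
    # A real engine would compute parity and write data + parity across
    # all member disks in one pass, with no pre-reads needed.
    print(f"full-stripe write: {len(block) // KIB} KiB")

buffer = bytearray()

def buffered_write(chunk: bytes) -> None:
    """Write-back mode: accumulate in RAM, flush only in full stripes."""
    buffer.extend(chunk)
    while len(buffer) >= FULL_STRIPE:
        flush_full_stripe(bytes(buffer[:FULL_STRIPE]))
        del buffer[:FULL_STRIPE]

# Six 128 KiB application writes coalesce into two 384 KiB full-stripe writes.
for _ in range(6):
    buffered_write(b"\x00" * STRIPE)
```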

As you can see, the read speeds are very decent. Though in your last benchmark you have some 'contamination': during a later benchmark run the array is still processing data from a previous one. You can cope with this by increasing the 'delay' setting to a high number so the results are more consistent. Without this the results will fluctuate heavily, which is misleading: one run is faster and another slower just because the array is still busy with data from earlier runs (the benchmark tests block sizes from 0.5 KiB to 8 MiB, both read and write, for a total of 30 runs).