RAID 5 array *extremely* slow and I can't figure out why

First, the hardware:
Gigabyte GA-X58A-UD3R Motherboard
Intel i7 980x
5x 2TB Seagate Barracuda XT Harddrives (AHCI mode)
12GB G.Skill RAM
OS: Win7 x64 Pro

I have these 5 brand-new disks in a GPT RAID 5 array (yeah, I know the potential consequences of having such a large array in RAID 5, but I like this option best), just to use as a single large data drive.

However, the performance so far has been absurd: the array had been initializing 24 hours a day for a little over a week and was at 98% complete when, while I was trying some new software, the system crashed and put it back at 0%.

Anyways, file transfers to the array while it has been initializing average ~9MB/s, when each disk is rated at ~138MB/s. I started looking into it and figured it was because IRST showed the Write-back cache as Disabled and wouldn't let me select the Enable option (grayed out). However, in Device Manager the volume showed as enabled, so I disabled it in Device Manager and restarted, then enabled it again and restarted. IRST now actually shows Write-back cache as Enabled, but there has been no improvement.

So I'm really hoping someone can explain my problem:
- Is it slow only because it's still initializing, or should I expect similarly slow speeds once it's done?
- Is it due to poor raid performance of the ICH10R?
- Is it because these drives are SATA 6.0Gb/s but are connected to SATA 3.0Gb/s connections on the motherboard?
- Or is it still related to the write-back cache? (I'm still wondering because this post sounded similar to my problem, in that the poster was getting 5MB/s until enabling write-back, which raised it to 80MB/s)
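On the SATA question, here's a back-of-envelope check I sketched in Python (the numbers are the rated figures from my hardware list above, not measurements, so treat this as a rough sanity check only):

```python
# Back-of-envelope check: could the SATA 3.0Gb/s port cap a single disk?

def sata_usable_mb_s(link_gbps: float) -> float:
    """Payload bandwidth of a SATA link: 8b/10b encoding carries
    8 payload bits per 10 line bits, so divide the line rate by 10."""
    return link_gbps * 1000 / 10  # Gb/s line rate -> MB/s of payload

link = sata_usable_mb_s(3.0)  # ~300 MB/s usable per port
disk = 138                    # rated sequential MB/s for the Barracuda XT

# The 3Gb/s link comfortably exceeds one disk's rated sequential speed,
# so the 6Gb/s-drive-on-3Gb/s-port mismatch shouldn't be the bottleneck.
assert link > disk

# RAID 5 does have an inherent small-write penalty, though: each small
# write costs roughly 4 I/Os (read old data, read old parity, write new
# data, write new parity), which hurts writes even on a healthy array.
write_penalty_ios = 4
```

So by this math the 3Gb/s ports shouldn't matter for spinning disks, though the RAID 5 write penalty could explain writes landing well below a single disk's speed.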

Thanks for any advice. If no one knows, I'll just break the raid array and figure something else out.

  1. Quote:
    The benchmark is OK for an array which is still initializing.

    Using an array that is not yet initialized slows down both the initialization and whatever you use it for. It is best to leave the array idle until it completes the initialization. The last time we rebuilt a 6x2TB RAID 5, it took about 24 to 48 hours (cannot recall more precisely) with no user activity on the array.

    Thanks for the info - it looks like the initialization was the issue. After 91 hours, the initialization finished, and now I'm getting average write speeds of ~60MB/s and read speeds reaching an astounding 500MB/s. Even while writing to the array, the read speed only drops to ~400MB/s or so.

    I absolutely wasn't expecting SSD-grade read speeds without a dedicated raid card, so I'm pretty happy with this now.
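    For what it's worth, the ~500MB/s read figure lines up with simple striping math. RAID 5 spreads data across all members with one parity chunk per stripe, so sequential reads can approach the data-disk count times the per-disk speed. A quick sketch (assuming the ~138MB/s rated per-disk figure from the original post):

```python
# Rough RAID 5 sequential-read ceiling for the 5-disk array in this thread.
# Assumes each member sustains its rated ~138 MB/s (an assumption; real
# sustained speed varies across the platter).
disks = 5
per_disk_mb_s = 138

# Each stripe holds (disks - 1) data chunks plus one parity chunk, so a
# large sequential read delivers roughly (disks - 1) disks' worth of data.
read_ceiling = (disks - 1) * per_disk_mb_s
print(read_ceiling)  # 552 MB/s -> the observed ~500 MB/s is close to this
```

    So the "SSD-grade" reads aren't magic, just four data disks streaming in parallel.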

  2. You bought an extra drive or two for *just in case* right?
  3. popatim said:
    You bought an extra drive or two for *just in case* right?

    Haha, well I have an external 2TB as well that I'll be backing up critical files to, but since this is just a data drive, I think I'll be able to tolerate one drive failure until a new one ships. If the ICH10R supported RAID 6, I'd prefer that, but my options are limited and I don't feel like dropping $650 on a dedicated RAID card just yet.