ICH10R RAID 5 'failed' instead of 'degraded'

One of the drives on my 3x1TB RAID5 array went down a while ago. I just got the replacement and it seems to be fine, however I can't get the RAID controller to do anything useful with it.

The volume is marked as 'failed' instead of 'degraded'. The manual for Intel Matrix Storage Manager says it should only be marked as 'failed' when more than one member drive has failed, however the software clearly indicates that there is only one missing drive.

Intel Matrix Storage Manager doesn't seem to be giving me any option to include the drive (which is shown under 'Non-RAID Disks') in the array, nor is it automatically rebuilding the array or anything like that.

Is there any way to force it to return to 'degraded' and attempt a rebuild?

Thanks in advance.
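For context, the 'degraded' vs. 'failed' distinction comes from how RAID 5 stores XOR parity across its members: any single missing block can be rebuilt from the survivors, but losing two members at once makes the stripe unrecoverable. A minimal Python sketch of that idea (illustrative only, not Intel's implementation):

```python
# Sketch: why a RAID 5 array with ONE missing member is only "degraded".
# RAID 5 stripes data plus an XOR parity block across the members, so any
# single lost block can be rebuilt from the surviving blocks.
from functools import reduce

def xor_blocks(blocks):
    """XOR a list of equal-length byte blocks together."""
    return reduce(lambda a, b: bytes(x ^ y for x, y in zip(a, b)), blocks)

# Two data members' blocks for one stripe (toy 4-byte blocks)
d0, d1 = b"\x01\x02\x03\x04", b"\x10\x20\x30\x40"
parity = xor_blocks([d0, d1])     # parity block, written to the third member

# One member fails: its block is rebuilt from the survivors
rebuilt_d1 = xor_blocks([d0, parity])
assert rebuilt_d1 == d1           # single failure -> recoverable ("degraded")

# Lose a SECOND member and only one block per stripe survives; a single
# XOR equation cannot recover two unknowns, hence the volume goes "failed".
```

So the controller marking a one-drive-down volume as 'failed' contradicts its own documented state model, which is what makes the situation in the question look like metadata corruption rather than actual data loss.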
5 answers
  1. The Intel software RAID stuff is crap!
    I've been struggling with an Asus P577D and an ICH10R RAID 10 array for a while now. The array fails after
    a while (~14 days), and it's always a different drive. Most of the time I'm lucky and it's only degraded, but
    sometimes it crashes so badly that the array isn't bootable anymore.
    The solution for me was to delete the array and define it again with the exact same parameters as before.
    I think the array definition then gets rewritten to the disk that was damaged.
  2. Use Intel Matrix Storage Manager 8.8

    I have 4 servers with an ICH10R. With ANY other software version, higher or lower, my drives randomly and incorrectly show up as failed in the array during times of high disk activity.

    Since sticking with 8.8, everything has worked perfectly, except for those stupid disk LEDs. Every time I boot up, all the LEDs are flashing like it's rebuilding or something, but it's just the LEDs. I right-click each drive once and click "Activate LED/Flash Drive's LED" and they all go away. Luckily I don't reboot often, but that's my workaround anyway.

    It took sending back a few drives to figure this one out. It might have something to do with the fact that they're all 1TB Samsung Spinpoint F1s; I'm not sure.

    I thought it was just my self built SuperMicro servers experiencing this problem.

    Another word of advice: use RAID 10. It's WAY faster than any other RAID level.
  3. I forgot to mention: for the new drive you just put in to replace the old one, open Matrix Storage Manager, right-click that drive, and blink its LED to make sure you've got the right one selected, then click "Mark as spare". It will auto-rebuild after that.
    I have 6 drives: 4 in RAID 10, 1 marked as spare, and 1 for backups.

    The array will always rebuild itself on its own if you've got a spare in there.
  4. I hope he doesn't still need help after a year
  5. This thread is a year old with no response back from the original poster. There is no need to bring up an old/dead thread.

    This topic has been closed by Tecmo34