- Have 6 drives (3 mirrors), and one day one mirror was "lost".
- I cannot read from either drive of the lost mirror. I cannot access the data, but I can still see both drives in my Matrix console and the BIOS.
I have been using different recovery software to try to at least get the data off one of the drives, but it is impossible.
Does anyone know of any tools that can read or repair corrupt RAID drives?
Matrix marks one of the drives with an X and says it failed, but the other drive should still be OK, and I cannot read from that one either. Could the problem be that one drive got corrupted and then mirrored the corruption to the other, healthy one, so that one drive is damaged and the other mirror is gone because it mirrored the damaged data?
This is the first time ever something wicked like this has happened, and I have been running RAID for over 15 years now?!
I have now read out the info from both drives, separately and together.
All I get is this:
I 07/27/09 12:00:10 Active@ Partition Recovery DEMO Version
I 07/27/09 12:01:23 Device scan started on Hard Disk 2 (Intel Raid 1 Volume 232.9 GB)
I 07/27/09 12:01:23 Scanning Intel Raid 1 Volume ...
W 07/27/09 14:39:05 Bad (unreadable) sectors detected from 0 to 488390655 on Hard Disk 2.
I 07/27/09 14:39:05 Device scan completed [found 0 partitions and 0 files]
I mean, how can both drives in a mirror go down like this at the same time?!
Any tools in particular that can fix unreadable sectors?
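(For context on what such tools actually do: sector-level imagers like GNU ddrescue don't "fix" bad sectors, they copy the disk block by block, zero-filling or skipping regions the drive refuses to read instead of aborting the whole copy. A minimal sketch of that idea, with a caller-supplied read function so the unreadable sectors can be simulated; all names here are hypothetical, not any real tool's API:)

```python
SECTOR = 512  # bytes per sector, typical for these drives

def image_disk(read_sector, total_sectors):
    """Copy a disk sector by sector, zero-filling any sector that
    cannot be read instead of giving up on the whole device.

    read_sector(n) returns SECTOR bytes or raises IOError.
    Returns (image_bytes, list_of_bad_sector_numbers).
    """
    image = bytearray()
    bad = []
    for n in range(total_sectors):
        try:
            image += read_sector(n)
        except IOError:
            image += b"\x00" * SECTOR  # leave a hole where the bad sector was
            bad.append(n)
    return bytes(image), bad

# Simulated drive: sectors 3 and 4 are unreadable.
def fake_read(n):
    if n in (3, 4):
        raise IOError("unreadable sector")
    return bytes([n]) * SECTOR

img, bad = image_disk(fake_read, 8)
print(bad)  # -> [3, 4]
```

The point of the zero-fill is that the rest of the image stays aligned, so a filesystem-recovery pass can still find whatever partitions and files survive around the holes. When the log reports bad sectors from 0 to the very last sector, though, the controller is refusing the whole volume, which points at RAID metadata rather than the platters themselves.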
I recently encountered a similar problem with a RAID 1 array (two 500GB WD Caviar) on Intel Matrix Storage [Intel(R) 82801GR/GH SATA RAID Controller] in Win XP x86 SP2. And I fixed it. Although I can't imagine that this will help Hellpatrol, perhaps someone else will find the information useful.
Basically, what I discovered is that Intel Matrix Storage Console may not correctly report which member of a RAID 1 array has failed. In my case, I was immediately suspicious because the console reported the array status as failed with one drive (port 0) failed and one drive (port 1) normal. With a single drive failure, I'd expect the array status to be degraded, not failed.
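That expectation can be stated mechanically: RAID 1 survives the loss of any single member, so one dead drive should only degrade the array, and "failed" should mean no readable member is left. A toy sketch of that rule (hypothetical names, not the actual Matrix Storage logic):

```python
def raid1_status(members):
    """Derive array status from member health flags (True = healthy).

    RAID 1 mirrors the full data on every member, so the array is
    only lost when every member is lost.
    """
    healthy = sum(members)
    if healthy == len(members):
        return "normal"
    if healthy >= 1:
        return "degraded"  # data still fully available from the mirror
    return "failed"        # no readable member left

print(raid1_status([False, True]))  # -> degraded
```

By that rule, a console reporting "failed" while simultaneously calling one member "normal" is contradicting itself, which is exactly what made me suspicious.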
When I replaced the "failed" drive, the console reported the replacement (port 0) as a non-RAID drive, the old RAID member (port 1) as normal, and one RAID member as missing.
After some deliberation, I installed the "failed" drive (which had been port 0 in the Win XP box) in a server running Win Server 2008 x64. I found that all of the data was available and that the drive seemed OK. Although I did encounter a few folders with missing properties while copying the data, the files therein seemed OK.
At that point, I went ahead and broke the RAID array in the Win XP box. After doing so, I could see the data. However, the machine bluescreened when I attempted a backup.
Upon installing the "normal" drive (which had been port 1 in the Win XP box) in the server, I could immediately hear that something was very wrong with the drive. When I attempted to copy from it, it whined very loudly, and the transfer rate dropped dramatically after a few minutes. FWIW, CPU utilization and page fault rate appeared inversely proportional to transfer rate. I gave up after a few GB.
FWIW, I suspect that using WD Caviar drives in a RAID array was a dumb idea. So far, the Win XP box seems fine with RAID 1 on two 1 TB WD RE3 drives. With any luck, it'll be retired before the new drives fail.