RAID 10 failed

November 27, 2012 12:42:06 PM

Hello all, I have a RAID 10 issue I was hoping someone out there can help with. I will start with the system:

Gigabyte GA-P35C-DS3R Motherboard
4 Gigs RAM
Q6600 CPU
4 SATA Maxtor 250 Gig drives
2 IDE DVD Burners
1 OCZ SSD
Zotac nVidia GTX 560

Windows 7 Home Premium

System has run fine for over 3 years now (was originally Windows XP OS)

The system is set up with the 4 Maxtor drives in a RAID 10 array, split into 2 partitions (200 GB and 232 GB). Over the years I have lost 2 drives, which was no problem with the RAID 10: just pull the dead drive, replace it, and it auto-rebuilds. Nice, safe, and secure.

Well, today one of the drives (drive 0) somehow became disassociated from the array, which should still be fine with RAID 10. But now, for some reason, my boot partition is coming up as "Failed," while the second just says "Rebuild." I don't understand how one can say failed while the other says rebuild; they both run off the same 4 drives. Of course, all the info/data I need is on the failed partition. Is there anything I can do to get this data back? Currently the BIOS still lists 3 of the drives as being in the RAID and 1 as a non-RAID drive. I have never had more than 1 drive missing from the array before, so I don't understand the failure.

Thank you for any help you can give.

Rich
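(The fault tolerance being described can be sketched as a toy model. This is not how the Intel controller actually works internally; the pairing of disks 0/1 and 2/3 as mirror halves is an assumption about the common stripe-of-mirrors layout.)

```python
# Toy model of a 4-disk RAID 10 (stripe of mirrors). The pairing is an
# assumption: disks 0/1 mirror one stripe half, disks 2/3 the other.
def surviving_copies(block, failed=()):
    """Return the disks that still hold a copy of logical block `block`."""
    pair = (0, 1) if block % 2 == 0 else (2, 3)  # blocks alternate stripe halves
    return [d for d in pair if d not in failed]

# Losing any single disk leaves every block with at least one live copy,
# so the array should only be degraded, never failed:
print(all(surviving_copies(b, failed=(0,)) for b in range(100)))  # True
```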


November 27, 2012 2:07:47 PM

When drive 0 dropped from the array it couldn't support RAID 1+0 anymore and got corrupted; you need 4 drives for RAID 10. The problem with RAID is that unless you are using RAID 1 it is damn difficult to recover data. You could be looking at a really expensive professional job here. You can try the two drives that were mirroring and see if the data is there; if not...
November 27, 2012 3:45:11 PM

I hope you have a good backup!
November 27, 2012 3:58:17 PM

No luck just adding the drive that dropped out back in? Had that happen on a RAID1 and it just did an integrity check and was good to go. You are right, losing one drive in RAID10 shouldn't matter.

Wish I had more insight.
December 3, 2012 11:57:22 PM

J_E_D_70 said:
No luck just adding the drive that dropped out back in? Had that happen on a RAID1 and it just did an integrity check and was good to go. You are right, losing one drive in RAID10 shouldn't matter.

Wish I had more insight.


I see no option to add a drive back into the RAID. Maybe a couple of screenshots can help me explain my problem.

First boot screen:

[Screenshot: first boot screen showing RAID status]

Once I go into the RAID BIOS:

[Screenshot: RAID BIOS volume listing]

I don't understand why my boot volume is failed but the second partition is only degraded. With three drives still in the RAID 10, they should both just be degraded; if both partitions are on the same 4 (now 3) physical disks, how is only one of them failed? How would I re-add the non-RAID disk? Do I need to seek professional help to recover data from the failed partition?
December 4, 2012 10:55:59 AM

Now I'm getting confused (happens to me a lot). If both volumes are mirrors, why is vol 0 not the same size as vol 1 (265.8GB)?

Looks like you are mirrored at the lower level and striped at the higher level, right?

At any rate, could try to delete volume zero, then add the two disks back in. No idea what this will do to your data.
December 4, 2012 1:18:28 PM

J_E_D_70 said:
Now I'm getting confused (happens to me a lot). If both volumes are mirrors, why is vol 0 not the same size as vol 1 (265.8GB)?

Looks like you are mirrored at the lower level and striped at the higher level, right?

At any rate, could try to delete volume zero, then add the two disks back in. No idea what this will do to your data.



Sorry if this reply comes out looking strange, typing from phone.

Volume 0 is my C: partition, Volume 1 is my D: partition. Both partitions are striped and mirrored across the same 4 physical hard disks. This is what confuses me about how one volume can be failed and not both, because both volumes are spread across the same 4 physical disks. I would just wipe the whole thing and start fresh, but I am trying to get info off of volume 0, which is the failed one.

Thanks again for any help you can provide.
December 4, 2012 7:29:10 PM

Ah - I think there's confusion between your logical C: and D: partitions and the RAID controller's vol 0 and vol 1 arrays. Those things are not related.

I can't get to your images from where I am at so trying this from memory. If you are striping at the lower level (disk 0&1 and disk 2&3) then mirroring at the higher level, losing drive 0 (for whatever reason) will fail half of the mirror and leave the other half degraded since it can't support a failure of one of those two drives (since they are striped). All your data is still there (on disk 2&3 stripe).

Have you tried just disconnecting disk 0, powering up to the raid warning section, then powering down, reconnecting disk 0, then powering back up to try to get it to recover? It should rebuild the other half of the 0/1 stripe from the 2/3 mirror side.

I'm really reaching here and could use someone else weighing in :)  Also, don't hold me liable for any data loss. These are just suggestions you need to research and decide upon of your own volition...
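(The distinction being reached for here, RAID 1+0 as a stripe of mirrors versus RAID 0+1 as a mirror of stripes, can be sketched with a small simulation. This is a simplified model, not specific to this controller, and the disk groupings are assumptions.)

```python
from itertools import combinations

def survives_10(failed):
    """RAID 1+0: mirror pairs (0,1) and (2,3); each pair needs one survivor."""
    return all(any(d not in failed for d in pair) for pair in ((0, 1), (2, 3)))

def survives_01(failed):
    """RAID 0+1: stripe sets (0,1) and (2,3), mirrored; one whole set must survive."""
    return any(all(d not in failed for d in sset) for sset in ((0, 1), (2, 3)))

# Both layouts survive any single-disk loss, but they differ on double losses:
doubles = list(combinations(range(4), 2))
print(sum(survives_10(set(f)) for f in doubles))  # 4 of 6 double failures survived
print(sum(survives_01(set(f)) for f in doubles))  # 2 of 6 double failures survived
```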
December 4, 2012 9:41:43 PM

J_E_D_70 said:
Ah - I think there's confusion between your logical C: and D: partitions and the RAID controller's vol 0 and vol 1 arrays. Those things are not related.

I can't get to your images from where I am at so trying this from memory. If you are striping at the lower level (disk 0&1 and disk 2&3) then mirroring at the higher level, losing drive 0 (for whatever reason) will fail half of the mirror and leave the other half degraded since it can't support a failure of one of those two drives (since they are striped). All your data is still there (on disk 2&3 stripe).

Have you tried just disconnecting disk 0, powering up to the raid warning section, then powering down, reconnecting disk 0, then powering back up to try to get it to recover? It should rebuild the other half of the 0/1 stripe from the 2/3 mirror side.

I'm really reaching here and could use someone else weighing in :)  Also, don't hold me liable for any data loss. These are just suggestions you need to research and decide upon of your own volition...



I have 4 physical disks in the machine, and all 4 are part of one RAID 10 as defined by the RAID BIOS. When I set up Windows (originally XP, then upgraded to Windows 7) I subdivided the entire 465 GB array (made up of 4 250 GB Seagate disks) into a 200 GB partition (C:, the boot drive) and a 265 GB partition (D:, the storage drive). I have lost individual disks before (3 of them over a 4-year period); I would just pull the disk, send it back to Seagate for replacement, receive the replacement, pop it back in, and reboot, and it would rebuild automatically. I have tried disconnecting the unassociated disk and then reconnecting it, with no re-association or rebuild. The drive seems fine; no drive failure. And don't worry, I don't hold anyone responsible for my data loss but me. I am the one who did not back up regularly; I am the one who may have lost the only copies of my kids' baby pictures.

Rich
December 5, 2012 3:58:48 AM

OK, I think I see what you did there (I've just never seen it done that way - sorry!). I've usually seen one volume at the RAID level, then partitioned in the OS.

I still come back to this: there's no way one drive dropping out should be able to hose either volume. It just doesn't make any sense. I realize this doesn't help you.

Could you do a Windows install on a new drive, then copy the contents of D: off to an external? You might be able to use Raid to Raid or a program like that to mount both volumes even if the controller thinks they're failed, and recover it all.
December 5, 2012 10:33:04 AM

I've seen some posts where folks booted to a Linux LiveCD, mounted a supposedly failed array, and got their data back. Might be worth a shot.
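(What such a recovery effectively does can be sketched as a toy reconstruction in Python. The byte strings stand in for disk images, and the chunk size and mirror pairing are assumptions; real recovery tools read the controller's on-disk metadata to learn the actual layout.)

```python
CHUNK = 4  # toy chunk size; real arrays stripe in much larger chunks

def reconstruct(disks, failed=()):
    """Rebuild the logical volume from whichever mirror copy of each chunk
    survives, assuming stripe-of-mirrors pairs (0,1) and (2,3)."""
    out = bytearray()
    size = max(len(d) for d in disks.values())
    for i in range(size // CHUNK):
        for pair in ((0, 1), (2, 3)):
            src = next(d for d in pair if d not in failed)  # pick a live copy
            out += disks[src][i * CHUNK:(i + 1) * CHUNK]
    return bytes(out)

# Four tiny "disks"; disk 0 is gone, so its mirror (disk 1) fills in:
disks = {0: b"", 1: b"AAAACCCC", 2: b"BBBBDDDD", 3: b"BBBBDDDD"}
print(reconstruct(disks, failed=(0,)))  # b'AAAABBBBCCCCDDDD'
```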
December 5, 2012 11:13:56 AM

J_E_D_70 said:
OK, I think I see what you did there (I've just never seen it done that way - sorry!). I've usually seen one volume at the RAID level, then partitioned in the OS.

I still come back to this: there's no way one drive dropping out should be able to hose either volume. It just doesn't make any sense. I realize this doesn't help you.

Could you do a Windows install on a new drive, then copy the contents of D: off to an external? You might be able to use Raid to Raid or a program like that to mount both volumes even if the controller thinks they're failed, and recover it all.


I have already gotten all my data off the D: drive by booting with an Ubuntu live CD and copying everything to an external USB drive. A friend who is much better in the Linux world than I am is stopping over tonight to see if we can mount the failed volume under Ubuntu and get the files off.
December 8, 2012 11:51:26 AM

Hello klondike686,

I just set up a new computer with a configuration similar to yours.

In the process I learned a lot about this kind of motherboard RAID, and I was searching for how to recover when the motherboard fails instead of the disks when I came across your post.

What I understand about this technology so far is that the menu in your screenshots is the BIOS of the "RAID controller," which is only used to set up the desired RAID level.
This is NOT a hardware RAID solution, which means that all other configuration is done in Windows via Intel's Rapid Storage Technology software. This kind of setup is also known as "fake RAID."

It is different from pure software RAID in that it uses an onboard BIOS to set configuration parameters, and it allows RAID 10, unlike Windows software RAID, which only supports up to RAID 5.

This means that in order to recover, you need a running Windows with the Rapid Storage Manager software installed.

But let us first have a look at your situation. RAID 10 means that you have a combination of RAID 0 striping (= performance) and RAID 1 mirroring (= redundancy).

In your case one disk failed, and your RAID set is now stating that it is degraded. That simply means that since data cannot be written to both drives of a pair while one is not working, the array runs degraded until the failed drive is replaced.

So from what I understood, you simply have to replace the failed disk. What I don't understand from your description is this: what is the difference between the currently failed disk and the failed disks you had before?