
P5B: how many drives can die in RAID 10? What about RAID 0+1?

April 21, 2007 5:30:18 PM

Well, I have a P5B Deluxe motherboard that has RAID 10 (it says it has that, at least). I think the motherboard company made a mistake and put RAID 0+1 where it was supposed to be RAID 10, aka the company lied.

My question: how many drives must fail in RAID 10 to make the whole array fail?

How many drives must fail in RAID 0+1 to make the whole array fail?


April 21, 2007 7:41:12 PM

The P5B Deluxe motherboard has 2 RAID controllers. One is the Intel ICH8R southbridge, part of the P965 chipset. It supports RAID 0, 1, 5, and 10. This is a true RAID 10, not a RAID 0+1. The other RAID controller is a 2-port JMicron controller, supporting RAID 0, 1, and JBOD.

In both RAID 10 and RAID 0+1, the number of drives that must fail before the entire array is lost depends on which drives fail. In the worst case, just 2 failed drives can take down the entire array; in the best case, up to half the drives in the array can fail and the array will still be operational.

RAID 10 first pairs drives into 2-drive RAID 1 groups, and then stripes across those RAID 1s. You can theoretically lose half of the drives and not lose the array, as long as no RAID 1 pair loses both of its drives. If any entire RAID 1 pair fails, the entire array is lost.

RAID 0+1 first stripes across half of the drives in the array to make a RAID 0, and then mirrors that entire RAID 0 onto the other half of the drives. You can theoretically lose half of the drives without losing the array, as long as all the failed drives are confined to one RAID 0 group. If drives fail in both RAID 0 groups, the entire array is lost.
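
Here's a minimal Python sketch of those two survival rules. The 4-drive layout, drive numbers, and helper names are made up purely for illustration; this isn't taken from any controller's firmware.

def raid10_survives(failed, mirror_pairs):
    # RAID 10 survives as long as no RAID 1 pair has lost both of its drives.
    return all(not pair <= failed for pair in mirror_pairs)

def raid01_survives(failed, stripe_groups):
    # RAID 0+1 survives as long as at least one RAID 0 group is completely intact.
    return any(not (group & failed) for group in stripe_groups)

# Toy 4-drive array, drives numbered 0-3.
pairs  = [{0, 1}, {2, 3}]   # RAID 10: two mirrored pairs, striped together
groups = [{0, 1}, {2, 3}]   # RAID 0+1: two striped groups, mirrored

print(raid10_survives({0, 2}, pairs))    # True  - one drive from each pair
print(raid10_survives({0, 1}, pairs))    # False - a whole pair is gone
print(raid01_survives({0, 1}, groups))   # True  - both failures in one group
print(raid01_survives({0, 2}, groups))   # False - both groups are hit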

In practice, though, it's unlikely that two drive failures in RAID 0+1 will both land in the same RAID 0 group. When the first drive goes down, the entire RAID 0 group it belongs to is taken offline, so the other drives in that group aren't even being used anymore. The only drives still doing work are in the other RAID 0 group, which makes it more likely that the second failure occurs there, and that loses the entire array.

This is not the case for RAID 10. A single drive failure makes that particular RAID 1 pair non-redundant, but the only single point of failure is the surviving drive in that pair. Any drive in the other RAID 1 pairs can still fail and the array remains intact.
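
To put a rough number on that intuition, here's a small sketch (same toy 4-drive layout as above, purely illustrative) that enumerates every possible pair of simultaneous drive failures and counts which ones each layout survives:

from itertools import combinations

drives = [0, 1, 2, 3]
pairs  = [{0, 1}, {2, 3}]   # RAID 10: two RAID 1 pairs
groups = [{0, 1}, {2, 3}]   # RAID 0+1: two RAID 0 groups

# Survival rules restated inline so this runs on its own.
raid10_ok = lambda failed: all(not p <= failed for p in pairs)    # no pair fully lost
raid01_ok = lambda failed: any(not (g & failed) for g in groups)  # one group untouched

double_failures = [set(c) for c in combinations(drives, 2)]
print(sum(raid10_ok(f) for f in double_failures), "of", len(double_failures))  # 4 of 6
print(sum(raid01_ok(f) for f in double_failures), "of", len(double_failures))  # 2 of 6

On this toy array, RAID 10 survives 4 of the 6 possible double failures while RAID 0+1 survives only 2 of 6, which matches the reasoning above.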

RAID 10 also outperforms RAID 0+1 during the rebuild operation following a drive failure and replacement. In RAID 0+1, all drives in the entire array have to participate in the rebuild operation (since the rebuild re-mirrors one RAID 0 to the other). By contrast, in RAID 10, only the other drive in the RAID 1 pair participates in the rebuild; other drives in the array have nothing to do with it.
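
As a rough sketch of that rebuild difference (again with a made-up 4-drive layout and made-up helper names, not any real controller's behavior), you can write down which surviving drives have to be read to rebuild a replaced drive:

def raid10_rebuild_sources(replaced, mirror_pairs):
    # Only the surviving partner in the same RAID 1 pair is read.
    for pair in mirror_pairs:
        if replaced in pair:
            return pair - {replaced}
    raise ValueError("drive not in any pair")

def raid01_rebuild_sources(replaced, stripe_groups):
    # The whole surviving RAID 0 group is read so it can be re-mirrored.
    for group in stripe_groups:
        if replaced not in group:
            return set(group)
    raise ValueError("drive not in the array")

pairs  = [{0, 1}, {2, 3}]
groups = [{0, 1}, {2, 3}]
print(raid10_rebuild_sources(2, pairs))   # {3}    - one partner does the work
print(raid01_rebuild_sources(2, groups))  # {0, 1} - the entire other stripe is read
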
April 21, 2007 9:25:22 PM

Quote:
The P5B Deluxe motherboard has 2 RAID controllers. One is the Intel ICH8R southbridge, part of the P965 chipset. It supports RAID 0, 1, 5, and 10. This is a true RAID 10, not a RAID 0+1. [...]


How do I actually know if it's RAID 10 (true RAID 10)? When I boot up it says

RAID 10 (0+1)

which confuses me...

I will take a picture of it soon.
April 21, 2007 10:05:41 PM

You can lose up to 2 drives in either a RAID 10 or a RAID 0+1. In a RAID 10 there are two mirrors that are striped, AA and BB; you can lose one drive from the A mirror and one drive from the B mirror. In a RAID 0+1 there are two stripes that are mirrored, AB and AB, and you can lose either whole AB set.
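
A tiny sketch of that AA / BB vs AB / AB labeling (the labels are illustrative only): in RAID 10, drives A1 and A2 mirror data-half A and drives B1 and B2 mirror data-half B; in RAID 0+1, {A1, B1} is one striped copy of the data and {A2, B2} is the other.

raid10_mirrors = [{"A1", "A2"}, {"B1", "B2"}]
raid01_copies  = [{"A1", "B1"}, {"A2", "B2"}]

survives_10 = lambda failed: all(not m <= failed for m in raid10_mirrors)  # no mirror fully lost
survives_01 = lambda failed: any(not (c & failed) for c in raid01_copies)  # one copy still intact

assert survives_10({"A1", "B1"})      # RAID 10: one drive from each mirror is fine
assert survives_01({"A1", "B1"})      # RAID 0+1: losing one whole AB copy is fine
assert not survives_01({"A1", "B2"})  # RAID 0+1: one drive from each copy is fatal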