RAID systems that support more than 2-node failure recovery


I have a question about RAID/NAS redundancy. Why is there no RAID that supports recovery from 3 or more node failures? I have read Prof. James Plank's papers and the recent erasure-code literature. Technically and commercially, such systems seem very plausible.

I have recently done research on implementing real-time RS (Reed-Solomon) encoding and decoding in hardware. I see no technical bottleneck to designing hardware for a RAID system that supports 3- or 4-node failure recovery.

I am soliciting experts' opinions on making RAID more robust and resilient to failures.

Thanks beforehand,

7 answers
  1. It comes down to cost and wasted drive space. Many larger companies have off-site mirrors of their entire data centers. The likelihood of 2 drives in an array failing at the same time is high enough that most enterprise gear supports RAID 6. However, it would be extremely rare, if you ran a RAID 1 mirror of a RAID 6, for more than 2 drives in one array to fail on both arrays at the same time. And if you then added an off-site mirror of this configuration, the odds become extremely low.
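The cost/benefit trade-off above can be made concrete with a rough back-of-the-envelope calculation. This is only a sketch: it assumes independent failures at an assumed 3% annual failure rate per disk, and it ignores rebuild windows and correlated failures, so the real numbers are worse than this.

```python
from math import comb

def p_at_least(k, n, p):
    """P(at least k of n disks fail), assuming independent failures."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

n, afr = 8, 0.03          # 8-disk array, assumed 3% annual failure rate
raid5_loss = p_at_least(2, n, afr)   # RAID 5: data lost after 2 failures
raid6_loss = p_at_least(3, n, afr)   # RAID 6: data lost after 3 failures
assert raid6_loss < raid5_loss / 10  # each extra parity disk buys a big margin
```

Under these toy assumptions, each additional tolerated failure cuts the annual loss probability by more than an order of magnitude, which is why the marginal value of a third or fourth parity disk drops off so quickly compared with an off-site mirror.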
  2. 3- or 4-node failure recovery already exists in the form of RAID 1E. With 7 disks, you can sustain 3 non-adjacent failures in 1 array. Theoretically, this can be scaled up to a greater number of drive failures. The disadvantage is, of course, capacity overhead. I would also have concerns about the increased overhead and performance loss from even more read-read-write-write parity cycles. RAID 6 is bad enough as it is.
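The "non-adjacent" condition can be checked with a quick enumeration. This is a simplified model of one common 1E layout (each block mirrored on the next drive around a circle of n disks), not a vendor specification:

```python
from itertools import combinations

def survives(failed, n):
    """RAID 1E modeled as: block on disk d is mirrored on disk (d+1) mod n.
    Every block still has one live copy iff no two failed disks are
    circularly adjacent."""
    f = set(failed)
    return all((d + 1) % n not in f for d in f)

# With 7 disks, three mutually non-adjacent failures are survivable...
assert survives((0, 2, 4), 7)
# ...but the tolerance is conditional: an adjacent pair loses data...
assert not survives((0, 1, 3), 7)
# ...and no 4-disk failure set is survivable on 7 disks.
assert not any(survives(c, 7) for c in combinations(range(7), 4))
```

So the 3-failure tolerance holds only for particular failure patterns, which is the sense in which it falls short of a true any-m-of-n erasure code.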
  3. Plus, with a single controller, you still have a single point of failure. While controllers are more reliable and resilient than hard drives, a fault-tolerant system should ideally have redundancy in everything from the PSU to the CPU to the controller to the hard drives.

    In practice, it is more sensible to go with RAID 1 or RAID 5 and rely on OFFSITE backups to protect you from anything more severe than a single disk failure.

    After all, the biggest risk is that the machine will crash and burn (or the house will, or it will get stolen etc) and having 37 node failure recovery wouldn't help with that.
  4. All your comments are very enlightening. I have learned lots of new things from your answers. Following are some links I found to follow up the discussion.

    It seems RAID 1E has caught people's eye recently; yet it is still not a real m+n (n > 2 parity/redundancy nodes) error-correction system. It is a mix of RAID 5, 1, 0, etc., functioning as striped mirroring, enhanced mirroring, or hybrid mirroring. I think the 3-node failure tolerance in 1E is conditional, not valid for any 3 nodes.

    What I am trying to explore is a way of breaking the rules -- using erasure codes to tolerate the failure of any set of more than 2 (3, ..., 6, ...) nodes. For example, in a 6+6 node mode, any 6 nodes can fail and the system still works. I see that grid data centers and IPTV also find these erasure codes applicable, and optimized EC software exists for them. I have also figured out that implementing them in pure hardware is not rocket science; RAID performance normally requires a hardware Reed-Solomon codec anyway.
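A minimal sketch of such an m+n code, assuming a systematic Cauchy-matrix construction over GF(2^8) (one standard Reed-Solomon-style choice; real RAID hardware may use a different generator matrix). Because every square submatrix of a Cauchy matrix is invertible, any 6 of the 12 shares suffice to rebuild the 6 data symbols:

```python
# --- GF(2^8) arithmetic, reduction polynomial 0x11d ---
EXP, LOG = [0] * 512, [0] * 256
x = 1
for i in range(255):
    EXP[i], LOG[x] = x, i
    x <<= 1
    if x & 0x100:
        x ^= 0x11d
for i in range(255, 512):
    EXP[i] = EXP[i - 255]

def gmul(a, b):
    return 0 if a == 0 or b == 0 else EXP[LOG[a] + LOG[b]]

def ginv(a):
    return EXP[255 - LOG[a]]

K, M = 6, 6                      # 6 data shares + 6 parity shares ("6+6")

# Systematic generator [I; C]: identity on top, Cauchy block underneath.
# Any K x K submatrix of [I; C] is invertible, so ANY K survivors work.
cauchy = [[ginv(i ^ (M + j)) for j in range(K)] for i in range(M)]

def encode(data):                # data: list of K byte values
    parity = [0] * M
    for i in range(M):
        for j in range(K):
            parity[i] ^= gmul(cauchy[i][j], data[j])
    return data + parity         # n = K + M shares

def decode(shares):              # shares: {share index: value}, any K entries
    idx = sorted(shares)[:K]
    A, b = [], [shares[r] for r in idx]
    for r in idx:                # generator row for each surviving share
        A.append([1 if c == r else 0 for c in range(K)] if r < K
                 else list(cauchy[r - K]))
    for col in range(K):         # Gauss-Jordan elimination over GF(2^8)
        piv = next(r for r in range(col, K) if A[r][col])
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        inv = ginv(A[col][col])
        A[col] = [gmul(inv, v) for v in A[col]]
        b[col] = gmul(inv, b[col])
        for r in range(K):
            if r != col and A[r][col]:
                f = A[r][col]
                A[r] = [v ^ gmul(f, w) for v, w in zip(A[r], A[col])]
                b[r] ^= gmul(f, b[col])
    return b                     # the K recovered data symbols

data = [10, 20, 30, 40, 50, 60]
shares = encode(data)
survivors = {i: shares[i] for i in (1, 3, 5, 7, 9, 11)}  # any 6 of 12
assert decode(survivors) == data
```

In practice this runs per byte position across the stripe, and the field multiplications are exactly what a hardware RS codec pipelines; the encode side is a fixed matrix-vector product, so the real-time hardware claim above is credible.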


    Following are some links about RAID discussions.

    RAID 1E (striped mirroring, enhanced mirroring, hybrid mirroring)


    I'd never heard of RAID 1E before, but reading through, I'm not sure it's an alternative to RAID 5. You're still basically mirroring, so you lose half of your raw drive space.

    The benefit of RAID 5 (or preferably RAID 6 when you start talking about larger RAID groups) is that you sacrifice only a much smaller amount of the raw space for redundancy (with the obvious cost to performance).

    The only benefit it appears to offer over RAID 10 is that you can have an odd number of drives. This doesn't seem like much of a benefit since, as far as I can tell, it wouldn't be possible to add another individual drive to a RAID group without restriping all your data to take advantage of it.
  5. Yeah, I was only half thinking when I posted the RAID 1E comment (about 2 AM and a good 12-pack in me :) ); I know it's not parity. Interesting reading for you here, though...

    And that's why I don't use RAID 5 or 6. I know the article is slightly dated, but the specs for 5 & 6 haven't changed, and I've yet to hear of anyone disputing it. In fact, on most professional sites, this paper is often linked as a warning against using this form of parity RAID. Someone please dispute this... I would love to be wrong here.

    Now if you could improve upon the RAID 3 standard ( which is decidedly unpopular right now ) to improve performance and redundancy, I would be VERY intrigued.
  6. A few points...

    1. IDE drives do not report correctable errors to the OS, but the errors are stored and can be retrieved, so they should provide warning of impending problems. The user does need to check these regularly, but this can probably be automated easily enough.

    2. RAID 5's performance can be comparable to any other system if you use a hardware RAID controller that calculates the parity itself. Add a decent UPS and this will provide close to 100% security for your data.

    3. RAID 6 is basically RAID 5 on steroids... it allows any two disks to fail without any loss of data. In conjunction with hardware parity calculations and a decent UPS, your main risk becomes the house burning down or the computer being stolen.

    4. Striping (anything involving RAID 0) is a great performance booster, but it is not remotely resilient. Combining it with mirroring (RAID 1) improves reliability, but you can still lose both sides of a mirror. Plus, you cannot be sure that you can recover all of the data if two disks fail... it depends on which two go.
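The single-parity mechanism behind points 2-4 reduces to an XOR per stripe: one parity block lets any single missing block be rebuilt from the rest. A toy sketch with a 3-disk stripe:

```python
# Toy RAID 5 stripe: three data blocks plus one XOR parity block.
data = [b"disk0", b"disk1", b"disk2"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

# Lose disk 1: XOR of the survivors and the parity gives it back.
rebuilt = bytes(a ^ c ^ p for a, c, p in zip(data[0], data[2], parity))
assert rebuilt == data[1]
```

This is also why a second simultaneous failure is fatal for RAID 5: with two unknowns per stripe, a single XOR equation no longer determines either missing block.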
  7. Best answer
    Point #1... It is my understanding that this applies to SATA as well. Correct?

    Regardless of whether it's reported or not, I have had drives (SCSI in this case) go from no unusual errors to full-blown dead in 3 days' time. On day 3 I knew the drive was failing before it actually failed (I heard the clicks of death). Because it had not yet reached the threshold limit, the RAID 5 array continued to function. I immediately took it offline manually and rebuilt with a hot spare. The rebuild completed successfully, but I ended up with several corrupted files and a completely disorganized directory structure. At the time I attributed it to one of life's RAID-rebuilding mysteries, but a year or so later I stumbled across the paper I linked to, and further investigation showed this could likely be the culprit.

    So, having several old questionable drives lying around, I built another RAID 5 to force the problem to occur, this time using a different controller card. Lo and behold, the exact same problem: a degraded RAID set, followed by a not-even-close-to-perfect rebuild. As this was a totally different machine with different hardware, I attributed the problem to a partially failing drive plus RAID 5, as the article suggests. I have had arrays crap out on me at other times because of a completely dead drive, and in those cases the rebuild process worked just fine.

    This was about 1-2 years ago, and since then I've been paying attention. I have seen many reported problems with RAID 5 blamed on "noobs" which could easily have been this same phenomenon. I have never had any issue with any other RAID level. RAID 6, in theory, would actually be more likely to experience this problem. So in my case, RAID 5 was definitely not near 100% security, and both instances were on battery-backed-cache, enterprise-level (dated now, to be sure :) ) controller cards.

    Sorry to be slightly off-topic...It's just always bugged me.

    On a side note, I got to thinking about RAID 3 & 4 and using a RAID 0 pair of SSDs as the dedicated parity drive to offset the performance loss, along with standard HDDs for the stripe-set portion. I haven't finished the thought experiment yet, but the idea seems promising. Neither RAID 3 nor 4 suffers from the above phenomenon, thanks to read checks before parity writes.