
RAID 6 on new i7 920 build

March 5, 2009 6:33:09 AM

I am in the process of speccing out the build for a new PC. My last one has lasted 11 years with a CPU and motherboard replacement about 6 years ago, so I tend to prefer to build on the trailing edge rather than the bleeding one.

Thus, the i7 920 seems to be the sweet spot, which should give me a good few years of use, including the option to overclock it down the road if I need an extra boost but can't yet justify a replacement.

I am considering getting maybe six 640GB Caviar Black Hard Drives and RAID SIXing them. I am not good at doing regular backups (although I intend to get a case with a front mounted e-SATA port for my 1TB external drive) and it strikes me that being able to recover from TWO dead hard drives is a nice precaution.
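If my math is right, the space trade-off works out like this (a quick sketch in decimal GB; `usable_gb` is just an illustrative helper, and this ignores filesystem overhead):

```python
# Rough usable-capacity arithmetic for a parity RAID array.

def usable_gb(drives: int, drive_gb: int, parity_drives: int) -> int:
    """Usable space once parity is set aside: (n - p) * size."""
    return (drives - parity_drives) * drive_gb

# Six 640 GB drives:
raid5 = usable_gb(6, 640, 1)  # one drive's worth of parity -> 3200 GB
raid6 = usable_gb(6, 640, 2)  # two drives' worth of parity -> 2560 GB
```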

However, I am wondering what the impact will be on performance. What elements of performance will RAID 6 help and hurt? Access speed, read performance, write performance, etc. Should I get a hardware RAID card, or is that unnecessary for an i7?

Also, do all the RAID 6 controllers (software and hardware) use the same format to store the data, so that I can just rip the disks out, put them in another system with a (different) RAID 6 controller, and be able to read them immediately if the old controller dies?


March 5, 2009 12:12:07 PM

Siggy19 said:
However, I am wondering what the impact will be on performance. What elements of performance will RAID 6 help and hurt? Access speed, read performance, write performance, etc. Should I get a hardware RAID card, or is that unnecessary for an i7?

Considering that any mobo capable of running an i7 uses the X58 chipset with the ICH10R southbridge, which is limited to RAID 0/1/5/10, you have no choice but to use a hardware RAID controller if you want a RAID6 array.
Given that you will need a separate controller and cannot use the onboard ICH10R, and considering that you want a RAID 6 array, I highly recommend a hardware controller with an onboard processor (3Ware, Areca, and Highpoint are brands to look for) and not a pseudo-software RAID controller. Also, given that you want to connect 6 SATA drives, you are looking at a controller with at least 8 ports. Lastly, consider what type of slot the controller will connect to: PCI-X or PCI-e? Keep in mind that controllers with these features are not cheap.

Some examples include:
Areca ARC-1220 @ $430
Areca ARC-1120 @ $450
3Ware 9650SE-8LPML @ $495
Siggy19 said:
Also, do all the RAID 6 controllers (software and hardware) use the same format to store the data, so that I can just rip the disks out, put them in another system with a (different) RAID 6 controller, and be able to read them immediately if the old controller dies?

Generally speaking, if a RAID controller fails, then you need an exact replacement in order to recognize the array. Each hardware controller card uses its own "format", which is typically proprietary to the controller manufacturer. You *might* be able to take a RAID array from one 3Ware controller and move it to another 3Ware controller and still have it recognize the array, but I doubt very much that you can take an array from an Areca controller and move it successfully to a 3Ware controller. Please note that I have never attempted such a migration and don't know for sure, but I do know that there are RAID migration and disk imaging tools/utilities that could possibly help. Truthfully, though, if you spend the money on a quality brand-name hardware controller, it is very unlikely that the controller card itself will fail.

TBH... I would recommend going with RAID 5 and configuring one or two hot spare drives if you are worried about losing a drive... but that's just me... good luck!
March 5, 2009 12:57:41 PM

Thanks for your advice... it makes sense.

I had thought that RAID 6 might be functionally the same as RAID 5 with a hot spare, so why not use all the disks since you already have them. The practical need for a hardware controller makes this pointless.

And with the controller being a single point of failure... what's the point in having two redundant drives?

I begin to understand why so many people seem to advocate RAID 0+1... performance and redundancy.
March 5, 2009 5:44:55 PM

Siggy19 said:
I had thought that RAID 6 might be functionally the same as RAID 5 with a hot spare, so why not use all the disks since you already have them. The practical need for a hardware controller makes this pointless.

And with the controller being a single point of failure... what's the point in having two redundant drives?

I begin to understand why so many people seem to advocate RAID 0+1... performance and redundancy.


The disadvantage of RAID 6 is increased controller overhead from calculating the second set of parity. If you are really, really paranoid about losing data, then RAID 6 is for you, versus having to rebuild the array every now and again over the life of a RAID 5. Where I'd do RAID 6 is if I were using multiple external enclosures in a single RAID stripe (32+ disks); for under 10 disks I just do RAID 5 and keep an offline spare (on a shelf).
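To put that overhead in perspective, here's a toy sketch (nothing like any controller's real implementation; the field and generator are just the textbook choices) of the two parity calculations. The RAID 5-style P parity is a plain XOR, while the second (Q) parity needs Galois-field multiplies per byte, which is where the extra CPU goes:

```python
# Toy P and Q parity over GF(2^8), reducing by the common 0x11d polynomial.

def p_parity(blocks):
    """RAID 5-style parity: byte-wise XOR of all data blocks."""
    out = bytearray(len(blocks[0]))
    for blk in blocks:
        for i, b in enumerate(blk):
            out[i] ^= b
    return bytes(out)

def gf_mul(a, b):
    """Multiply two bytes in GF(2^8)."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0x100:
            a ^= 0x11d
        b >>= 1
    return r

def q_parity(blocks):
    """RAID 6-style second parity: disk i's data weighted by g^i (g = 2)."""
    out = bytearray(len(blocks[0]))
    for i, blk in enumerate(blocks):
        coeff = 1
        for _ in range(i):
            coeff = gf_mul(coeff, 2)
        for j, b in enumerate(blk):
            out[j] ^= gf_mul(coeff, b)
    return bytes(out)
```

With P and Q independent, any two failed disks can be solved for, which is exactly the "survive two dead drives" property.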


Yes, the controller is a single point of failure. However, the odds of losing a hard drive are much, much higher than the odds of losing a RAID controller. Things don't tend to fail (without external causes) when they have no moving parts.

Quote:
Each hardware controller card uses its own "format", which is typically proprietary to the controller manufacturer. You *might* be able to take a RAID array from one 3Ware controller and move it to another 3Ware controller and still have it recognize the array, but I doubt very much that you can take an array from an Areca controller and move it successfully to a 3Ware controller.


I've had good luck moving between different models of the Dell/LSI cards (Dell rebrands LSI cards). 3Ware, not so much. I've never seen a cross-brand move work.
March 5, 2009 6:15:00 PM

Tom's did an article a while back about migrating from one motherboard controller to another.

ICH7R -> 8R -> 9R worked OK, although it was necessary to repair the OS.
ICHx -> non-Intel SB didn't work.
non-Intel SB -> Intel SB didn't work.

So if you stick with the onboard controller (does the ICH10R controller even support 6 disks in RAID 5? ICH9R was limited to 4 disks per array), there's at least a decent chance you'll be able to put the same disks on a future MB and be able to read them.

Word of advice: If you use the onboard RAID, or if you use an add-on card w/out battery backup, PLEASE use a decent UPS. Otherwise, you'll be rebuilding your array with every power outage. And with 3+ TB of data, that rebuild is going to take several hours each time it happens.
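If you want to sanity-check that "several hours" figure, here's the back-of-the-envelope arithmetic (the throughput number is an assumption, and a real rebuild under load runs slower than this floor):

```python
# Lower bound on rebuild time: a rebuild has to process every sector,
# so the floor is one member drive's capacity over sustained throughput.

def rebuild_hours(drive_gb: float, throughput_mb_s: float) -> float:
    """Hours to sequentially process one whole drive (1 GB = 1000 MB)."""
    return (drive_gb * 1000) / throughput_mb_s / 3600

# A 640 GB member at an assumed ~80 MB/s sustained:
est = rebuild_hours(640, 80)  # roughly 2.2 hours, best case
```

In practice, background rebuild throttling and normal I/O on the array stretch that out considerably.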
March 7, 2009 4:03:50 AM

First, read my posts at the end of the RAID FAQ:
http://www.tomshardware.com/forum/43125-32-raid
There are much, much better uses for a 6 drive system than 1 big RAID 5.

Next, consider that onboard RAID 5 controllers are really just a software RAID cheat, and all the disadvantages of software RAID go with it. Without a dedicated XOR chip or battery-backed cache, motherboard RAID 5 should be avoided anyhow. If money permits, always choose hardware over software; just don't bother with the cheap cards.

I agree with TeraMedia about the battery backup; note that rebuilds are quite often caused by improper shutdowns and crashes, not just power losses.

Finally, I'm not sure about the ICH10R, but the ICH9R used RAID 0+1, not RAID 10 (be careful to know for sure; sometimes they state RAID 10 when it's really RAID 0+1). RAID 10 is far superior from a redundancy standpoint and faster at rebuilds.
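If you want to see why, here's a quick toy enumeration of double failures on a 4-disk set (illustrative only; the disk numbering and pairings are just for the example):

```python
# RAID 10 = stripe of mirrors (pairs {0,1} and {2,3}).
# RAID 0+1 = mirror of stripes (stripe sets {0,1} and {2,3}).
from itertools import combinations

def raid10_survives(failed: set) -> bool:
    """Dead only if an entire mirror pair is lost."""
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed: set) -> bool:
    """One failure kills its whole stripe set, so the array survives
    only if at least one stripe set is still completely intact."""
    return failed.isdisjoint({0, 1}) or failed.isdisjoint({2, 3})

double_failures = [set(c) for c in combinations(range(4), 2)]
r10 = sum(raid10_survives(f) for f in double_failures)  # survives 4 of 6
r01 = sum(raid01_survives(f) for f in double_failures)  # survives 2 of 6
```

RAID 10 shrugs off twice as many of the possible two-disk failures, and after a single failure only one small mirror needs rebuilding rather than a whole stripe set.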

I've also had good luck migrating LSI/Dell and Compaq, even from a Compaq to a Dell controller and vice versa, with no issues. Didn't even need a rebuild!

No one likes doing backups; everyone wishes they had when their array crashes, though :)
April 9, 2010 3:42:42 PM

TeraMedia said:

Word of advice: If you use the onboard RAID, or if you use an add-on card w/out battery backup, PLEASE use a decent UPS. Otherwise, you'll be rebuilding your array with every power outage. And with 3+ TB of data, that rebuild is going to take several hours each time it happens.



What a HUGE oversight on my part. I had completely neglected this consideration until just now. Here's the thing, though: how do I choose a BBU (UPS)? Add up the wattage of my HDDs, CPU, GPU..? And how do I know how much time they need to complete a write? Here's the crappy thing: I know it's ridiculous that I'm planning on running a RAID array in my apartment, but my gf blows the SAME circuit my PC will be on every other time she blow-dries her hair! So, if I'm in the middle of a disk write, it has to be a disk write that is shorter than "x", where "x" is either the time to complete the write or to get the power back on, yes? Economy IS a factor... Hope you're still around to advise...

regards
April 9, 2010 6:25:55 PM

Disk writes should always finish within a minute or two... call it ten minutes at the very outside. Any decent BBU-UPS should provide that.

The only time you should NEED 30 minutes or more of battery runtime on a personal system is if you are particularly paranoid, or if you live in an area with prolonged outages and might need to boot up your PC during one...
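If it helps with sizing, here's the rough arithmetic I'd use (the power factor and headroom figures are just typical assumptions; check your system's actual draw and the UPS maker's runtime charts):

```python
# Rough UPS sizing: convert real load (watts) to the VA rating printed
# on UPS boxes, with headroom so the unit isn't run at its limit.

def min_va(load_watts: float, power_factor: float = 0.6,
           headroom: float = 1.3) -> float:
    """Minimum UPS VA rating for a given real load, with headroom."""
    return load_watts / power_factor * headroom

# e.g. an i7 box with six drives drawing ~350 W under load:
va = min_va(350)  # roughly 760 VA -> shop in the 750-1000 VA range
```

Runtime only needs to cover an orderly shutdown, or the seconds it takes to flip the breaker back on after the hair dryer trips it.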