
RAID5 Options


Which Setup

Total: 9 votes (1 blank vote)

  • 6 Drive RAID5: 75%
  • 2x 3 Drive RAID5: 0%
  • 6 Drive RAID1+0: 25%
July 20, 2009 10:30:56 PM

So I will be getting 6x 1TB Seagate LP drives. Now comes the question: I am using the ICH8R controller and was wondering what a good setup would be. I essentially have the options of a 6-drive RAID5 (5TB total), 2x 3-drive RAID5 (4TB total), or RAID1+0 (3TB total). Since I like maximizing my dollar, I would like to put all 6 drives into a single RAID5. However, I am worried about data loss. Should one drive fail, would the amount of data I have result in unrecoverable errors during the RAID rebuild? Is the risk high enough to justify 2x 3-drive RAID5? Or are the drives large enough that I should do RAID1+0 instead? Another reason I am considering RAID1+0 is that I'd like this to be one big volume... unfortunately I'm having trouble justifying that my desire for a single large volume is worth 1TB of HD space. (I don't think the ICH8R controller supports RAID6 (ideal), RAID5+0, or RAID5+JBOD.)
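For reference, the rough capacity math behind those three numbers (just a sketch of the usual formulas, nothing controller-specific):

# usable capacity for each layout with 6x 1TB drives (sketch, not vendor math)
drives, size_tb = 6, 1.0

raid5_single   = (drives - 1) * size_tb      # one drive's worth of parity -> 5.0 TB
raid5_two_sets = 2 * (3 - 1) * size_tb       # two 3-drive RAID5 sets      -> 4.0 TB
raid10         = (drives // 2) * size_tb     # everything mirrored         -> 3.0 TB

print(raid5_single, raid5_two_sets, raid10)  # 5.0 4.0 3.0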

Keep in mind, this is one big file server. I will have an HTPC, my primary desktop, and a netbook pulling off of it, with recorded TV being uploaded (usually no more than 2 devices hitting it at a time, e.g. TV recording while I'm streaming Simpsons to my laptop). Wireless N currently more or less barely matches 100Mbit Ethernet, and Gigabit Ethernet is theoretically capped at 125MB/s, so performance (which is what RAID1+0 is for) really isn't that big of an issue here.
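Quick sanity check on the network being the bottleneck (back-of-the-envelope only; the ~100MB/s sequential figure for a single drive is just my assumption):

# back-of-the-envelope: the network, not the array, is the limit here
gbe_MBps       = 1000 / 8    # Gigabit Ethernet ceiling: 125 MB/s
wifi_n_MBps    = 100 / 8     # Wireless N ~ 100Mbit Ethernet: ~12.5 MB/s
one_drive_MBps = 100         # assumed ballpark sequential speed of a single 1TB LP drive

# even a single drive roughly saturates GbE, so RAID1+0's extra speed buys little
print(gbe_MBps, wifi_n_MBps, one_drive_MBps)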

File server will run Server 2k8 (MSFT dreamspark :) )

So what do all of ya think I should do?

If anyone cares, Specs:
PCP&C Silencer 500 PSU
GA-P965-DQ6
Radeon HD 4350
Pentium Dual Core E2220 (overkill, I know)
No CD Drive
Seagate 7200.10 160GB as OS drive
Corsair 2GB DDR2 800 RAM


July 21, 2009 12:11:13 AM

Use a dedicated controller... put five of the drives in RAID5 and one as a hot swap... DO NOT use an on-motherboard controller; you may not be able to move the array to a new motherboard if it goes belly up. Do a LOT more reading on the subject.
July 21, 2009 2:16:02 AM

marcellis22 said:
Use a dedicated controller... put five of the drives in RAID5 and one as a hot swap... DO NOT use an on-motherboard controller; you may not be able to move the array to a new motherboard if it goes belly up. Do a LOT more reading on the subject.


What's wrong with the onboard one? ICH is very compatible with similar devices. I have moved RAID5 arrays from board to board (different manufacturers too) without hiccups. As long as it's ICH7, 8, or 9R (never tried 10, 6, or earlier), the OS has never had an issue with a RAID volume moving around.

I understand what you are talking about. If I had 300 dollars to spare I'd do that; RAID5 or 6 with a hot spare (I think that's what you mean by hot swap) would be really nice, but it still doesn't address my initial concern: the probability of a read error during the RAID rebuild process. Unfortunately, I'm also trying to do this as cheaply as possible (the HDs being gifts); I had the board and CPU lying around from a retired system. Given what I have, what do you think is best?

Edit: What in particular should I be focused on reading?
July 21, 2009 8:44:33 AM

With today's desktop-class HDDs, you'd need roughly 8 or more drives in an array before you should expect an unrecoverable read error during a rebuild, so 6 in an array is OK. Enterprise-class drives with a 5yr warranty will allow up to ~12 drives.

As to whether or not to use ICHx-R for Intel Matrix RAID5: personally I had no problems with it for the ~5 months I ran it before moving on to hardware RAID5. But the issue of array corruption keeps popping up in this forum and others, less with Intel Matrix RAID, more with nVidia's RAID5.
The surest way of avoiding such problems is either to spend hundred(s) on a hardware RAID5/6 card, or to run ZFS and RAID-Z (in FreeNAS).
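Rough math behind those figures, if you want to plug in your own drive sizes (just a sketch; it assumes the usual spec-sheet rate of 1 URE per 10^14 bits read for desktop drives, 10^15 for enterprise, and independent errors):

import math

def p_ure_during_rebuild(drives, size_tb, ure_per_bit=1e-14):
    # a RAID5 rebuild has to re-read every surviving drive end to end
    bits_read = (drives - 1) * size_tb * 1e12 * 8
    return 1 - math.exp(-ure_per_bit * bits_read)   # Poisson approximation

print(p_ure_during_rebuild(6, 1.0))           # 6x 1TB desktop drives   -> ~0.33
print(p_ure_during_rebuild(6, 1.0, 1e-15))    # same array, enterprise  -> ~0.04
print(p_ure_during_rebuild(6, 10.0))          # 6x 10TB desktop drives  -> ~0.98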
July 21, 2009 2:30:57 PM

wuzy said:
With today's desktop-class HDDs, you'd need roughly 8 or more drives in an array before you should expect an unrecoverable read error during a rebuild, so 6 in an array is OK. Enterprise-class drives with a 5yr warranty will allow up to ~12 drives.

As to whether or not to use ICHx-R for Intel Matrix RAID5: personally I had no problems with it for the ~5 months I ran it before moving on to hardware RAID5. But the issue of array corruption keeps popping up in this forum and others, less with Intel Matrix RAID, more with nVidia's RAID5.
The surest way of avoiding such problems is either to spend hundred(s) on a hardware RAID5/6 card, or to run ZFS and RAID-Z (in FreeNAS).


Very insightful, thank you. However, for numbers like ~8 and ~12, what size of drives are we talking about? If I had 6x 80GB HDs, I seriously doubt an error would come up during a rebuild, but if I had 10TB drives, I wouldn't be nearly as confident.
July 21, 2009 10:20:49 PM

I learnt a great deal about it from here. (Under: "Bit error during rebuild")

If you have the time to read the entire blog, there's some very fascinating information in there.
July 22, 2009 1:29:31 PM

I've never used the ICH8R RAID controller, but the other couple of onboard RAID controllers I've used gave me problems. I had problems with the array being dropped randomly every 5 or 6 months, I had performance problems, I had rebuild problems... It wasn't really an issue for me then because it was mainly just a RAID 0 I was messing around with for my OS to run from. Either way though, with 5TB of my data, I'd feel much better with a dedicated controller.

wuzy, thanks for the blog link. It was a very interesting read :) 
July 22, 2009 2:40:54 PM

Interesting read there, wuzy. I think I'm gonna go with a RAID5 of all 6 drives and just back up the irreplaceable stuff externally. Thank you all.

@arson94:
I have used Intel's ICH line for a while, and I have yet to have an issue with them. Sure, it's software RAID, but it's a file server; the dual core is gonna be kinda bored just calculating parity...
July 22, 2009 3:12:01 PM

If it's been working for you, then more power to you. I couldn't get the *** to work for me to save my life lol... Then again, I could have just been using two flaky-ass controllers... I would definitely RAID5 all 6 drives though, as you have said.
July 24, 2009 8:36:46 AM

Use a dedicated Areca RAID card. They are very, very good. Also, don't touch HighPoint with a 40-foot pole. I tried a HighPoint card two years ago and it killed all my drives. Have a look on eBay and you should be able to pick up a nice Areca card.
July 24, 2009 6:48:22 PM

With a dedicated RAID card you'll get increased performance vs. the on-motherboard controller.

If I were you I'd use 5 HDDs in a RAID5 config and keep the last drive as a backup/spare in case one of the HDDs in the RAID5 crashes and needs to be replaced.
July 25, 2009 12:34:19 PM

Traditional RAID5 with an obsolete filesystem might not be resistant to bit flips: data corruption that is expected over time as data density becomes so large that the disks can't know for sure what they are actually storing. With RAID5 you're screwed: if a disk has even a single unreadable sector, the RAID5 engine will kick that drive out of the array.

ZFS does things differently, and is superior to any traditional RAID or filesystem in this regard. It will not disconnect the disk, but will use redundant information to fix the bit errors. Best of all, you won't notice a thing, since it all happens automatically: thanks to checksums, ZFS knows which data is good and which is bad. It overwrites the bad data with good data and saves the whole array, with zero disk disconnects.
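A toy illustration of the idea, not ZFS's actual code: every block is stored with a checksum on two mirrors, and a copy that fails its checksum is silently repaired from the good one.

import hashlib

def checksum(block):
    return hashlib.sha256(block).digest()

def write_block(mirror_a, mirror_b, addr, block):
    record = (block, checksum(block))
    mirror_a[addr] = record
    mirror_b[addr] = record

def read_block(mirror_a, mirror_b, addr):
    for good, bad in ((mirror_a, mirror_b), (mirror_b, mirror_a)):
        block, digest = good[addr]
        if checksum(block) == digest:        # this copy is intact
            bad[addr] = (block, digest)      # rewrite the other copy with known-good data
            return block
    raise IOError("both copies corrupt")

# usage: corrupt one mirror and the read still returns (and repairs) good data
a, b = {}, {}
write_block(a, b, 0, b"important movie bytes")
b[0] = (b"imporXant movie bytes", b[0][1])   # simulate silent bit rot on one disk
assert read_block(a, b, 0) == b"important movie bytes"
assert b[0][0] == b"important movie bytes"   # the bad copy was healed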

Hurray for ZFS! Too bad few home users are using it. :) 
July 25, 2009 9:10:59 PM

sub mesa said:
Hurray for ZFS! Too bad few home users are using it. :) 


That sounds great, but how do you expect the average home user to use it if it's not even natively supported by Linux, let alone Windows? Btrfs is supposed to be a free alternative, but it's not done yet.

[well, apparently FreeNAS supports it. That might be the easiest way to get ZFS at this time.]
July 26, 2009 12:12:39 PM

I was indeed referring to FreeNAS, as it requires no Linux-specific knowledge and is configurable through a web interface. So it's a very low threshold to try it out, even for (moderately advanced) Windows users. With the new version 0.7 supporting ZFS, this technology comes very close to ordinary users who may never have tried Linux but can work with ZFS, the coolest and most advanced filesystem in the world, I'd say.

ZFS is not a finished product though; it may even be considered experimental. But it's so different from other filesystems (I don't know of any other filesystem with advanced internal RAID) that there is no substitute for it at the moment. By using ZFS you discover how outdated current filesystems actually are.