Raid 5 Fileserver... need advice

October 9, 2006 8:27:34 PM

Hi, I'm building a fileserver at home for playing media on other computers around my house, using 8x 320GB IDE HDDs. Originally I bought a Netcell SR5000, which has 5 IDE ports, only to realise that each port takes only one device (I expected each to take 2, totalling 10 possible drives). I'm now wondering what you think the best way to build the fileserver is.
I've been trying to decide between:
(1) A HighPoint RocketRAID 464 card, which would take 8 HDDs, but I've heard there are performance issues with it, and it has a size limitation.
(2) Buying a few Promise IDE cards for the 8 HDDs and using software RAID in Windows XP.

Speed is not the most important thing; all the computer will be used for is running BitTorrent and playing movies over my gigabit network. Which solution would be more practical, or do you have any other ideas?


October 9, 2006 9:32:27 PM

http://www.3ware.com/products/parallel_ata.asp

The 3Ware cards are pretty quick on RAID 5 (probably quicker than the Highpoint), but any parallel ATA RAID card is at least 2 generations behind current SATA RAID implementations, and therefore takes a performance hit.

All parallel ATA RAID cards I've seen have limitations in the array size. They only support a maximum of 2TB in a single array. What you will have to do to get around this is the following:

1. Hook up the 8x 320GB drives.
2. Create two arrays of 4 drives each (RAID 0 = 1280GB each, RAID 5 = 960GB each).
3. In Windows, create dynamic disk partitions on each individual array, and then set them up for spanning.

This will give you a single volume of 2560GB (RAID0), or 1920GB in RAID5.

Since the RAID5 size is smaller than 2TB (because it uses 2 drives for parity information in this configuration), you'd be better off setting up an 8-drive RAID5 (native 2240GB), and then letting the controller truncate that to 2048GB (2TB). Then you can create a basic volume on it instead of dynamic volumes.
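To double-check the capacity figures above, here's a quick back-of-the-envelope sketch in Python (my own illustration, not from the thread, using its convention of 1TB = 1024GB):

```python
# Capacity arithmetic for 8x 320GB drives, 1TB treated as 1024GB.
DRIVE_GB = 320
DRIVES = 8
TB_LIMIT_GB = 2 * 1024  # 2TB per-array limit on these PATA cards = 2048GB

# Option A: two 4-drive arrays, spanned together in Windows
raid0_per_array = 4 * DRIVE_GB          # 1280GB each
raid5_per_array = (4 - 1) * DRIVE_GB    # 960GB each (1 drive of parity per array)
spanned_raid0 = 2 * raid0_per_array     # 2560GB total
spanned_raid5 = 2 * raid5_per_array     # 1920GB total

# Option B: one 8-drive RAID-5, truncated by the controller at 2TB
native_raid5 = (DRIVES - 1) * DRIVE_GB      # 2240GB native
truncated = min(native_raid5, TB_LIMIT_GB)  # 2048GB after truncation

print(spanned_raid0, spanned_raid5, native_raid5, truncated)
```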

Be aware that dynamic volumes can only expand so far (http://technet2.microsoft.com/WindowsServer/en/library/...). 64TB is the maximum that dynamic disks can do (32x 2TB volumes, and only if the volume is spanned (JBOD), striped (RAID-0), or RAID-5). Mirrored volumes (RAID-1) can only be 2TB.

These limitations apply regardless of the file system, even though NTFS itself can theoretically support 16 EB (2^64 bytes).
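A quick sanity check of the limits quoted above (my own sketch, using binary units):

```python
# Dynamic disk and NTFS theoretical limits, in binary units.
TB = 2 ** 40                      # 1 TB
dynamic_disk_max = 32 * (2 * TB)  # 32 spanned/striped 2TB volumes
ntfs_theoretical = 2 ** 64        # NTFS theoretical maximum in bytes

print(dynamic_disk_max // TB)         # TB of dynamic-disk space
print(ntfs_theoretical // 2 ** 60)    # same figure expressed in EB
```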
October 9, 2006 9:57:35 PM

Almost another sticky, Joe. Some long-forgotten knowledge there; a good refresher.
October 9, 2006 10:30:18 PM

Boy, do I regret choosing PATA drives. Thanks for the advice, SomeJoe. Would you advise against the software RAID option? I ask because wouldn't software RAID avoid the 2TB limit? Losing the ~200GB to truncation is not the end of the world, but if I can keep it (and not take a huge hit in performance), all the better. Will that 3ware card be sufficient for my use of the array? I've heard they are much better than the Highpoint cards, but I may have trouble locating one (in Canada).
October 10, 2006 12:01:22 AM

The software RAID is essentially what I'm talking about doing to get around the 2TB limit. It's not actually doing any "RAID" functions in software, however -- it's just connecting the two hardware RAID volumes together so that they appear as a single drive letter.

You're going to have to do this anyway, regardless of whether the controller supports arrays >2TB or not. Dynamic disks are the only way that Windows supports volumes >2TB, and then only with spanning, striping, or RAID-5.

In other words, suppose you used SATA drives and a controller supporting >2TB arrays. The hardware would then report a logical disk to Windows that might be 4TB or whatever, but Windows won't let you create a single large partition on it. You still have to create multiple dynamic volumes of 2TB each on the logical drive, and span them together.

The only thing you lose with having to use a controller that doesn't support >2TB arrays is that if you want to use RAID-5 on the controller, you end up having to use a parity disk for every 2TB of the array, thus losing more disk space than you ordinarily would have.

Whether you're going to lose a lot of space or have to deal with the truncation depends on what RAID level you want to run. I assume that since this unit will be for home use, and you're in the 2TB range of data, backup to another device is not an option. 8) In that case, I'd recommend running RAID-5 for protection against a single hard drive failure. RAID-5 demands a decent card for performance, which steers you towards the 3Ware. Using that card in that configuration steers you towards an 8-drive RAID-5 truncated to 2TB.
October 10, 2006 2:24:29 AM

Just for the sake of asking: what kind of bus is the card connected to? Normal 32-bit PCI? If so, performance may not be the issue, since the PCI bus itself is limited.

PCI-X?

Just wondering.
October 10, 2006 2:37:06 AM

The 3Ware 7500 series is 64-bit PCI-X. However, it is backward compatible with PCI 2.2 (32-bit) slots (the extra card edge just hangs over the edge of the PCI 32-bit slot).

The transfer rate is limited on 32-bit PCI to the speed of the PCI bus (typically 90-110 MB/sec after taking into account system overhead).
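For reference, the 32-bit PCI ceiling works out like this (my own sketch; 33 MHz is the nominal clock):

```python
# Theoretical peak of a 32-bit / 33 MHz PCI bus, before overhead.
bus_width_bytes = 32 // 8   # 4 bytes transferred per clock
clock_hz = 33_000_000       # nominal 33 MHz PCI clock
peak_mb_s = bus_width_bytes * clock_hz / 1_000_000

# Shared-bus arbitration and protocol overhead bring real-world
# throughput down to roughly the 90-110 MB/sec range quoted above.
print(peak_mb_s)
```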

Be aware that not all PCI-X cards are backward compatible like this. For instance, the 3Ware 9550SX series of PCI-X SATA controllers will only work in a true PCI-X slot.
October 10, 2006 2:51:49 AM

Thanks. This is what I meant by using software RAID: I would just use PCI-to-IDE cards, saving money on the RAID controller. Would doing RAID this way be inferior to the 3ware card?
October 10, 2006 3:25:01 AM

For JBOD, no -- Windows can do that just as well as a hardware card. For RAID-0 or RAID-1, it's somewhat slower than a hardware card, but not by much if you're talking about a few disks on a PCI card. (Once you get into 4+ drive RAID-0 arrays on a PCI-X or PCIe card, the hardware cards definitely pull ahead of Windows.)

For RAID-5, though, I wouldn't recommend software RAID at all. The CPU overhead is high, and the transfer rates are low.

But, it's up to you whether you actually need RAID-5. Depends on how important the data is, and what you would have to do if you lost it. Also depends on what you might estimate to be the reliability of your drives. If you have enterprise-class drives with 1M+ hours MTBF, and an enclosure that keeps them cool, then RAID-0 or JBOD might be a low-enough risk to work fine. Previously-used desktop drives in a hot enclosure might not. You'll have to judge for yourself.

For me, when RAID arrays start to go past 2-3 physical disks, it starts to get into an area of risk that I'm not comfortable with unless the data is protected somehow -- either RAID-5, or backed up to another device, or both for mission-critical stuff.

I did a 4-drive RAID-0 one time and nearly lost about 500GB of irreplaceable data. 8O Not gonna do that again.
October 10, 2006 10:37:49 PM

Well, thanks for all your help, SomeJoe. I'm going to purchase a 3ware card and set up the 8 drives in RAID 5, with two 80GB drives in RAID 1 (onboard controller). The 2TB truncation is a small price to pay, I suppose, and who needs more (for personal use) anyway? Now to find a case for the whole thing.
October 11, 2006 1:20:07 AM

Quote:
Dynamic disks are the only way that Windows supports volumes >2TB, and then only with spanning, striping, or RAID-5.


This limit was removed in 2003 server SP1 and XP-64. I'd guess that it's not present in Vista, but that's just a guess at this time.

http://www.microsoft.com/whdc/device/storage/LUN_SP1.ms...
October 21, 2006 3:28:40 PM

I'm just curious... I thought I read somewhere that you can use a common HD for the RAID 5 striping. In other words, he would only need 7 drives as opposed to 8... although now that I'm writing this, isn't RAID 5 a 3-drive array, not 4? A 4-drive array sounds like RAID 6.
October 21, 2006 4:00:34 PM

RAID 5 is N data drives plus 1 drive's worth of parity information, and RAID 6 is N drives with 2 drives' worth of parity information, I believe.

In other words, you can have an N+1 RAID 5 array and an N+2 RAID 6 array, where the +1/+2 is the number of parity drives. I'm not sure what the limit is, but you could have an 8-drive array where 1 drive is parity in a RAID 5 configuration, and you would have (8-1) x drive size of space: 7 x 320 = 2240GB. If it were RAID 6, (8-2) x size = 6 x 320 = 1920GB.
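That N+1 / N+2 formula is easy to sketch (my own illustration, not from the thread):

```python
def usable_gb(n_drives, drive_gb, parity_drives):
    """Usable capacity of an array that dedicates `parity_drives`
    worth of space to parity (RAID 5: 1, RAID 6: 2)."""
    return (n_drives - parity_drives) * drive_gb

print(usable_gb(8, 320, 1))  # RAID 5 across 8x 320GB drives
print(usable_gb(8, 320, 2))  # RAID 6 across the same drives
```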
October 21, 2006 5:44:44 PM

That's it in a nutshell.

For a more detailed explanation you can do worse than have a read through wiki-RAID

For the situation the OP describes, though, do you really need a 2TB volume? Or could you live with 2 or more smaller volumes? The management of such large volumes becomes an issue in its own right.