I've just purchased a large rackmount server with 12 hot-swap bays, and I'm debating between storage methods, so I'd like people's feedback on what is best. I'm going to be using SATA drives because of price and size. I don't want RAID 5 because of its single-drive redundancy (chances are with 12 drives in an array there could be 2 that fail), and I haven't found any cards that support RAID 6, so I was debating:
RAID 1+0: Provides good redundancy against drive failures, but with a larger capacity overhead than RAID 5/6. It also requires all drives to be the same size, so when adding drives in the future I can't take advantage of larger capacities.
RAID 5: Provides single-drive redundancy, which is not good enough for me on a large array. But it allows further expansion with any sized drives, future-proofing it somewhat.
RAID 6: Provides two-drive redundancy, which is suitable for me, but no adapters I have found support it. It also allows further expansion with any sized drives, future-proofing it somewhat.
JBOD + disk replication: I haven't had much experience with Server 2003's replication, so I don't know if it is suitable for me.
RAID 1 (pairs) + spanned (not striped): I don't know which controllers support this, but it would work so long as drives I add to the array later can be of larger capacities with their full size utilized (obviously each mirrored pair would have to match in size). I know spanning does not provide a speed boost, but that's fine as I don't need the extra speed of RAID 0 in these circumstances.
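To make the trade-offs above concrete, here's a rough usable-capacity comparison. The twelve 1 TB drives are just assumed numbers for illustration, not anything from the actual build:

```python
# Rough usable-capacity/redundancy comparison for the options above,
# assuming twelve identical 1 TB drives (hypothetical sizes).
N, SIZE = 12, 1.0  # drive count, per-drive capacity in TB

layouts = {
    # name: (usable capacity in TB, failures survivable without data loss)
    "RAID 1+0 (6 mirrored pairs, striped)": (N / 2 * SIZE, "1 per mirror pair"),
    "RAID 5": ((N - 1) * SIZE, "any 1"),
    "RAID 6": ((N - 2) * SIZE, "any 2"),
    "RAID 1 pairs + spanned": (N / 2 * SIZE, "1 per mirror pair"),
    "JBOD (no redundancy)": (N * SIZE, "none"),
}

for name, (usable, survives) in layouts.items():
    print(f"{name}: {usable:.0f} TB usable, survives {survives}")
```

So the mirror-based options cost you half the raw capacity, versus one or two drives' worth for RAID 5/6.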
Sooo.....That's the ideas I came up with, if you have any more or have opinions about the above I'd love to hear it!
What are you going to be storing on this RAID array? If it's write intensive, and especially if it's a write-intensive database, RAID 5 or 6 is a very bad idea because of the poor write performance of these layouts.
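The poor write performance comes from the parity read-modify-write cycle: a small random write on RAID 5 costs roughly four back-end I/Os (read old data, read old parity, write both back), and RAID 6 roughly six. A back-of-envelope sketch, using a made-up per-drive IOPS figure:

```python
# Effect of the parity write penalty on small random writes.
# The per-drive IOPS value is a hypothetical ballpark for a SATA drive.
DRIVES = 12
IOPS_PER_DRIVE = 100  # assumed, for illustration only

# Back-end I/Os consumed by one small host write:
write_penalty = {"RAID 0/JBOD": 1, "RAID 1/1+0": 2, "RAID 5": 4, "RAID 6": 6}

for layout, cost in write_penalty.items():
    host_iops = DRIVES * IOPS_PER_DRIVE / cost
    print(f"{layout}: ~{host_iops:.0f} small random write IOPS")
```

Full-stripe sequential writes avoid most of this penalty, which is why parity RAID is far less painful for archive workloads than for databases.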
Configuring a hot spare will reduce the risk of data loss if the array needs to be operated for periods of time when it's unattended: if a drive fails, the hot spare is swapped into the RAID set and the data rebuilt immediately, instead of waiting for someone to come in and replace the drive.
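You can put a rough number on that exposure window. This sketch estimates the chance of a second drive failing while a RAID 5 rebuild is running; the failure rate and rebuild time are assumptions, not measured figures:

```python
# Back-of-envelope risk of a second failure during a RAID 5 rebuild.
# AFR and rebuild time are assumed values for illustration.
AFR = 0.03           # annual failure rate per drive (assumed)
REBUILD_HOURS = 24   # rebuild window; without a hot spare, add the
                     # hours the array sits degraded waiting for a human
REMAINING = 11       # surviving drives in a 12-drive array

hourly_rate = AFR / (365 * 24)
p_one = 1 - (1 - hourly_rate) ** REBUILD_HOURS   # one given drive fails
p_any = 1 - (1 - p_one) ** REMAINING             # any surviving drive fails

print(f"P(second failure during rebuild) ≈ {p_any:.4%}")
```

The risk scales roughly linearly with the exposure time, so a hot spare that cuts days of degraded operation down to a one-day rebuild cuts the data-loss risk by about the same factor.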
Be sure that you test your replacement and recovery procedures BEFORE you commit live data to the RAID sets. Every RAID system has little quirks and if you wait until a drive failure to discover them then your data is in peril.
Don't forget about backup. Even with redundancy your data is not safe unless you back it up to external media. Good backup practices are to have at least TWO offline copies of your data, at least ONE of which is stored offsite.
It's definitely not write intensive; it's more of a data archive/vault, so RAID 6 would be fine if it weren't so expensive (I finally found a few cards that support it, but they are a little out of my price range). An offsite backup for the data isn't really practical for various reasons, so I'd rather have an onsite option. The data isn't critical, and losing some of it wouldn't cause a lot of problems other than being a pain in my @$$ recreating it, hence the reason a mirror etc. would be handy (and spanned together so I don't end up with 10 separate drives).
While looking at RAID, have you looked at the ZFS filesystem as well? What is your experience with operating systems besides Windows?
While you're counting on redundancy, this alone may not be enough to keep your data safe. ZFS can store files more than once, as it already does with its metadata, keeping the copies on different disks. This means ZFS protects you against more dangers than RAID alone can, and it's very flexible and nice to work with.
If you have serious storage plans, ZFS should at least be considered. If you're new to ZFS, its Wikipedia page is a good start: http://en.wikipedia.org/wiki/ZFS
Note that ZFS is able to do both RAID5 (RAID-Z) and RAID6 (RAID-Z2), so it may be a real alternative for you. Should you implement this plan, I do recommend you keep the backup on a different filesystem, though, as ZFS still has some maturing to do. Still, it's a great piece of technology, and everyone interested in storage should check it out.
Personally, I'm waiting for FreeBSD 8.0 to be released with the latest and greatest version of ZFS; that's what I'll be upgrading to. ZFS is not available on Windows, though, so it may not be an option for you anyway. It is available on FreeBSD, OpenSolaris, Linux, and derivatives like FreeNAS. With ZFS being 100% software, there is no need to buy an expensive controller, as the software can do a better job than most hardware RAID can.
8-port SAS/SATA cards with RAID 6 support are around $400 new and ~$250 used, while last-generation RAID 5E cards are ~$150 used on eBay. All cards are PCIe, of course.
I'm in a similar situation to you, actually. I need Windows Server but still want the superiority of ZFS and RAID-Z which sub mesa mentioned. Something I've wanted to try is virtualising FreeNAS (or any other distro that supports it) on Windows. Right now I'm using a Dell PERC 5/i (supports RAID 5E), which can be found for ~$140 once the BBU and cables are factored in.
The drawback with RAID-Z is that once the array has been created, there's no way to expand it. So if you need more space, you either destroy the current array (backing the data up somewhere else first) or buy another set of larger-capacity disks to replace (or supplement) the current ones.