Donavan25

Distinguished
May 26, 2009
Hey everyone, this is my first post on these forums. So I’ll try to make it a good one!

I have been plagued with problems in the past with backup storage. My first USB external drive crashed four years ago, and my WD MyBook World Edition also crashed last summer. Both times I lost a lot of data that was very important to me. So I have decided to go for a RAID solution, making sure I have enough redundancy to recover from hard drive failures.

This is what I am looking for: I want a storage server that is redundant and can recover from anything short of physical damage. I obviously need gigabit Ethernet, but I also need to be able to read from and write to the hard drives fast enough to make use of it! I need it to be very easy to manage, with plenty of control over who accesses what data in the array, as well as security to keep unknown people out. I need at least a couple of terabytes of storage with room to expand in the future. I also want to do all of this without breaking the bank.

My hardware choices look like this:

Motherboard:
http://www.newegg.com/Product/Product.aspx?Item=N82E16813182151
I don't feel that I need a server motherboard, because the server will mostly be idling and the RAID controller will be handling the array. I like the Supermicro board because they also make server motherboards, and this desktop board looks perfect for a low-end server.

CPU:
http://www.newegg.com/Product/Product.aspx?Item=N82E16819115206
I don't think I need much CPU, simply because the server will probably be idling most of the time.

RAM:
http://www.newegg.com/Product/Product.aspx?Item=N82E16820231098
Cheap, good reviews!

Case:
http://www.newegg.com/Product/Product.aspx?Item=N82E16811165028
I think that this case would be perfect because of the hot-swappable drive bays.

RAID Controller:
http://www.newegg.com/Product/Product.aspx?Item=N82E16816131004
I wanted a RAID controller that supports RAID 6. Larger hard drives do tend to fail quite a bit, especially under increased stress. With RAID 5, I'm afraid that after one drive fails (and is replaced), the extra stress of the rebuild would cause a second drive to fail before the array finishes rebuilding the first one. I know I lose another whole drive's worth of capacity to the extra redundancy, but that seems like a fair trade because I never want this array to fail.
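
Just to sanity-check my own reasoning, here is a quick back-of-the-envelope comparison in Python (the 2TB drive size is only a placeholder for the math) of usable capacity and fault tolerance for RAID 5 versus RAID 6:

```python
# Rough RAID capacity / fault-tolerance comparison; drive size is a placeholder.
def raid_summary(num_drives, drive_tb, parity_drives):
    usable_tb = (num_drives - parity_drives) * drive_tb
    return usable_tb, parity_drives  # usable capacity, failures tolerated without data loss

for n in (4, 6, 8):
    r5_tb, r5_fail = raid_summary(n, 2.0, 1)  # RAID 5: one drive's worth of parity
    r6_tb, r6_fail = raid_summary(n, 2.0, 2)  # RAID 6: two drives' worth of parity
    print(f"{n} drives -> RAID 5: {r5_tb:.0f} TB usable, survives {r5_fail} failure | "
          f"RAID 6: {r6_tb:.0f} TB usable, survives {r6_fail} failures")
```

So with four 2TB drives I would start at 4 TB usable under RAID 6 instead of 6 TB under RAID 5, in exchange for surviving two drive failures instead of one.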

Hard Drives:
http://www.newegg.com/Product/Product.aspx?Item=N82E16822136284
I chose these drives because they are cheap and have great reviews. I would like to use 2TB drives; however, I don't know whether I could start off with four 2TB drives and then add more to expand the array without losing data and without rebuilding it.

Also, there is an optional battery backup unit that can be purchased for the RAID card. Does anyone recommend this? My server wouldn't be protected by a UPS, so I think the battery backup unit would be a good investment!

As for my OS, I was thinking of going with Windows Server 2003 64-bit or 2008 64-bit, simply because I would also like this machine to be my domain controller as well as a RADIUS server. I would install the OS on a drive attached directly to the motherboard and not run the operating system off the array.

Random thoughts:
Hypothetically, if my RAID controller were to fail, could I just hook up another one without having to rebuild my array? What happens if the replacement RAID card isn't made by the same company? How does that work? What would happen to my data if my controller failed?

Hypothetically, could I start with just four drives in the array and then expand it by adding a hard drive, without rebuilding the array and losing data? How could this be done?

I guess what I’m looking for are suggestions. What could I do differently that would be better for my situation? Does everything look ok? Should I go ahead and dive into this project?
 

jrst

Distinguished
May 8, 2009
1. Either RAID-5 with hot spare or RAID-6 should be fine for <= 8 drives. Any decent RAID controller should support hot-sparing (regardless of RAID level).

2. Get a RAID controller with on-board BBU. I'd still recommend a UPS to provide for graceful shutdown, but if it's only one or the other, I'd go with the on-board BBU.

That said, a UPS only needs to keep the system up long enough for it to shut down properly. For what you're looking at, a UPS hold-up time of a couple minutes should be sufficient, which can be had pretty cheap. (However, a UPS won't save you in the event of an unexpected shutdown/crash/etc., which is why an on-board BBU is still desirable.)
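
To put rough numbers on "a couple minutes" (quick Python arithmetic; the load and the usable battery energy are assumptions, so check your actual draw and the vendor's runtime chart):

```python
# Rough UPS hold-up estimate: runtime ~= usable battery energy / load.
# Both figures below are assumptions for illustration only.
LOAD_WATTS = 120   # idling file server with a handful of drives (assumption)
USABLE_WH = 40     # small consumer UPS, after inverter losses and depth-of-discharge (assumption)

runtime_min = USABLE_WH / LOAD_WATTS * 60
print(f"~{runtime_min:.0f} minutes of hold-up at {LOAD_WATTS} W")  # ample time for a clean shutdown
```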

3. If the RAID controller fails, you'll need a similar (possibly identical) model, and at minimum one from the same vendor. (I keep a spare because of that.) The vendor should be able to tell you what combination will work. Some also offer an advanced replacement option. (They ship you a replacement immediately on notification of a failure, and you send the old/defective unit back to them later.)

4. Any decent controller will support on-line capacity expansion (OCE). However, it can take quite a bit of time for large arrays... but unless you're able to back up everything and reload it, it's better than nothing.

5. The Areca controller you list has all of the required features. (Caveat: I haven't used Areca, but they appear to be competitive).

6. You'd be better off using RAID-qualified disks, such as the WD and Seagate enterprise drives; otherwise you're likely to suffer spurious dropouts/rebuilds. The RAID controller vendor should have a qualification list for both drives and backplanes (the latter shouldn't matter since the case is direct-connect, not really a backplane).

7. The performance limits aren't likely to be the drives, but the OS, network stack, protocol handlers, and network. With 4x RAID-6, the drives may be a limiting factor for writes, but not reads unless you're doing a lot of random IO; when you get to >= 6x drives, they'll likely outstrip the other factors. If you're planning on growing the array in the future, I'd go for lower power and more efficient drives such as the WD-RE3/4-GP's.
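
To put rough numbers on that (Python sketch; the ~100 MB/s per-drive figure and the GbE overhead are assumptions based on typical 7200 rpm SATA drives):

```python
# Back-of-the-envelope: aggregate drive throughput vs. what gigabit Ethernet can carry.
GBE_USABLE_MBPS = 110    # ~125 MB/s raw line rate minus protocol overhead (assumption)
PER_DRIVE_MBPS = 100     # assumed sequential throughput of one 7200 rpm SATA drive

for drives in (4, 6, 8):
    seq_read = drives * PER_DRIVE_MBPS          # RAID-6 reads can stripe across all members
    seq_write = (drives - 2) * PER_DRIVE_MBPS   # writes limited to the data drives (ignores parity overhead)
    print(f"{drives} drives: ~{seq_read} MB/s read / ~{seq_write} MB/s write "
          f"vs ~{GBE_USABLE_MBPS} MB/s usable GbE")
```

Even the 4-drive case saturates gigabit for sequential transfers, which is why the network and protocol stack usually become the bottleneck first.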

8. As for the OS... whatever you're most comfortable with, as you're likely going to be getting intimate with it. If you can make good use of Windows Server's other features, and your primary requirement is CIFS/SMB support, by all means... If you're looking for something that's more of a pure SAN/NAS and are a bit more adventurous, you might look at OpenFiler and FreeNAS, or Solaris w/ZFS (great performance, eliminates the need for a RAID card but you give up OCE).

9. Quite a few case options out there that can hold 8+ drives. The case you list is OK; primarily depends on whether you prefer pedestal or rack-mount.

You might also want to read this thread: http://www.tomshardware.com/forum/250101-32-raid-works-matter-what
 

sub mesa

Distinguished
If you would like to know the ZFS-route using FreeBSD, this thread may be of interest to you:
http://www.tomshardware.co.uk/forum/page-250073_14_0.html

ZFS can offer you redundancy at the filesystem level instead of at the RAID level. This opens up whole new possibilities for reliable storage. Check it out, but be aware that the high-performance kernel-level implementations are only available in FreeBSD and OpenSolaris. Wouldn't Samba and other alternatives allow you to run a non-Windows server?

If you want Windows, you're pretty much obligated to use a hardware RAID controller. With ZFS you should not use hardware RAID; instead, let ZFS manage the disks with its internal RAID engine. ZFS can only work reliably if it is close enough to the disks that BIO flush commands are supported and honoured.
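
As a rough sketch of how little setup the ZFS route takes (Python is only used here to wrap the shell commands; the pool name "tank" and the device names ada0-ada3 are placeholders, and zpool create wipes whatever is on those disks):

```python
# Minimal sketch: build a double-parity (raidz2) pool on FreeBSD and check it.
# Pool/dataset names and device names are placeholders; 'zpool create' wipes those disks.
import subprocess

def run(cmd):
    print("#", " ".join(cmd))
    subprocess.run(cmd, check=True)

run(["zpool", "create", "tank", "raidz2", "ada0", "ada1", "ada2", "ada3"])  # raidz2 is roughly RAID-6
run(["zfs", "create", "tank/share"])  # a dataset you could then export over Samba or NFS
run(["zpool", "status", "tank"])      # verify the pool layout and member health
```

Growing the pool later means adding another raidz2 vdev rather than expanding the existing one, which is one difference from the OCE you get with a hardware controller.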
 

jrst

Distinguished
May 8, 2009
If you're looking at a ZFS-based solution today, I'd strongly recommend, in order of preference: (1) Solaris; (2) OpenSolaris; (3) BSD. If you want a bullet-proof solution for ZFS with very good CIFS/SMB, NFS and iSCSI support, then (1), but pay close attention to the hardware qualification list (do you want something that runs, and runs, and runs... or do you want to screw around tweaking it?). If you're a bit more adventurous and are willing to go off the HQL, go with (2), but be prepared to spend more time getting it stable, depending on your hardware. If you're willing to take a walk on the wild side, you know what you're doing, and you're familiar with BSD, then choose (3) (ZFS on BSD is still a wart IMHO, but give it 6-12 months and it should be a contender).
 

sub mesa

Distinguished
Why would you recommend OpenSolaris over BSD? Both have the same ZFS version and code, and configuring FreeBSD is easy compared to Solaris. Any bugs that exist in FreeBSD's ZFS code should also exist in the OpenSolaris code.

Older versions of FreeBSD implement ZFS filesystem version 6, while 7-STABLE and 8-CURRENT implement ZFS filesystem version 13, the same as offered on OpenSolaris. Only the zfs-crypt project is not ported yet, but it will be once it is labeled stable and targeted for integration.

Aside from that, I can think of no reason to rank the operating systems in order of preference, given that they share the very same code where ZFS is concerned.
 

jrst

Distinguished
May 8, 2009
sub mesa -- It's not the ZFS core code per se, it's the integration with the OS, how long it has been in use with that OS, and the size of the community using it for daily production. ZFS is still the "new kid" in FreeBSD. I hope and expect that 8.0 will make it an easier choice. (As to which OS is easier to configure... that depends on what you're doing and what you're comfortable with, but that's a very different and highly subjective discussion.)
 

sub mesa

Distinguished
ZFS is a new kid on Solaris as well, and the only remaining bug is the kernel memory depletion bug, which exists just as surely on Solaris. This bug is triggered by high load, not by disk size or capacity. It is unlikely that home users would be affected on a properly configured system running ZFS on FreeBSD.

But I cannot identify the FreeBSD implementation as being superior or inferior to that of Solaris. The number of people using a piece of technology is indeed a strong measure, but how can you measure that for an open-source operating system? The only real indication of its usage is the number of ZFS-related mailing list items. And they are many. Everyone wants to try ZFS.

You have made a good point that ZFS may not be mature enough for all issues to be resolved or even identified, and especially with a filesystem it's not a bad idea to be somewhat conservative. Yet the changes ZFS brings are long overdue, and the difference from traditional filesystems is quite large. ZFS also adds reliability where other filesystems do not: metadata checksums, storage of metadata on multiple disks, and so on. In theory, it is the kickass filesystem at the moment.

But if you are going to trust ZFS, should you trust OpenSolaris more than FreeBSD because it has perhaps run ZFS a few months longer? I'm not convinced by this line of thinking: for one, I have quite some confidence in the FreeBSD community; second, I think (and know) that many people have tested it; and third, I have been using ZFS on FreeBSD for almost one and a half years now.

Either way, if you want to follow the conservative route, you should avoid ZFS or any other new technology when it comes to storage. For ultimate protection, a backup of your files should live on another type of filesystem, in another computer. When we talk about reliability we immediately demand the highest standards, as for mission-critical business computers; for a home user the situation is obviously different, and any investment has to be justified by clear benefits.