Hello all! This is my first post! Yay me! I did read up on RAID 5 in the forum and didn't find the answers I'm looking for.
I built a Windows Server 2003 rig out of my old main desktop machine. I've updated it via Windows Update to SP2 and beyond, updated the onboard SATA BIOS, and have gotten the RAID 5 container set up. Hardware specs:
Asus A7n8x-e mobo
AMD Athlon XP 2800+ (yeah, it's a 32 bit proc)
2 GB RAM, dual channel
Promise SATAII150TX4 4 port SATA/RAID controller
5x WD Green 1TB drives (2 on onboard, 3 on add-in card)
ATI 9800 pro 256 mb RAM
PC Power and Cooling 500 watt power supply
4x 80mm case fans (2 in 2 out)
Zalman cooler for CPU and another one for the graphics card
misc other stuff like DVD burner, fan controller, etc..
Boot drive is PATA 200 GB (60 and 130 GB partitions)
I picked software RAID so that in the unlikely event that the controller hardware took a dump, I could recover the data. Originally I was going to use Linux, but then I found out Windows could do it, so I used the MSDN AA access my school gives me to get Windows Server 2003 going.
I got the RAID container initialized yesterday around this time (10ish pm EST) and let it format. I never saw a percentage like I normally do on a format, but it's my first time with dynamic disks and I was using VNC, so I let it slide. I estimated it would take ~12 - 15 hours (~3 hours is what it took to format one of the drives by itself). Here it is 24 hours later and I'm at 26% done. Questions follow:
1. Is that normal? Seems a little excessive to me. Processor usage is 20 - 30%, page usage is minimal. I don't remember how long the RAID 5 array in my Dell PowerEdge 2400ish (maybe 2450 or 2500) server took to format (343 GB), but it wasn't that crazy. It does have a server chassis and a server-grade RAID controller, though.
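For what it's worth, extrapolating linearly from the numbers in the post gives a rough answer to "how much longer?" (linear progress is an assumption; format speed isn't always constant):

```python
# Rough extrapolation of total format time from progress so far.
# Figures from the post: 26% done after 24 hours.
elapsed_hours = 24
fraction_done = 0.26

total_hours = elapsed_hours / fraction_done       # estimated total format time
remaining_hours = total_hours - elapsed_hours     # hours still to go

print(f"Estimated total: {total_hours:.0f} h, remaining: {remaining_hours:.0f} h")
```

At this rate the format would take roughly 92 hours total, i.e. almost four days, which is well beyond the original 12 - 15 hour estimate.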
2. For 1 - 5 users, is that processor gonna have enough ass to handle the parity calculations? I have a dual Athlon system I'd like to migrate the array to as soon as I can get it back up and running. It's really just me, and the primary purpose is to store all that data for live access, with backups taking care of the really important stuff. Unfortunately, the dualie isn't running and won't boot, so it's out of the picture for now.
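For a sense of what "parity calculations" actually means: RAID 5 parity is just a byte-wise XOR across the data blocks in a stripe, which is cheap even for an Athlon XP. A toy sketch (the stripe data here is made up for illustration):

```python
# Toy illustration of RAID 5 parity: the parity block is the byte-wise XOR
# of the data blocks in a stripe, so any one lost block can be rebuilt by
# XOR-ing together everything that survives.
def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks."""
    result = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            result[i] ^= b
    return bytes(result)

stripe = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # blocks on 4 data disks
parity = xor_blocks(stripe)                     # written to the 5th disk

# Simulate losing disk 2 and rebuilding it from parity + the other disks:
survivors = [stripe[0], stripe[2], stripe[3], parity]
rebuilt = xor_blocks(survivors)
assert rebuilt == stripe[1]
```

The XOR itself is not where software RAID 5 gets slow; the killer is the read-modify-write cycle on small writes, where the old data and old parity must be read back before the new parity can be written.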
3. To make up for the paltry write speeds (I saw that I can expect 20 - 30 MB/s), can I use another drive as a buffer? If I wanted to place files on the RAID array, is there some way I can automate the process such that I dump them in a special folder on one drive and the OS moves them to the array automagically, at its own pace? What about a whole machine as an intermediary? Or does that get into SAN territory?
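The "drop folder" idea in question 3 is easy to script. Here is a minimal sketch, assuming a hypothetical staging directory on a fast single disk and a target directory on the RAID volume (both paths are made-up examples, not anything from the post):

```python
# Minimal "drop folder" mover: files dumped into a staging directory on a
# fast single disk get moved to the (slower) RAID volume one at a time,
# at the array's own pace. Paths are hypothetical examples.
import shutil
from pathlib import Path

STAGING = Path(r"D:\staging")   # fast scratch disk (hypothetical path)
ARRAY = Path(r"E:\archive")     # the RAID 5 volume (hypothetical path)

def drain_staging(staging: Path, array: Path) -> int:
    """Move every file from staging to the array; return the count moved."""
    moved = 0
    for item in sorted(staging.iterdir()):
        if item.is_file():
            shutil.move(str(item), str(array / item.name))
            moved += 1
    return moved

# Run drain_staging(STAGING, ARRAY) from a scheduled task every few minutes.
```

On Windows you could get the same effect without any scripting by scheduling `robocopy D:\staging E:\archive /MOV`, which moves files and deletes them from the source. Note that either way this only smooths out bursts; sustained ingest is still limited by the array's write speed.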
4. The 3 drives on the Promise add-in card all show up as removable in Windows' Safely Remove Hardware tool. I found this program that seems to fit the bill of redefining the environment so my problem goes away: http://safelyremove.com/index.html If it works I'll use it all over the place, and in this case hide those drives. My concern is that when I go to eject a USB stick I accidentally kill my RAID 5 array. Does Windows ever let go of the array? Would it be possible for me to bork it like that? It's too close for comfort for 4 TB of data.
5. What are some good backup software solutions? I own multiple sites and will use those for off-site backups, but I don't know where to look for software. BTW, free is much appreciated, and open source is cool too.
6. Is encrypting anything on the array a potential disaster? Not the whole drive, but maybe just some folders, using either the built-in encryption or a 3rd-party app like TrueCrypt.
7. I have named the server Alveolate Hollow, but that doesn't gel well. Any better ideas? I think I'll start naming my machines from Ancient Greek mythology, or as some sort of play on words based on what they do (Alveolate Hollow didn't turn out as cool as I had hoped, though).
Comments, suggestions, and answers are welcome!
In the ~30 min it took to write this up, I am now at 27% done formatting. lmao...
Without answering all your questions, let me just tell you that RAID 5 is complex, and the poor-quality Windows-based implementations just don't cut it, in terms of either reliability or performance.
It's unclear to me whether you're using the Promise FastTrak "FakeRAID" drivers or the "hacked" Windows software RAID 5 drivers, but both do an extremely poor job, and I would recommend that you look at alternative solutions now, while you still can.
Maybe you would like to read through this topic, where someone is doing a similar project and I gave information about running a NAS with the ZFS filesystem, which offers a superior implementation that greatly outclasses anything you can find on Windows.
If you want RAID5 on Windows, the only real options are:
Intel ICHxR RAID 5 drivers with 'write caching' enabled (mediocre; danger of corruption)
Hardware RAID with an Areca-class controller (no advanced protection; no corruption prevention; risk of broken arrays)
If you want a truly sexy solution that is 100% free, look at the most advanced storage technology available today: FreeBSD and OpenSolaris offer native kernel-level implementations of the ZFS filesystem. It is the most advanced filesystem to date, incorporating a RAID engine capable of RAID 5 and RAID 6 equivalents with features that are impossible (yes, impossible) to implement in conventional RAID engines.
I can only hope I can save you from poor RAID 5 implementations; it would be a waste of your nice WD Green disks. However, since you've got a good GPU in your system, I would assume it's not a dedicated server but a multi-purpose workstation/gaming/server PC?
If you go the ZFS route, you would want a 64-bit CPU and onboard SATA ports, and you should avoid anything that uses the obsolete PCI interface. So this would mean buying additional hardware, for example:
AMD dualcore ($50)
AMD Socket AM2+ mobo with 6 onboard SATA ($75)
Minimum of 4GB DDR2/DDR3 DRAM ($40 - $60)
Additional casing and power supply ($?)
Going the ZFS route probably means your machine needs to be dedicated to storage, though; a true NAS without the ability to game or use it as a workstation.
I'll gladly answer your questions, but I feel you should think about this first.
I've been poking around with my Windows server to see if I can get it going, and no go. Microsoft wasn't any help, and when I contacted the MSDN AA admin at my school, he said to move to Linux. I LMAO'd at that one, as did the other techs at my office.
So I've started looking into FreeNAS, which is built on FreeBSD. I'm concerned about drivers, though. I think I can find drivers for my mobo SATA and add-on SATA controllers, but I'm not sure they would work with FreeNAS, because FreeBSD is NOT Linux (as I have seen all over their forums).
As far as buying more hardware goes, the whole point was to make use of an old box. (BTW, I wasn't planning on using it as a workstation, even with the 9800 in there.) I have something like 30 old PCs (more accurately, enough parts for that many), so I'd like to turn them into appliances.
So I've still been working on this, and have come to the conclusion that I must buy hardware. I am looking for specific motherboard recommendations for FreeNAS. See my article on the FreeNAS forum here for a full writeup:
Do you want to use FreeNAS with the UFS filesystem, or the newer ZFS filesystem? ZFS requires a special setup, like a 64-bit CPU and at least 2 GB of RAM. If you will not use ZFS but regular RAID, you can get by with much less on an older 32-bit system.
Windows drivers work only on Windows, of course; they are useless for FreeNAS. Fortunately, Linux and FreeBSD have many drivers integrated into the kernel/operating system, so hardware either works without any installation or it doesn't work because it's not supported. So you don't need to pay much attention to drivers.
If you use the onboard SATA connectors, you would need to set them to AHCI in the BIOS, not to RAID mode, since you're using software RAID instead. Please make sure neither the disks nor the (gigabit) network adapter are on a PCI bus. If they are, do not complain about performance! If you want performance, avoid PCI at any cost. PCI Express is fine, though.
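The back-of-the-envelope arithmetic behind that PCI warning, using the theoretical peak numbers for the classic bus (real-world throughput is lower still):

```python
# Why putting disks and a gigabit NIC on plain PCI hurts:
# classic 32-bit / 33 MHz PCI moves 32 bits per clock, shared by every
# device on the bus, while a gigabit NIC alone can want ~125 MB/s.
pci_bus_mb_s = 32 * 33_000_000 / 8 / 1_000_000   # theoretical peak, ~132 MB/s
gigabit_mb_s = 1_000_000_000 / 8 / 1_000_000     # 125 MB/s for the NIC alone

print(f"PCI bus peak: {pci_bus_mb_s:.0f} MB/s (shared)")
print(f"Gigabit NIC wants: {gigabit_mb_s:.0f} MB/s")
```

The NIC alone can nearly saturate the shared bus, leaving almost nothing for the disks, which is why onboard SATA and PCI Express (each with dedicated bandwidth) are the recommendation here.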