I just installed a 4-drive software RAID5 setup using a 4-port SYBA SATA card and the software RAID built into Windows Server 2003.
1) If one of the drives fails, how do I match up the drive that Windows is reporting to the physical drive? Will drive 3 in Windows match up to drive 3 on the SYBA card?
2) If one of the drives does fail and I accidentally unplug one of the remaining GOOD drives, am I completely out of luck (since that would take it down to 2 drives), or will plugging it back in work?
It would be easier if there were LEDs on the drives, because I could look for the drive with less (or no) LED activity.
This is a good question all users should ask and test to learn for themselves. In most cases it should tell you which one. If you have a CP that was used for setup, it should show which one dropped out. Some of the systems I work on only use a drive ID; they don't name the drives 1-4 or 0-3, but they have indicators on the front panel showing which drive it is.
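One low-tech way to make the mapping unambiguous is to record each drive's serial number and physical port when you first build the array (e.g. copied from the drive labels before installation). Then when Windows reports a disk number as failed, you look it up instead of guessing. A minimal Python sketch of that lookup; every serial number and port label here is a made-up placeholder, not real data:

```python
# Hypothetical map recorded at install time:
# Windows disk index -> (drive serial number, physical port on the card).
# All values below are placeholders -- substitute your own.
DISK_MAP = {
    0: ("WD-ABC0001", "SYBA port 1"),
    1: ("WD-ABC0002", "SYBA port 2"),
    2: ("WD-ABC0003", "SYBA port 3"),
    3: ("WD-ABC0004", "SYBA port 4"),
}

def locate(failed_index: int) -> str:
    """Return the physical port for the disk number Windows reports."""
    serial, port = DISK_MAP[failed_index]
    return f"disk {failed_index} (serial {serial}) is on {port}"
```

So if Windows flags disk 2, `locate(2)` tells you which cable to pull, with the serial number as a cross-check against the drive label.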
And yes to losing 2 drives. If you lose 2 drives in a 3 disk array you're SOL.
I would recommend physically removing a drive to simulate a drive failure so you will know what to expect. Some systems will recognize the drive as part of the set and rejoin the array; most will require a resync to rebuild the array. I'm in the process of simulating multiple failure modes so I will know what to expect if I lose a drive in my Snap4500. I do not like surprises. Most RAID5 systems allow you to access the array in degraded mode, so you should be able to back up the data if you are not doing scheduled backups. As with all computer systems, install a UPS to protect it against power failures. Most of them allow you to shut down equipment during an extended power loss.
During testing, fail 1 drive then remove a 2nd one to see if all is lost. If your drives are not hot swappable, it may allow you to reinstall and not lose your array. Set up a test procedure that covers all of the ways things can fail, including power failures with writes being made to the array, preferably at the max speed your unit can handle. As with most, if you get a bad superblock and it can not find a good one, you are looking at a major recovery. Test your disaster recovery procedures and see if they work; otherwise it's just theory.
My 4500 takes 5 hrs to resync my 4x400gig array. That's the only major hangup about doing testing. On mine, if I install a clean drive it auto picks it up and starts the process. If I reinsert what I removed it will mount but not resync; I have to manually tell it to repair.
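As a sanity check on that 5-hour figure: a RAID5 rebuild has to regenerate roughly one full drive's worth of data, so you can back out an effective per-drive rate. A quick Python estimate, assuming decimal gigabytes and ignoring any overhead from concurrent use of the array:

```python
# Effective per-drive resync rate for 400 GB rebuilt in 5 hours.
drive_bytes = 400e9          # one 400 GB drive (decimal GB)
resync_seconds = 5 * 3600    # 5 hours
rate_mb_s = drive_bytes / resync_seconds / 1e6
print(f"{rate_mb_s:.1f} MB/s")  # roughly 22 MB/s per drive
```

That's well under what the drives themselves can stream, which is typical: the rebuild is bottlenecked by parity computation and by reading all the surviving drives at once, so longer resyncs on bigger arrays are to be expected.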
I like to know what I have to do beforehand. If I put a spare in my Snap10 expansion unit, it will automatically use it if there is a failure in my array.
Why did you elect to use 3 drives instead of 4 like most?
I think SW RAID arrays are easier to move around than hardware ones. Make an image of your OS HD, trash it, and see what happens. So to answer your question, yes, but you need to test it with your own setup.
The reason I like NAS is that it is totally independent of the OS.