I recently built a new workstation for my motion graphics needs. Here are the specs:
Tyan S5396 | 2x Xeon E5430 Quad-Core | 16 Gig FB-DDR2 667 | GeForce 7950 GT KO | Coolermaster Cosmos 1000 | Antec TruePower 1KW | Vista Ultimate 64-bit
The main HD is a Western Digital SATA-II 500GB 7200rpm, and my data drives are 2 Western Digital SATA-II 500GB 7200rpm drives which are striped in Vista (software RAID 0). My ideal config is a hardware RAID 0+1, not a software RAID; unfortunately, I didn't receive all my HDs at the same time and had to build and use the machine without a RAID 0+1, with just a mere software RAID instead.
Now that my project is almost done and my 2 other HDs have come in the mail, I would like to rectify that. My motherboard has an integrated 6-port SATA-II controller (6321ESB?) and an LSI 1068E 8-port SAS/SATA controller.
Which controller would you suggest I use to set up my data drives for the RAID 0+1, now that I have 4 identical WD 500GB SATA-II drives?
Does the controller have to specifically mention that it supports RAID 0+1 to support it? For example, the LSI controller supports "RAID 0, 1, 5, 10 in Windows." What does "in Windows" mean?
The case I'm using has an eSATA port. Can I connect that directly to one of the SATA ports on the motherboard, or is eSATA a different kind of port?
I apologize for the newbish questions; I've tried to look for answers on my own, and what I found was too vague... I need definite answers, if you would.
I'm not going to pick on you, but why do you want a RAID 0+1 and not a RAID 10? I'm not saying they are the same thing; I was just wondering if you know why you want 0+1. Most hardware controllers do not support this type of "nesting".
RAID 0+1 is actually very popular, much more so than RAID 10 is today. I assume he wants the performance of RAID 0 plus the data redundancy of RAID 1. If you have a hardware controller that supports RAID, there will be a setup mode you can enter during bootup; generally you press an F-key to access it. If you see the RAID 0+1 option in there, I suggest setting it up with at least 128K blocks.
This will, of course, erase all data on the drive you installed before you received the other 2 drives. There's really no good way around losing that data.
One thing to consider is also RAID 5. It provides much more data security by allowing any one of the 4 drives to fail while still preserving your data. You would also gain an additional 500GB of space, as opposed to losing 1TB of space with the RAID 0+1 setup. RAID 5, however, will not give you the performance benefit that you're looking for from the striping setup.
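To make the space trade-off concrete, here's a quick back-of-the-envelope sketch in Python (the drive count and size match the 4x 500GB setup discussed above):

```python
# Usable capacity for 4 x 500 GB drives under different RAID levels.
drives = 4
size_gb = 500

raid0 = drives * size_gb               # striping only, no redundancy
raid0_plus_1 = (drives // 2) * size_gb  # half the drives hold mirror copies
raid5 = (drives - 1) * size_gb          # one drive's worth goes to parity

print(f"RAID 0:   {raid0} GB usable")
print(f"RAID 0+1: {raid0_plus_1} GB usable")
print(f"RAID 5:   {raid5} GB usable")
```

That's where the "additional 500GB" comes from: 1500 GB usable with RAID 5 versus 1000 GB with RAID 0+1.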
Cirdecus, RAID 0+1 is a nested array, meaning you take a set of stripes and mirror them. This will produce 1 volume (unless you use some sort of carving). It is VERY rare, and I do not know of a hardware controller that uses this setup. RAID 10 is an option in almost all controllers, and it also gives the performance of RAID 0 with the redundancy of RAID 1. I have about 50 controllers in my test lab. Also remember that RAID 10 is generally faster than RAID 5, although the overhead is high.
Also, there are really easy ways to keep the data when he migrates from what he has now to a new array. I image arrays all the time; it's no big deal.
I do want the performance of striped drives since I'm working with HD-format video files. I read another article comparing the different RAID modes, and RAID 0+1 seems to fit exactly what I need. From what I read, RAID 10 is the same thing as 0+1 but nested the other way around... is one better for specific data demands?
I have 1 TB of external storage, so backing up the data while setting up the array is not an issue.
Can anyone give me an answer regarding the eSATA port?
RAID 0+1 and RAID 10 are both nested RAID levels (i.e. a RAID of RAIDs).
RAID 0+1 is common in inexpensive, motherboard- and software-based controllers. With n drives (n must be even), n/2 drives are used to create a RAID 0 stripe set. The other n/2 drives are then used to create another RAID 0 stripe set. Then, the first RAID 0 is mirrored onto the second RAID 0.
RAID 10 is the mode more commonly used in mid- and high-end RAID controllers. The nesting is reversed from RAID 0+1. With n drives (n must be even), pairs of drives are used to create n/2 RAID 1 mirror sets. Then, a RAID 0 stripe set is created from all the RAID 1s.
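A toy Python sketch of the two nestings described above, for 4 drives (this only models how drives are grouped, not an actual RAID implementation — the drive names are made up for illustration):

```python
def raid_0_plus_1(drives):
    """Split the drives into two halves; each half forms a RAID 0
    stripe set, and the two stripe sets mirror each other."""
    half = len(drives) // 2
    stripe_a, stripe_b = drives[:half], drives[half:]
    return {"mirror of stripes": [stripe_a, stripe_b]}

def raid_10(drives):
    """Pair the drives into RAID 1 mirror sets, then stripe
    across all the mirrors."""
    mirrors = [list(pair) for pair in zip(drives[::2], drives[1::2])]
    return {"stripe of mirrors": mirrors}

disks = ["d0", "d1", "d2", "d3"]
print(raid_0_plus_1(disks))  # two 2-drive stripes, mirrored
print(raid_10(disks))        # two 2-drive mirrors, striped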
Both setups are quite fast (most of the time faster than RAID 5, but the newer high-end RAID controllers have closed this gap considerably, and many can now do RAID 5 very fast as well). RAID 0+1 and 10 are inefficient with storage space. With n drives, you get the equivalent of n/2 drives' worth of storage space.
The difference between RAID 0+1 and RAID 10 is how they perform when a drive dies. In a RAID 0+1, when one drive dies it takes out the entire RAID 0 that it is participating in, leaving only the other RAID 0 of the mirror set operating. Loss of any other operating drive (all of which are in the other RAID 0 set) will kill the array. In RAID 10, loss of a drive only degrades the RAID 1 mirror it's participating in, leaving the only single point of failure as the one other drive in that mirror set. Losses of drives in other mirror sets can be tolerated without losing data.
Further, when a bad drive is replaced in a RAID 0+1, all of the drives in the entire array must participate in the rebuild (because one RAID 0 has to mirror to the other RAID 0). Since a rebuild operation stresses drives, you now have all the drives in the array simultaneously under stress, which increases the probability of another failure. When a drive is replaced in a RAID 10, only the 2 drives of the mirror set are involved in the rebuild, the other mirror sets are unaffected.
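The failure-tolerance difference above can be checked by brute force. A small simulation, assuming a 4-drive array where drives 0/1 and 2/3 form the two sets, counts how many ordered two-drive failure sequences each layout survives:

```python
from itertools import permutations

def survives_0p1(failed):
    # RAID 0+1: stripes {0,1} and {2,3} are mirrored.
    # The array survives while at least one stripe is fully intact.
    stripes = [{0, 1}, {2, 3}]
    return any(s.isdisjoint(failed) for s in stripes)

def survives_10(failed):
    # RAID 10: mirrors {0,1} and {2,3} are striped.
    # The array survives while every mirror has a working drive.
    mirrors = [{0, 1}, {2, 3}]
    return all(not m.issubset(failed) for m in mirrors)

seqs = list(permutations(range(4), 2))  # ordered two-drive failures
print(sum(survives_0p1(set(s)) for s in seqs), "of", len(seqs))  # RAID 0+1
print(sum(survives_10(set(s)) for s in seqs), "of", len(seqs))   # RAID 10
```

RAID 10 survives twice as many two-drive failure sequences (8 of 12 versus 4 of 12), which is exactly the "only single point of failure" argument in numbers.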
RAID 10 is recognized as the superior nested RAID setup compared to RAID 0+1 for these reasons. Be very careful about manufacturers' claims -- there are some manufacturers out there that say their controller does RAID 10 when it's actually doing RAID 0+1.
On your eSATA question, I would avoid using one of the adapter plates that plug into the motherboard's internal SATA ports and give you an eSATA port. The cable length on the adapter plate can put the total cable length to the eSATA device out of spec, and there is no way to know for sure that the internal SATA connector that the adapter plate plugs into is eSATA compliant in terms of voltage thresholds, unless your motherboard manual specifically says so.