Flyingpolok

Hey all, first post here, hope it goes well.

I've finally got the cash, and the high-priority, top-secret security clearance from my wife, to spend on a new PC. I'm pretty excited because I'll be able to play all these fancy new games on "high" settings, something I've NEVER been able to do. Since graduating college and starting my IT career, getting married and buying a house kind of killed my free time and money, so I let my personal passion for geeking out on PCs take a back burner. Don't get me wrong, I geek out on PCs/servers/networks at work every day, but it ain't the same when it's for someone else.

Anyways, on to the system:

After careful research on all the components, this is (basically) what I've chosen:

Core 2 Duo E6600
NVIDIA 8800 GTS 640 MB GDDR3
4 GB OCZ DDR2-800
XFX NVIDIA nForce 680i SLI motherboard
700 W modular PSU
Vista Ultimate

4 x WD 160 GB SATA 3 Gb/s <-- here's where my problem lies.


I chose 4 disks because I wanted a RAID configuration, either RAID 5 or RAID 0+1. I've personally used RAID 0, RAID 1, RAID 5 and RAID 10, but since these were mostly on clients' servers I couldn't really "play" with them.

I'm having the worst time trying to decide whether I want 480 GB in a RAID 5 OR 320 GB in a RAID 0+1.

I like almost having half a TB of data storage; it's comforting. Though I'm sure at some point I will do large writes, totally killing the performance of the RAID 5 and making me wish I had a RAID 0+1.

Is 320 GB really enough for 4-5 years of use with this PC, or at some point will I have to back the data up, break the array and add new disks?
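
For what it's worth, here's the back-of-the-envelope math behind those two numbers (a rough Python sketch using my 4 x 160 GB drives):

```python
# Quick usable-capacity math for 4 x 160 GB drives.
# Assumes the classic layouts: RAID 5 gives up one drive's worth of space to parity,
# RAID 0+1 / RAID 10 mirror everything, so half the raw space is usable.
drives = 4
size_gb = 160

raid5_usable = (drives - 1) * size_gb      # 3 data + 1 parity -> 480 GB
raid01_usable = (drives // 2) * size_gb    # mirrored halves   -> 320 GB

print(f"RAID 5:   {raid5_usable} GB usable")
print(f"RAID 0+1: {raid01_usable} GB usable")
```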

What do you guys/gals think?
 

Wylkell

What RAID card are you going to be using? RAID 10 is sort of the ultimate solution, but if you don't have a high-end RAID card, I don't think you're going to see the improvements you'd normally see on your (probably SCSI-based) servers every day.
 

occdavid

But not factoring in a controller, his question still stands: RAID 5 or RAID 0+1? But first we need to decide if you're talking about 0+1 or 1+0.
RAID 0, then RAID 1: Divide the four disks into two sets of two. Turn each set into a RAID 0 array of two disks, then mirror the two arrays. (Sometimes called a "mirror of stripes.")
RAID 1, then RAID 0: Divide the four disks into two sets of two. Turn each set into a RAID 1 array, then stripe across the two mirrored sets. (A "stripe of mirrors.")
Generally speaking, RAID 10 is better than RAID 0+1 (quick sketch of why below).
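Here's a quick way to see the difference with just your four drives (a rough Python sketch; the disk labels are made up and this isn't modeling any specific controller):

```python
from itertools import combinations

# Four disks: A, B, C, D.
# RAID 10 ("stripe of mirrors"): mirrored pairs (A,B) and (C,D), striped together.
# RAID 0+1 ("mirror of stripes"): stripe sets (A,B) and (C,D), mirrored against each other.

def raid10_survives(failed):
    # Survives as long as no mirrored pair loses BOTH of its drives.
    return not ({"A", "B"} <= failed or {"C", "D"} <= failed)

def raid01_survives(failed):
    # Survives only if at least one whole stripe set is still intact.
    return not ({"A", "B"} & failed) or not ({"C", "D"} & failed)

for failed in combinations("ABCD", 2):
    f = set(failed)
    print(f, "RAID 10:", "ok" if raid10_survives(f) else "dead",
             "| RAID 0+1:", "ok" if raid01_survives(f) else "dead")
# RAID 10 survives 4 of the 6 possible two-drive failures; RAID 0+1 only 2.
```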
I will assume you are talking about using the 680i motherboard's ability to do RAID 5 or 10... in which case, I would go RAID 10.
On a side note: if budget is a concern (as it must be if you had to get approval from the boss (wife)), why not cut back a little in another area? Say, start with 2 GB of RAM, add 2 more later after you save up, and go with larger HDs, say 320s. The egg has the Seagate ST3320620NS for $90.00. What's that, $40-45 more than the 160s you had picked out? $160-180 more to come up with.
 

Flyingpolok

It was RAID 0+1. Something interesting about the 680i southbridge is that its throughput, even for just a RAID 0, maxes out at around 110 MB/s, whereas Intel's 8-series chipsets can hit 300 MB/s. There's an article on this somewhere on Tom's Hardware.

Basically, to get the most bang for my buck I chose the RAID 0+1. I already had the hardware ordered, so no turning back.


One other side note: I ordered Vista Ultimate 32-bit with 4 GB of RAM... The system only sees 2.8 GB because of 32-bit limitations, DOH! The only reason I went with 32-bit was because I thought software compatibility (games) would be an issue.
 

belvdr

Generally speaking RAID 10 is better than RAID 0+1.

Why would you think that? The speed to mirror is reduced when a single drive is the target versus a RAID 0 array. That's one of the reasons RAID 10 is found on many cheaper controllers (like the integrated Smart Array controllers on HP, such as the 5i and 6i), versus RAID 0+1, which can be found on the EVAs and such.
 

occdavid

It has to do with redundancy. RAID 10 can survive more failed hard drives than RAID 0+1. Of course, we're talking more than 4 hard drives here. Say 10 drives: RAID 10 would be 5 mirrored sets of 2 drives, so you can have 5 drives fail at one time as long as no two of them are in the same set. With RAID 0+1 you would have 2 sets of 5 drives in RAID 0, so if any drive fails in set A you lose that whole stripe set (you can have 5 drives fail, but they all have to be in the same set). You can still run on set B, but if you lose a drive in set A and a drive in set B (only 2 drives) at the same time - kiss it all goodbye. :cry: (Quick enumeration below to back that up.)
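
To put numbers on it, here's a quick Python sketch (the drive grouping is just the example above, not any particular controller's layout):

```python
from itertools import combinations

drives = list(range(10))
mirror_pairs = [(0, 1), (2, 3), (4, 5), (6, 7), (8, 9)]   # RAID 10: 5 mirrored pairs
stripe_sets  = [set(range(0, 5)), set(range(5, 10))]      # RAID 0+1: 2 stripe sets of 5

def raid10_ok(failed):
    # Data survives unless some mirrored pair loses both of its members.
    return all(not set(pair) <= failed for pair in mirror_pairs)

def raid01_ok(failed):
    # Data survives only while at least one stripe set is completely intact.
    return any(not (s & failed) for s in stripe_sets)

pairs = list(combinations(drives, 2))
r10 = sum(raid10_ok(set(p)) for p in pairs)
r01 = sum(raid01_ok(set(p)) for p in pairs)
print(f"Two-drive failures survived: RAID 10 {r10}/{len(pairs)}, RAID 0+1 {r01}/{len(pairs)}")
# -> RAID 10 survives 40 of 45 combinations, RAID 0+1 only 20 of 45.
```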
 

cpburns

My thoughts are to get an Areca RAID controller. See if you can find one for PCI Express. They have built-in hardware XOR engines. It would run you about 300-600 dollars, but it would also give you throughput of around 200-400 MB/s in RAID 5. You can even add drives as needed and rebuild the array to accommodate the larger size, and you can have a hot spare in case one drive goes down. I figure I'm preaching to the choir since you're already in IT, but with an Areca card you can achieve excellent data bandwidth in RAID 5 and have great features too.
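
In case anyone's wondering what that XOR engine actually does: RAID 5 parity is just a bytewise XOR across the data chunks in a stripe, so a dead drive's chunk can be rebuilt from the survivors. A toy sketch in Python (made-up byte values, not how any particular controller lays things out):

```python
# Toy RAID 5 parity demo: 3 data chunks + 1 parity chunk per stripe.
d0 = bytes([0x11, 0x22, 0x33, 0x44])
d1 = bytes([0xA0, 0xB0, 0xC0, 0xD0])
d2 = bytes([0x05, 0x06, 0x07, 0x08])

# Parity is the XOR of all data chunks (this is the work a hardware XOR engine offloads).
parity = bytes(a ^ b ^ c for a, b, c in zip(d0, d1, d2))

# If the drive holding d1 dies, XOR the surviving chunks with the parity to rebuild it.
rebuilt_d1 = bytes(a ^ c ^ p for a, c, p in zip(d0, d2, parity))
assert rebuilt_d1 == d1
print("rebuilt chunk:", rebuilt_d1.hex())
```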
 

occdavid

I re-read your post and was confused by "the speed to mirror is reduced if a single drive is the target versus a RAID 0 array."

You will not see a performance increase in RAID 0+1 over RAID 10. They both perform very well. RAID 10 has an advantage in fault tolerance and rebuild performance.
Link
 

belvdr

I re-read your post and was confused by "the speed to mirror is reduced if a single drive is the target versus a RAID 0 array."

You will not see a performance increase in RAID 0+1 over RAID 10. They both perform very well. RAID 10 has an advantage in fault tolerance and rebuild performance.
Link

You'll mostly see this during rebuilding, as your article states. Additionally, in your example of 10 drives, either way, if you lose two drives that are in the same array (whether it is 10 or 0+1), you lose all data. It's a matter of luck, I suppose.

EDIT: After reading further, RAID 10 stands.
 

TeraMedia

If you go with RAID 5 with write-back cache enabled, you will likely get reasonable write performance, but your array will need to rebuild any time your system has an abnormal shutdown (including just holding the power button). This can be annoying if you have these from time to time, as rebuilds will take a few hours, even with your moderate disk size.

If you go with RAID 5 with write-back cache disabled, you will get very slow write performance. Don't take this lightly; any application that uses temporary disk space will run slowly, and your pagefile will run slowly unless you can use that new Vista feature to put it on a USB key (ReadyBoost, is it? Not sure OTTOMH).
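
The reason writes get so slow without the cache is the RAID 5 read-modify-write cycle: each small write turns into roughly four disk operations (read old data, read old parity, write new data, write new parity). Rough sketch below; the IOPS figure is just an assumed number for a 7200 RPM SATA drive, not a measurement.

```python
# Rough RAID 5 small-write penalty estimate (assumed numbers, not benchmarks).
drive_iops = 80            # assumed random IOPS for one 7200 RPM SATA drive
drives = 4
write_penalty = 4          # read old data + read old parity + write data + write parity

raw_iops = drive_iops * drives
raid5_write_iops = raw_iops / write_penalty
print(f"~{raid5_write_iops:.0f} small random writes/sec across the whole array")
# Compare: a mirrored setup pays roughly a 2x penalty (two copies written) instead of 4x.
```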

Please check the Tom's charts for disk drive performance. Some of the bigger drives provide better performance than the smaller, older ones as a result of more platters and more bits per square inch. You might be better off buying one or two larger disks now in, say, RAID 1 (if you need the redundancy), and adding a RAID 0 later if you want more space or higher performance.
 

belvdr

If you go with RAID 5 with write-back-cache enabled, you will likely get reasonable write performance, but your array will need to rebuild any time your system has an abnormal shutdown (including just holding the power button).

Not true. Rebuilding happens at the physical drive layer. An abnormal shutdown will, at most, affect the logical filesystem (i.e., file corruption) if you have write-back cache enabled without a battery backup, but it will not affect the physical array unless a drive failure or something similar occurs. Some array controllers will disable the accelerator functionality if the battery fails.
 

TeraMedia

I'm going by my own experience with Intel's brand of onboard RAID 5. If an abnormal shutdown occurs with WBC enabled on the array, the array always rebuilds on restart, even if the machine was sitting idle at the time of shutdown. If I turn off WBC, the array only rebuilds if it was in the process of writing to disk at the time of shutdown.

I have no experience with the 680i's onboard RAID 5 support, but I can only assume it behaves similarly, as it does not appear to have battery-backed WBC. So if a shutdown occurs midway through a write, it is unlikely that all drives in the array (including the parity drive for a given slice of data) will have finished updating.

Perhaps the Intel and NV drivers work differently w.r.t. WBC configuration and behavior. Not having the hardware, I couldn't tell you. But regardless, I believe it is worth a fair warning to the OP to be aware of this and consider its implications in his design. I am not speaking about add-in drive controller cards; with those I would definitely recommend something with battery backup and WBC capabilities to mitigate this type of problem. But that appears to be outside the OP's budget.