
benefits of raid level 5?

August 20, 2006 8:03:46 PM

I recently set up a new rig with raid 0, but one of my friends has been telling me this is a bad idea. He says that I should pick up a third HD and use raid level 5 since it offers some amount of parity. What would be the main benefits in getting another HD and setting up raid 5?

Also, I have a M2N32-SLI deluxe which comes with two different raid controllers, one by Nvidia and one by Silicon Image. Is there any reason why I should use one over the other, or will they both perform about the same?


August 20, 2006 8:29:18 PM

If your computer is for gaming ONLY then RAID 0 is fine; if you have files you want to keep, or need your computer to have uninterrupted operation (work, for instance), then RAID 5 is a better choice. The best choice, though, is probably 2 fast drives in RAID 0 with your programs and OS on them, and one large drive for files, media and the like.

As for your second question, the only real difference is that the Silicon Image controller won't be usable without installing its drivers, while the nVidia controller should be supported by the motherboard's chipset drivers.
August 20, 2006 8:34:55 PM

RAID 0 is not the best way to get faster performance, because if the array breaks down for any reason you lose everything: you have to reinstall Windows and all your games, and any other valuable information is gone.
August 20, 2006 9:34:15 PM

If you have a RAID controller that supports RAID 5, then it's only slightly slower than RAID 0 and it has redundancy, which RAID 0 has none of. If you have anything that would suck to lose, then get another hard drive so you can either back up the important files or build a RAID 5 array, although RAID 5 is more suited to smaller server environments. If you don't have a hardware RAID controller, though, you'll have to use software RAID, and that makes RAID 5 really slow.
August 20, 2006 9:58:53 PM

For good RAID 5 you need a cheap controller that does the XOR calculations on-board; try this one (it doesn't have onboard cache but it's still fast):

http://www.newegg.com/Product/Product.asp?Item=N82E16816115026
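For anyone wondering what those XOR calculations actually buy you, here's a toy Python sketch (made-up data blocks; a real controller does this in hardware across whole stripes and rotates the parity drive) of how RAID 5 rebuilds a dead drive from parity:

# Toy illustration of RAID 5 parity (simplified; real controllers
# stripe data in blocks and rotate which drive holds the parity).
data_a = bytes([0x10, 0x22, 0x7F])   # block stored on drive A
data_b = bytes([0xC3, 0x05, 0x11])   # block stored on drive B

# Parity block stored on drive C is the XOR of the data blocks.
parity = bytes(a ^ b for a, b in zip(data_a, data_b))

# If drive B dies, its block is recoverable from A and the parity.
rebuilt_b = bytes(a ^ p for a, p in zip(data_a, parity))
assert rebuilt_b == data_b
print("rebuilt drive B block:", rebuilt_b.hex())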

RAID 0 in my opinion is just as "breakable" as a single disk. If your one data drive dies you lose everything; if one of the RAID 0 drives dies you lose everything. Same thing either way. And the idea that it doubles your chances of failure is bull. Keep the drives cool and don't jar them and they will easily last the 5 years they're made to. Most easily last 10 if taken care of.

I use RAID 0 on all my "personal" machines (4 of 'em), RAID 1 or 10 on my servers (4 of 'em), and RAID 5 on my two terabyte data servers. Out of the many RAID 0 arrays I've built, I've had maybe one fail, and I don't even remember it, so maybe never (benefit of the doubt here). If you have the cash, do a RAID 10 and have the best of both worlds: you lose the capacity of two drives, but it will be fast and redundant.

And NVRAID is faster than the SiI RAID; check Tom's, they have some reviews somewhere showing it, can't remember where.
August 20, 2006 11:38:50 PM

Quote:
... What would be the main benefits in getting another HD and setting up raid 5?
As others have said, it'll slow things down somewhat, but if any of the 3 or more drives in a RAID 5 fails, you'll still have all your data intact.
Chances of a modern drive failing are pretty low, as others have said. Also, consider that trying to add a 3rd drive to a pre-existing RAID 0 array may be very tricky (I don't know).

Quote:
Also, I have a M2N32-SLI deluxe which comes with two different raid controllers, one by Nvidia and one by Silicon Image. Is there any reason why I should use one over the other, or will they both perform about the same?

The SiI controller on your MB only has 2 SATA ports, so you can't do RAID 5 with it. Also, since more people use the nVidia components, their drivers may have fewer bugs in them.
August 21, 2006 8:26:42 AM

Don't do RAID 5, do RAID 0+1.

A wee bit expensive though, since you need 4 identical disks, but it makes things fast AND fully redundant.

RAID 0 is a risk because if either disk goes, the data on the other is lost as well. However, as has been mentioned, disks don't break very often. DOAs have happened to me twice, but I've *never* had a disk just go sour on me in mid-operation, and I haven't seen a server disk crash in the past few years professionally either. However, if you value your data, go with either RAID 0+1 (MASTAH!) or go with RAID 1 only, which is less expensive (2 disks), but you lose capacity and it's slow...
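To put rough numbers on the trade-offs in this thread, here's a quick back-of-the-envelope Python sketch (hypothetical 250 GB drives, 4 of them, ignoring formatting overhead) of usable capacity vs. what each level can survive:

# Rough usable-capacity comparison for 4 identical drives
# (simplified sketch; assumes hypothetical 250 GB drives).
n, size_gb = 4, 250

layouts = {
    "RAID 0 (stripe)":       (n * size_gb,       "no drive may fail"),
    "RAID 1 (mirror pairs)": (n * size_gb // 2,  "1 drive per mirror may fail"),
    "RAID 5 (parity)":       ((n - 1) * size_gb, "any 1 drive may fail"),
    "RAID 0+1 / 10":         (n * size_gb // 2,  "1 drive per mirror may fail"),
}

for name, (usable, tolerance) in layouts.items():
    print(f"{name:24s} {usable:4d} GB usable, {tolerance}")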
August 21, 2006 9:44:31 AM

Quote:
RAID 0 in my opinion is just as "breakable" as a single disk

Not exactly... it's twice as exposed to failure, because there are 2 drives: 2 drives means 2 chances that one of them fails.
Absolutely no office or enterprise system uses RAID 0; the only levels used in professional systems are RAID 1, RAID 5 and, lately, RAID 50.
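For what it's worth, the standard back-of-the-envelope math behind the "twice as exposed" point looks like this (a sketch assuming independent failures and a made-up 3% annual failure rate per drive):

# Probability that a RAID 0 array loses data in a year, assuming
# independent drive failures (the 3% annual failure rate is made up).
p = 0.03                               # chance a single drive fails this year

single_drive = p
raid0_two_drives = 1 - (1 - p) ** 2    # array dies if EITHER drive dies

print(f"single drive:   {single_drive:.2%}")      # 3.00%
print(f"2-drive RAID 0: {raid0_two_drives:.2%}")  # 5.91%, nearly double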
August 21, 2006 4:46:11 PM

The odds aren't linear. Take into account MTBF and actual failed-drive percentages, and it's way less than double the failure rate. Someone on these boards did some math on this, can't remember who. But the likelihood of a significant failure is so small it's almost pointless to worry about. And there are companies that use RAID 0: video editing workstations, for example. A single config can easily be backed up, and if the array fails you just reimage it. They don't store data on the workstations if they do it right. And the speed really helps.
August 21, 2006 4:55:11 PM

Anyone know what the CPU overhead is with most onboard SATA controllers that support RAID 5/0+1/10? I've used RAID 1 setups before, but with my latest rig I'm seriously considering adding two more WD RE 2500YS drives, and I'm wondering if I shouldn't also consider getting a RAID controller card to go with them (for a game server).
August 21, 2006 5:32:31 PM

Quote:

RAID 0 in my opinion is just as "breakable" as a single disk.


RAID 0 is twice as breakable in my mind, because if one of the disks goes down, then you've lost the data on both drives =)
August 21, 2006 5:34:20 PM

My NVRAID with a RAID 0 is between 3-9% CPU overhead

My SiI was around 11-15%

My RocketRAID 2320 with RAID 5 is around 1-5%

All on Athlon 64 3700+'s
August 21, 2006 5:35:07 PM

True, but if your ONE data drive dies, same thing. Gotta think outside the box :) 
August 21, 2006 6:06:55 PM

Well, MTBF is perhaps one of the most misunderstood concepts in the field. Its ONLY use is in comparing general drive quality, i.e. a higher MTBF generally means a better drive. It has very little value in determining when a drive will fail. Basically a manufacturer takes X number of drives and tests them for Y hours, so the MTBF works out to roughly X times Y divided by the number of failures. As with all probabilities, sample size is crucial, and most companies do not do a good job due to time constraints. So if a drive line had an MTBF of 100,000 hrs, you'd expect one drive out of 1,000 to fail within 100 hrs.

Some drive lines like the Deskstar (Deathstar) had very high rates of failure; I personally have replaced over a dozen of them in the last 4 years. Although the technology has improved greatly, today's drives have higher spin rates, generate more heat and have reached the practical limits of data density (I'm talking about the most commonly used drives), so the risk of data loss is just as high if not higher. In RAID 0, if one drive fails, all is lost, and the more drives in the array, the higher the chance that one will fail. Whether you NEED RAID redundancy is another matter; a good removable drive and periodic backups may be all you need.
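To make that arithmetic concrete, here's a quick sketch using the same 100,000-hour figure (treating MTBF as a fleet statistic, not a prediction for any single drive):

# MTBF describes expected failures across a fleet, not how long any
# one drive will live. Using the numbers from the post above:
mtbf_hours = 100_000
drives     = 1_000
test_hours = 100

# Expected failures = total drive-hours / MTBF
expected_failures = drives * test_hours / mtbf_hours
print(f"expected failures in {test_hours} hrs: {expected_failures:.1f}")  # ~1.0

# The same MTBF, for one drive running 24/7, is roughly an
# annual failure rate of:
hours_per_year = 8766
afr = hours_per_year / mtbf_hours
print(f"approx. annual failure rate: {afr:.1%}")  # ~8.8%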
August 21, 2006 6:10:37 PM

Good point, I guess I'll just have to let my experience talk on this matter. I still have 10 and 20 GB WD drives in some RAID 0s that have run for over 6 years 24/7. Deathstars and some Maxtors were the general exceptions to the rule. Most drives will last "forever" if treated right. The small risk of a drive failing doesn't outweigh the speed increase, IMO. If the data is that important you'd back it up anyway, wouldn't you? RAID 1 and 5 aren't replacements for backups.
August 21, 2006 7:48:23 PM

I too have several drives that have lasted over 6 years 24/7. I seem to see a bit less life in some of the higher-density drives, but like you said, a good environment is important. If a system is moved around for LAN parties, or is crammed full with poor airflow, or full of dust bunnies the size of baseballs, then drive life will be short indeed. I once found a mouse nest in a system after the mouse chewed through the 50-pin SCSI cable.
August 22, 2006 2:03:01 AM

I've found mice, bats (don't ask), some very large spiders, the tip of a finger (somebody on the assembly line is hurtin'), and worst of all, a bit of cat shit; a kitten got into the open side of a case and apparently the owner didn't notice when putting the side back on.

Talk about off subject :) 