Onboard RAID or RAID controller supporting 2 RAID 0 setups

destro

Distinguished
May 14, 2004
41
0
18,530
Working on building a new system, but I need to create a dual RAID 0 setup on either the motherboard or one controller card. Also, what are the advantages or disadvantages of onboard RAID vs. a RAID controller card? My thinking is that a RAID controller card will have more features and perform better, but are there any cards that support dual RAID 0? Does creating a RAID 0 array put more stress on the hard drive components, or is the concern just that my data is on two drives acting as one rather than on a single drive?
 

UncleDave

Distinguished
Jun 4, 2007
223
0
18,680


The consensus seems to be that cards are better. I've always used onboard RAID and (touch wood) haven't had any problems yet. Using a card will also give you more flexibility if you ever need to change motherboards: you just move the card. SomeJoe is the best person to advise you on specific cards, etc. You could do worse than spending some quality time with Google...



What features do you want/expect/need?



Google is your friend..... I assume that you mean four drives in two separate RAID 0 configurations?



I don't believe there is any more stress. People will argue that there is more RISK: if either of the two drives fails, you lose everything. Your data is only as good as your last backup!



That, my friend, is opening the proverbial can of monkeys (or is that a barrel of worms?). You will find plenty of debates on that subject on this board. My PERSONAL OPINION is to run RAID 1 for everything...

I hope this answers your questions.

UD
 

Kursun

Distinguished
Jan 6, 2008
334
1
18,860
You may find my latest setup interesting.
2 pairs of 500 GB HDs (4 × 500 GB).

First pair is configured as Intel Matrix RAID.
A 60 GB section (2 × 30 GB) is RAID 0, holding only the OS (Vista 64) plus programs.
The rest (500 - 30 = 470 GB per drive) is RAID 1. This section is the backup target for the second RAID set, which is home to all documents, photographs and other data.

Second pair is RAID 1 and holds documents, etc. It is partitioned into three sections. The first partition holds normal daily documents, which change most frequently; it is copied to the first pair's Matrix RAID 1 section most often for backup purposes. The second partition has my photographs and gets backed up less frequently. The third partition has my scanned film images and other data; it changes least and is therefore backed up least often. The third partition also holds my last two Ghost backups of the boot volume (the 60 GB RAID 0).

Sounds complicated? No, the system is quite stable.
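If the capacity arithmetic is hard to follow, here is a minimal sketch of it, assuming two 500 GB drives with a 30 GB slice of each given to the striped volume (the slice sizes are my reading of the post above, not gospel):

```python
# Capacity sketch for the Matrix RAID pair described above.
# Assumptions: two 500 GB drives, 30 GB of each dedicated to the RAID 0 slice.
DRIVE_GB = 500
STRIPE_SLICE_GB = 30  # per-disk portion used for the RAID 0 volume

raid0_volume = 2 * STRIPE_SLICE_GB         # striping adds the slices: 60 GB for OS + programs
raid1_volume = DRIVE_GB - STRIPE_SLICE_GB  # mirroring keeps one copy's worth: 470 GB for backups

print(f"RAID 0 (OS + programs): {raid0_volume} GB")
print(f"RAID 1 (backup of the data set): {raid1_volume} GB")
```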
 


The first question to ask is what you are going to do with the system. If it's a gaming system, RAID 0 has little advantage. RAID 0 will get you faster boot times, and it will get you faster access to lots of small files, as in database applications. But, based on the results at storagereview.com, RAID 0 gets you a performance increase of only 1.9% in gaming. It seems odd at first blush, but except for database usage and boot times, two 10K Raptors in RAID 0 are actually slower than a single Seagate 7200.11 or Samsung F1.

As for reliability, two drives sharing data means that if either one of them fails you lose it all, so your failure rate is twice as high. Seagate's last 7200.x series drive was in the 42nd percentile (meaning it proved more reliable than 42% of all drives surveyed) in the reliability survey at storagereview.com. The last Raptor was at the 12th percentile, while the two most recent WD drives finished in the 4th and 5th percentiles respectively. While you might think both are poor showings, it must be noted that the upper percentiles are dominated by a) older drives and b) server-class SCSI drives.

So if you're up in the 90th percentiles, maybe RAID 0 is not that bad a risk. But if you're already down below the 20th percentile, RAID 0 can be a bit scary. Of course, if you make adequate backups, all such a drive loss costs you is time and effort. To my mind, putting your OS and programs on a single HD or a RAID 0 array is a good bet, but I think data belongs on a RAID 1 or RAID 5 array, and preferably on a NAS like this:

http://www.cdw.com/shop/products/default.aspx?edc=1421414

Cost for the RAID box with one 500 GB drive is $399; add $115 or so for a second matching drive.



 

hcforde

Distinguished
Feb 9, 2006
313
0
18,790
Matrix RAID is VERY good for being free to users of Intel chipsets with an (R) suffix, such as the ICH7R, ICH8R, or ICH9R. It is also transportable.
 

Kursun

Distinguished
Jan 6, 2008
334
1
18,860
I don't believe in statistics. To me, RAID 0 is a lot faster than a single disk. I'm not worried about the data loss risk either, as I have a Ghost backup of my boot volume (OS and programs) on a RAID 1 set.
BTW, as for failure rate, I don't see much difference between, let's say, 0.0001% and 0.0002%!
 
I don't believe in statistics. To me, RAID 0 is a lot faster than a single disk. I'm not worried about the data loss risk either, as I have a Ghost backup of my boot volume (OS and programs) on a RAID 1 set.

Belief and reality are two different things. I might not believe in gravity but only Pastafarians can otherwise explain why I don't fly off the planet. http://en.wikipedia.org/wiki/Flying_Spaghetti_Monster

Ghost Backups are fine but they ain't free:

1. What is the TEC (time/effort/cost) associated with making that image, say, once every week?
2. What is the TEC of losing the data between Friday's image and Thursday's disk failure?
3. What is the TEC associated with restoring that image?
4. What is the TEC of keeping a replacement HD of the same size and manufacturer on the shelf for immediate restoration?
5. What is the TEC of a failure at midday while getting out that job that has to be delivered today?
6. What is the TEC associated with waiting 3-6 weeks for a warranty replacement and not having ready access to the data?

Does the time saved from faster boot times and a <2% performance increase offset the above?

>BTW, as for failure rate, I don't see much difference between, let's say, 0.0001% and 0.0002%!<

Rather than delving into a "belief system" again, let's take a dose of reality. Skidaddle over to storagereview.com and look up the WD RE2 500 (4th percentile), for example, in their reliability survey. Out of 183 participants in the survey, 18% lost a drive within two years; 16% lost a drive within one year. After 68 builds, I've had drives replaced under warranty on a bit less than half of them, but over the years I have used mostly enterprise-class drives... and I have used auxiliary cooling on all hard drives on all builds since 1996.

Now that 16% up there amounts to a failure rate of 1 in 6 after just one year. Put two of those drives in a RAID 0 array and either failure kills the array, so on average roughly one in every three people building such an array is rebuilding it within 12 months. With one in six drives dying within a year, what are these numbers going to look like when the drives start approaching their three-year warranties?

Now looking at the Cheetah 15K.3 (5-year warranty), the most reliable drive in the survey (100th percentile): 7 in 68 failed within 4 years. That is still 10%. So even with the very best drive ever on the market, one in every five people who built a two-drive RAID 0 array using these drives will have had to rebuild that array with a replacement drive within 4 years.

That "real life" usage data puts the range at between one in three (33%) and one in five (20%) having to replace their HD's and rebuild their RAID 0 array within the drive's warranty period....one year with the WD's 3 year warranty and 4 years with Seagate's 5 year warranty. To my mind, that's a real far cry from 0.0001 or 0.0002 %.


 

Kursun

Distinguished
Jan 6, 2008
334
1
18,860
The most reliable HD had a failure rate of 10%?
Nonsense. I've had over 30 HDs. Not one of them failed. They were just retired because of inadequate capacity. Don't believe everything you read on the internet.
 


Nonsense? From the StorageReview site:

"23,343 readers have entered results for their experiences with a total of 50,276 drives" .

I've had over 30 HDs. Not one of them failed. They were just retired because of inadequate capacity.

Ya might as well say that ya bought over 30 quarts of milk and none of them went bad. They just disappeared cause you drank them (before they went bad).

Don't believe everything you read on the internet.

You apparently missed the part where it mirrors my own experience across 68 builds and over 120 hard drives (or about four times your data set). I don't retire drives; I use them as backups. You also skipped what's been written in the many other threads here on the forum.

Proper research requires checking numerous sources and getting confirmation. I've done that. To get some perspective, you might read some industry research papers that were published in peer-reviewed trade journals. Let's start here:

http://labs.google.com/papers/disk_failures.html
Failure Trends in a Large Disk Drive Population
Eduardo Pinheiro, Wolf-Dietrich Weber and Luiz André Barroso
Google Inc.
1600 Amphitheatre Pkwy
Mountain View, CA 94043

Now I don't think you will find many outfits that purchase more HD's than the peeps who run Google's server farms.

Take note of page 4, which shows the annualized failure rates (AFR) as follows:

3-month AFR: 3%
6-month AFR: 2%
1-year AFR: 2% (cumulative 7%)
2-year AFR: 8% (cumulative 15%)
3-year AFR: 9% (cumulative 24%)
4-year AFR: 6% (cumulative 30%)
5-year AFR: 7% (cumulative 37%)
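To see how those yearly AFRs stack up into the cumulative figures, here is a minimal sketch; the simple running sum reproduces the numbers quoted above, and a survival-based estimate (which accounts for drives that have already died) comes out only slightly lower at these rates:

```python
# Rough check of the cumulative failure figures from the yearly AFRs quoted above.
yearly_afr = [0.07, 0.08, 0.09, 0.06, 0.07]  # years 1-5 (year 1 = the 3% + 2% + 2% entries)

cumulative_sum = 0.0
survival = 1.0
for year, afr in enumerate(yearly_afr, start=1):
    cumulative_sum += afr   # running sum, as in the paper's chart
    survival *= (1 - afr)   # fraction of drives still alive
    print(f"year {year}: summed {cumulative_sum:.0%}, survival-based {1 - survival:.0%}")
```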

You will see the same thing here:
http://www.cs.cmu.edu/~bianca/fast07.pdf

As is summarized here:
http://www.allthingsdistributed.com/2007/02/on_the_reliability_of_hard_dis.html

"Both papers report disk failure rates in the 6%-10% range: in a datacenter with about 100,000 disks you will need to replace up to between 6,000 and 10,000 disks per year."

Again, that is per year! So after one year you are replacing 6-10%; by the 2nd year you have replaced 12-20%; by the 3rd year, 18-30%; and by 5 years, 30-50%.

Let's look at some print mags now:

http://www.pcworld.com/article/id,129558/article.html

"Customers replace disk drives 15 times more often than drive vendors estimate, according to a study by Carnegie Mellon University.

The Carnegie Mellon study examined large production systems, including high-performance computing sites and Internet services sites running SCSI, FC and SATA drives. The data sheets for those drives listed MTBF between 1 million to 1.5 million hours, which the study said should mean annual failure rates "of at most 0.88%." However, the study showed typical annual replacement rates [...]"

So, on the one hand, we have StorageReview's survey with 50,000+ drives, Google's published report covering over 100,000 drives, the Carnegie Mellon report with 100,000+ drives, the All Things Distributed article, and the PC World article... and all of these mirror my own experience. On the other hand, we have your "30 retired before they failed" drives. In light of the preceding, your "nonsense" theory doesn't seem to have much support.
 

Kursun

Distinguished
Jan 6, 2008
334
1
18,860
You do write long posts. Unfortunately, I don't have that much time.
One can read endlessly about the subject, but actual experience is a lot more important than endless statistics read on the internet.
By the way, you have misunderstood what I mean by retiring HDs. By retirement I mean moving a drive to a backup role, later moving it to another PC, and finally using it as a USB external. I still have 1-4 GB HDs that work perfectly. I think you should read less and get more experience.
 


Again, you should read more closely... using the math I learned, 120 was bigger than 30. The 120 drives in my machines pale next to the studies, but those studies are documented... and did appear in peer-reviewed industry journals. Don't try to compare random rants in forums by 14-year-olds six months past their first build with industry professionals who have to survive scrutiny from educated and experienced peers. There's an organization with a 12-step program for people who don't believe in statistics. It's called Gamblers Anonymous.

My experience covers 68 builds over 22 years and more than 120 hard drives... and that doesn't count NASes. I have a machine behind me that's been running 24/7 since the late 1990s. When you can match that experience, we'll talk about experience.

Anecdotal evidence is by no means useful in predicting the anticipated life of components... no more than saying "cigarettes aren't harmful because your grandpa smoked two packs of Camels a day till he died at 85."

"Hey I smoked 30 cigarettes when I was 16 and I am not dead so peeps saying cigarettes are harmful are full of nonsense".

>I still have 1-4 GB HDs that work perfectly<

I have a 1996 laptop with a 5 GB drive that works perfectly. I boot it up about four times a year. That hardly constitutes 24/7, or even average, operation.
 

SomeJoe7777

Distinguished
Apr 14, 2006
1,081
0
19,280


Sorry, dude, but Jack is right. Your 30 hard drives are a drop in the bucket compared to the tens of thousands of hard drives that Google, Carnegie Mellon, and StorageReview.com have experience with.

If you knew anything about statistics, you would know that a sample of 30 hard drives can't tell you squat about the actual reliability level.

The only conclusion you can draw from your 30 hard drives that haven't failed is that you've been lucky. Plain and simple.
 

destro

Distinguished
May 14, 2004
41
0
18,530
The first question to ask is what you are going to do with the system.



The purpose of the hard drives is audio production:

The first array of hard drives is an audio sample drive: basically an audio database that I need to stream audio from in a multi-track environment. When I write the drums for a particular piece, I start out by giving each drum sound its own audio track. That can be up to 20 different audio files simultaneously streaming from my hard drive, via my sampler program, into the audio sequencing program, each file playing in its own separate channel, giving control over each individual sound, and that's just the drums.

The second array of hard drives is an audio track drive. The streamed audio gets converted to single audio tracks on this drive. Audio tracks can be very large, so large capacity is required; it also needs fast access and read times so the audio sequencer can stream the files off the hard drive. Since these are WAV files, they can be extremely large. This drive doesn't have any data stored on it permanently; all audio is moved off the drive once a project is complete. I can't back up in the middle of a jam session, as that usually interrupts the streaming of the audio, so this drive is always at risk. The first array, however, will require a backup plan, since samples are stored on it. But it must have maximum access and read speeds as well, so RAID 0 plus an external backup plan is what I have in store for the first array, unless someone has a better RAID solution.
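As a rough sanity check on what that streaming workload actually demands, here is a minimal back-of-envelope sketch; the 24-bit/96 kHz stereo format is an assumption on my part, so adjust it to your actual project settings:

```python
# Back-of-envelope bandwidth estimate for streaming many uncompressed WAV tracks at once.
BITS_PER_SAMPLE = 24      # assumed bit depth
SAMPLE_RATE_HZ = 96_000   # assumed sample rate
CHANNELS = 2              # assumed stereo files
TRACKS = 20               # simultaneous streams mentioned above

bytes_per_sec_per_track = (BITS_PER_SAMPLE // 8) * SAMPLE_RATE_HZ * CHANNELS
total_mb_per_sec = TRACKS * bytes_per_sec_per_track / 1_000_000

print(f"{bytes_per_sec_per_track / 1_000_000:.2f} MB/s per track")
print(f"{total_mb_per_sec:.1f} MB/s for {TRACKS} tracks")
```

That works out to roughly 11-12 MB/s total, well within a single modern drive's sequential throughput; the bigger issue for 20 concurrent files is seek latency, which is where striping or generous RAM caching helps more than raw transfer rate.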
 


1) RAID 0 setups make the system feel more responsive and alive.
2) Data loss from drive failure isn't the threat people here make it out to be; it's easier to get a virus or accidentally delete your data.
3) External caddies are your best friend when they have a sync setup ;)
4) Intel Matrix arrays are the best setups.