RAID Scaling Charts, Part 1

pschmid

How do RAID arrays scale as you increase the number of hard drives they contain? Part 1 of our RAID Charts project shows all the benchmark results for RAID 0, RAID 1 and RAID 0+1 setups across two to eight disk drives.
 
I notice that at a queue depth of one, the results of all configurations are approximately the same. As a single user in a desktop environment, I expect my queue depth to be close to one or two, never 64, which gives me very little benefit from multiple devices. I would have liked the study to include a baseline of a single non-RAID drive of the type used, as well as a single Raptor 150. I would also like to have seen a benchmark workload representative of what a single user would actually be doing.
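To put the queue-depth point in the simplest terms: on random I/O, an array can keep at most as many spindles busy as there are outstanding requests. A toy model I have in mind (my own illustration, not anything from the article's benchmark setup):

```python
def effective_drives(queue_depth, n_drives):
    """On random I/O, at most min(QD, n) drives can be busy at once,
    so a desktop workload at QD 1-2 barely exercises a big array."""
    return min(queue_depth, n_drives)

for qd in (1, 2, 64):
    print(qd, effective_drives(qd, 8))  # QD 1 -> only 1 of 8 drives busy
```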
 

rammedstein

We have a baseline... it was the RAID 1 benchmark. RAID 1 only performs as fast as a single drive...

P.S. I noticed that an even number of drives in the RAID 0 configuration provided better performance than an odd number. Does anyone know a particular reason for this? And also, just wondering, what was the stripe size? It can generally vary between 4K and 256K (a larger stripe size suits big sequential reads/writes, a smaller one gives faster access times; 64K is usually a happy middle ground).
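For anyone unsure what the stripe size actually changes, here's a minimal sketch of the usual RAID 0 chunked round-robin layout (my own illustration; the 64K stripe and four-drive count are just examples, not the article's settings):

```python
def locate_block(offset_bytes, stripe_size=64 * 1024, n_drives=4):
    """Map a logical byte offset to (drive index, offset on that drive)
    for a plain RAID 0 round-robin layout."""
    chunk = offset_bytes // stripe_size   # which stripe chunk this offset is in
    drive = chunk % n_drives              # chunks rotate across the drives
    row = chunk // n_drives               # full stripe rows before this chunk
    return drive, row * stripe_size + offset_bytes % stripe_size

# A 256K sequential read with a 64K stripe touches each of 4 drives once;
# with a 4K stripe the same read is split into 64 small pieces.
for off in range(0, 256 * 1024, 64 * 1024):
    print(locate_block(off))
```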
 

kamel5547

We have a baseline... it was the RAID 1 benchmark. RAID 1 only performs as fast as a single drive...

No... RAID 1 performs slower than a single drive, at least as far as writes are concerned (I'm not so sure about the effects on other tests). A baseline would have been a single drive; the article provided no baseline that I could find.

I agree; at a minimum, I would have liked to see single-drive results to put the other numbers in perspective.
 

jt001

With the controller used, I would say that writes would be slightly slower and reads slightly faster; RAID 1 speed is highly dependent on the controller used.

I too was looking for a single drive baseline :(
 

rammedstein

If (big if) the controller is doing its job properly and at full speed, it won't influence the speed and will run just as fast as a single drive. The controller sends and receives data from one drive and its cache while sending the same data to the mirrored drive; instead of reading that data back, it just does some verification. If the controller is working properly, this should not affect performance. However, that is generally not the case: the verification takes a bit of the controller's time and slows writes down, because the write has to happen on both drives, one of which is idle while the other is verifying. It doesn't affect reads, though, because those go to the idle drive, so reads seem faster but aren't really.
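Put as a toy model (my simplification of the above, with made-up latencies):

```python
def mirror_write_time(drive_a_ms, drive_b_ms, verify_ms=0.0):
    """A RAID 1 write completes when the slower mirror member finishes,
    plus whatever verification pass the controller performs."""
    return max(drive_a_ms, drive_b_ms) + verify_ms

print(mirror_write_time(8.0, 8.5))       # 8.5 ms with no verification
print(mirror_write_time(8.0, 8.5, 2.0))  # 10.5 ms when the controller verifies
```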
 

SomeJoe7777

JT001 is correct that RAID 1 speeds are highly controller dependent.

I've seen RAID 1 controllers that are slightly slower than a single drive on reads, and an appreciable amount slower than that on writes.

I've also seen high-end RAID 1 controllers from LSI that are equal to a single drive on writes, and faster than a single drive on reads. (This is possible because the controller intelligently interleaves reads to both drives, taking advantage of the fact that 2 copies of the data exist).

So you can't make blanket statements that RAID 1 is always slower because of this or that. The design of different controllers and their manner of operation translate into a wide variety of different RAID 1 performance levels.
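A rough sketch of that read interleaving (a toy queue model of my own, nothing like the actual LSI firmware):

```python
def dispatch_reads(requests, n_mirrors=2):
    """Send each read to whichever mirror copy has the shortest queue,
    so independent reads are serviced from both copies in parallel."""
    queues = [[] for _ in range(n_mirrors)]
    for req in requests:
        shortest = min(queues, key=len)
        shortest.append(req)
    return queues

left, right = dispatch_reads([f"read LBA {i}" for i in range(8)])
print(len(left), len(right))  # 4 4 -> the load splits across both copies
```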
 

badders

RAID 5 is the most popular RAID level, so why were there no RAID 5 tests?

Ummm...

This is the first of two articles. We will discuss RAID 0, RAID 1 and RAID 0+1 here, while part 2 will deal with RAID 5 and RAID 6.

I agree that there should be a single-drive benchmark, especially for the I/O test profiles. One can't even be found in the HDD Charts, as only the HD501LJ and HD300LJ are listed. The HD321KJ is not, and the different model number suggests that its vital statistics would be different?
 

Felix1

I hope the final part of this article also delves into the statistical probabilities of data loss with each RAID array. That's a necessary part of the calculus of a cost-risk analysis. What's been presented so far is certainly interesting, but I want to be able to decide how much statistical risk of data loss I'm assuming with each configuration... and then I can make a rational decision on how much I'm willing to spend.

Of course, you'll have to start with some assumptions about statistical hard drive failure rates and enclosure failure rates, but as long as those are realistic and standardized across all calculations, you'll be able to give us relative indicators of failure probability... obviously not absolute predictions.
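Until the article does that math, here's the back-of-the-envelope version (my own sketch: independent failures, a fixed per-drive failure probability p over the period of interest, rebuild windows ignored):

```python
def p_fail_raid0(p, n):
    """RAID 0 is lost if any one of the n drives fails."""
    return 1 - (1 - p) ** n

def p_fail_raid10(p, n):
    """RAID 1+0 is lost only if both drives of some mirrored pair fail."""
    return 1 - (1 - p ** 2) ** (n // 2)

def p_fail_raid01(p, n):
    """RAID 0+1 is lost once each of the two stripes has a failed drive."""
    return (1 - (1 - p) ** (n // 2)) ** 2

p = 0.05  # an illustrative 5% per-drive probability
for n in (2, 4, 6, 8):
    print(n, p_fail_raid0(p, n), p_fail_raid10(p, n), p_fail_raid01(p, n))
```

As expected under this simple model, RAID 0's risk grows with every drive you add, while the mirrored layouts stay below the single-drive figure.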
 

7oby

RAID 5 is the most popular RAID level, so why were there no RAID 5 tests?

This three-year-old article covers the RAID 5 part:
http://translate.google.com/translate?u=http%3A%2F%2Fhardware.thgweb.de%2F2004%2F06%2F25%2Fraid_5_im_visier_skalierungstests_mit_3_bis_8_laufwerken%2Findex.html&langpair=de%7Cen&hl=de&ie=UTF8

It's kind of ridiculous that with the "current" hardware the access time goes up to 30 ms, while it did not with eight drives in the more computation-intensive RAID 5 three years ago :lol: I would have thrown the controller away and gotten a faster one.

Since Patrick Schmid was also part of the testing crew that time, I'm wondering why he published this ...
 

joex444

I noticed that an even number of drives in the RAID 0 configuration provided better performance than an odd number. Does anyone know a particular reason for this?

I don't think the test bed was properly configured for the observation you pointed out, which is NOT true for three drives, an odd number: three drives is always faster than two and slower than four.

Now, it is interesting that seven drives is consistently slower than both six and eight.

Then again, past four drives you've reached the saturation point of the controller. I guess the lesson is: don't oversaturate your controller.
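That saturation is easy to picture with a two-parameter model (numbers invented for illustration; the article doesn't state the controller's ceiling):

```python
def array_mb_s(n_drives, drive_mb_s=60.0, controller_cap_mb_s=250.0):
    """Ideal RAID 0 scaling, clipped by the controller/bus ceiling."""
    return min(n_drives * drive_mb_s, controller_cap_mb_s)

for n in range(2, 9):
    print(n, array_mb_s(n))  # scaling goes flat once n * 60 passes 250
```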
 

tostada

You missed one very important difference between RAID 0+1 and 1+0. Given the two, RAID 1+0 is always preferable because the rebuild time is significantly lower than with RAID 0+1.

In RAID 1+0 (a stripe of mirrors), the failure of a disk leaves only that mirrored pair degraded. When the disk is replaced, only the data on that mirrored pair must be rebuilt.

In RAID 0+1 (a mirror of stripes), the failure of a disk places the entire stripe in a degraded state. When the disk is replaced, the entire stripe must be re-mirrored from the other, still-functional stripe.

So, given the choice, it is always better to run RAID 1+0 over RAID 0+1.
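The difference in rebuild scope is easy to quantify (a sketch assuming n equal drives and a single replaced disk):

```python
def rebuild_gb(level, n_drives, capacity_gb):
    """Data that must be copied after replacing one failed drive."""
    if level == "1+0":
        return capacity_gb                    # only the mirror partner is copied
    if level == "0+1":
        return (n_drives // 2) * capacity_gb  # the whole stripe is re-mirrored
    raise ValueError(level)

print(rebuild_gb("1+0", 8, 500))  # 500 GB
print(rebuild_gb("0+1", 8, 500))  # 2000 GB, four times the exposure window
```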
 

kramik1

I am putting together a lab for development on Linux/Unix, and there aren't many performance reviews of RAID controllers on those platforms. I have heard of a lot of problems with Adaptec and HighPoint support there; from what I have heard, 3ware is the most solid.

A good review of RAID controllers on Unix/Linux, with performance numbers and notes on driver maturity, would be very helpful.
 

cgaspar

As tostada said, you _always_ want to do 1+0. Fortunately, many controllers that _say_ they're doing 0+1 _really_ do 1+0. The only way to tell for sure is to intentionally fail a drive and then watch the blinky lights on a rebuild.

As someone else mentioned, RAID 1/1+0/0+1 on n disks allows for up to n times the read performance of one disk, and up to n/2 times the write performance. Sequential access will show the most benefit, as the latency increase won't matter.
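Stated as formulas (theoretical ceilings only; as noted above, real controllers rarely reach them):

```python
def mirror_bounds(n_disks, disk_mb_s):
    """Best-case throughput for RAID 1/1+0/0+1 on n disks: reads can be
    spread over every copy, writes must land on two copies of the data."""
    return n_disks * disk_mb_s, (n_disks // 2) * disk_mb_s

print(mirror_bounds(8, 60.0))  # (480.0, 240.0) MB/s read, write
```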
 

adelatorre

I'd like to see the LSI Logic MegaRAID controllers in these benchmarks. They've had good support across all OSes (Windows, Linux, NetWare, OS/2).
 

darklife41

Seems to me that this article would have been better had it highlighted the onboard RAID configurations that most people will use in a home system, rather than a professional environment. I'd much rather have seen a comparison on a mainstream motherboard. There are significant differences, starting with the power rail.

Home users should never depend solely upon a RAID array for data or even the OS. A good backup system is priceless. Long live eSATA! Any professional interested in saving themselves headaches would also have a backup system in place, more so when using RAID arrays than when not. If/when failures are added to the mix, it becomes clear that RAID is not the most stable system. I can't count the number of times I've had to reinstall my OS or recover my data due to RAID failures and hard drive failures. Although images made it simpler, it's still a major inconvenience.

I'm not sure why anyone thinks RAID 5 is the most commonly used form of RAID, as most people can't afford four or more hard drives and wouldn't know how to install/configure them. I'd think RAID 0 is the most common, followed by RAID 1, and then RAID 5/10.

I sure hope RAID 10 will get a mention.

I've also found that odd numbers of hard drives do not perform as quickly as even numbers with onboard RAID controllers. No idea why that is. But having never used a separate RAID controller, maybe it's third-party controllers that don't have that issue?

I don't see the point of comparing to a single Raptor unless the comparison were to Raptors in various arrays, and the article stated why they didn't use Raptors. I'd think the percentage difference in speed would be the same for any hard drives used.

Anyway... another good article, and well overdue in my opinion. RAID has never been covered in enough detail by the motherboard manufacturers, and it isn't easily googled. :)
 

yawnbox

How would this review change with MLC and SLC SSDs?

Similarly, how would multiple disks in various RAID arrays affect wear leveling and the lifespan of the SSDs?