
Twin Raptor RAID0 Array on the slow side?

May 21, 2007 4:17:54 PM

Hey,

I suspect my RAID0 array is a lot slower than it should be. Here are the specs:

2x Western Digital Raptor 150GB 10,000RPM drives (WDC WD1500ADFD-00NLR1) on an ICH7 south bridge configured as RAID0 with a 128KB stripe size.

IO benchmarks give me a read speed of around 30MB per sec; am I right in thinking it should be closer to 5-8x that (dependent on head location on the disks)?

I only noticed this as a result of looking at (I know it's 4 drives rather than 2):
http://images.tomshardware.com/2007/05/21/intel_intros_...

If my array is being slow, what would you guys recommend to get it performing as it should? (I have only required processes running at startup, the latest Intel Storage Manager, and keep the drives defragged regularly.)


Cheers
Steve


System Specs for ref:

CPU: Intel Core 2 Duo E6600 Conroe @ 2.4GHz
RAM: 2x Corsair CM2X1024-6400 CAS5 (2GB total)
GPU: HIS ATI Radeon X1900XTX 512Mb Ice 3
Motherboard: DFI Infinity 975X/G
Hard Drives: 2x Western Digital Raptor 150Gb in RAID0
PSU: Tagan TG500-U25 500W
Audio: M-Audio Firewire Audiophile
DVD-RW Drive: Sony DRU-820A
Case: CoolerMaster Mystique in black
Heat Sink: Zalman CNPS9500-LED using Arctic Silver 5 thermal compound
Display: Sony SMD-E96D
Mouse: Logitech G5 Gaming Grade Laser Mouse
Tablet: Wacom Intuos A5
Keyboard: Enermax Aurora Aluminium in black
Wireless: Netgear WN311B 270Mbps
Fans: CoolerMaster Blue LED 120mm and Akasa Amber Silent 120mm mounted with Acousti Ultrasoft Arrowhead fan mounts.
OS: Windows XP Professional SP2
May 21, 2007 7:31:46 PM

30 MB/s is slow for a single modern drive, let alone two Raptors in RAID 0.

How are you measuring performance? What are the test options/parameters if any?

How full is the partition which is being tested? If there are multiple partitions / matrix arrays, where is the test partition?

Have you monitored drive activity for any processes which might be using the disks at the same time as the test? Does the drive activity light seem to be on when you don't expect it? Have you scanned for malware?
May 21, 2007 7:45:51 PM

Quote:
30 MB/s is slow for a single modern drive, let alone two Raptors in RAID 0.

How are you measuring performance? What are the test options/parameters if any?


Just the "All in one" test on Iometer and the hard drive tests (XP startup etc.) on PCMark05.

Quote:
How full is the partition which is being tested? If there are multiple partitions / matrix arrays, where is the test partition?


One partition for the whole array, with 116GB free out of 279GB.

Quote:
Have you monitored drive activity for any processes which might be using the disks at the same time as the test? Does the drive activity light seem to be on when you don't expect it? Have you scanned for malware?


I don't have much idle reading/writing going on, and what little there is comes from the firewall processes. No malware, viruses or spyware.

I'm going to try some of the other Iometer tests to see what they come up with and report back.
May 21, 2007 8:07:23 PM

IOMeter "all in one" is going to have smaller than typical access sizes, esp. from the perspective of an optimized RAID access pattern. I'd use a minimum of 64k access sizes, and also test higher access sizes.

To get clean results, you should use a very large test size; I'd suggest a 10 GB test file. The idea here is to test for long enough and with enough data (more than system RAM) that the file system cache doesn't skew the performance figures too much.

10 GB is about 20000000 sectors. Shut down IOMeter, delete the iometer test file, restart IOMeter, change the size option, and then it should create a test file of the desired size. Note that this process will take a bit of time (but not too long), moreover the location of the file will be determined by the file system state, and earlier files would give somewhat better performance than later ones due to the way the drives are laid out.
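The sector arithmetic above can be sanity-checked quickly (assuming 512-byte sectors, which is how IOMeter's disk-size field is counted):

```python
SECTOR_BYTES = 512  # IOMeter's disk/test size is specified in 512-byte sectors

def sectors_for_gib(size_gib: float) -> int:
    """Number of 512-byte sectors needed for a test file of size_gib GiB."""
    return int(size_gib * 1024**3) // SECTOR_BYTES

print(sectors_for_gib(10))  # -> 20971520, i.e. "about 20000000" sectors
```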

Or you could just try HDTach -- this will give quick and (kinda) dirty read performance measurement. The plus here is that it avoids the file system, so isn't affected by the amount of data you have on it, etc., and shows you something about the performance across the drive. HDTach uses 64k accesses. HDTach isn't the best option for RAID arrays, as its access pattern can interact with the stripe size and give somewhat wonky results at times, but sometimes it's fine, and often it will tell you something useful.
May 21, 2007 8:09:38 PM

I've just had a play with Iometer and managed to get an average read of 70MB per sec. I'm still thinking that's a little slow though...
May 21, 2007 8:14:02 PM

A few threads down, someone was reporting issues with the Intel ICH RAIDs with 128K stripe sizes, specifically that the read performance is not what it's supposed to be.

Try a default 64K stripe size.
May 21, 2007 8:33:20 PM

Reality is much more complex than theory, but here's a bit of theory in any case which might help:

If you're requesting 64k of data at a time, with no read-ahead etc., and your stripe size is that big or larger, then each request will only go to one drive at a time, and your performance will look like single-drive performance instead of RAID 0 performance.

This is why smaller stripe sizes are sometimes recommended for sequential access performance.

Again, reality is much more complex than this simple theory, so other issues come into play including driver implementation, file system read-ahead, application access optimization, etc. Moreover, there's much more to file system performance than simple sequential access.
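As a sketch of the theory above (a simple round-robin striping model for illustration, not the actual ICH7 driver logic), you can see which drives a single request would touch:

```python
def drives_touched(offset: int, length: int, stripe: int, n_drives: int = 2) -> set:
    """Which drives in a round-robin RAID 0 array a single request touches.

    Each consecutive `stripe`-byte chunk of the array lives on the next
    drive in rotation, so a request spans every stripe it overlaps.
    """
    first_stripe = offset // stripe
    last_stripe = (offset + length - 1) // stripe
    return {s % n_drives for s in range(first_stripe, last_stripe + 1)}

# A 64 KiB read with a 128 KiB stripe hits only one drive...
print(drives_touched(0, 64 * 1024, 128 * 1024))  # -> {0}
# ...while the same read with a 32 KiB stripe spans both drives.
print(drives_touched(0, 64 * 1024, 32 * 1024))   # -> {0, 1}
```

So with a 128K stripe and 64K requests, half the array sits idle on any given read, which is exactly the single-drive-looking behaviour described above.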

70 MB/s is slow for 2 Raptors.

E.g. with 2x old Maxtor SATA 1.5 Gb/s in RAID 0 with 32k stripe size and an nForce 3 RAID implementation, I get 110 MB/s read. (~ 105 MiB/s; IOMeter reports in MiB/s.)
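The MB/s-to-MiB/s conversion in that parenthesis is just:

```python
def mb_to_mib(mb_per_s: float) -> float:
    """Convert decimal MB/s (10^6 bytes) to binary MiB/s (2^20 bytes)."""
    return mb_per_s * 1_000_000 / (1024 * 1024)

print(round(mb_to_mib(110), 1))  # -> 104.9, i.e. the ~105 MiB/s figure
```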

Here are a couple of HDTach graphs. Top is 2 x Seagate 7200.10 drives, bottom is 2 x Maxtor 6L300S0. 32k stripe. They only approach 70 MB/s at around the minimum.

[HDTach graphs: 2x Seagate 7200.10 (top), 2x Maxtor 6L300S0 (bottom)]
May 21, 2007 8:49:20 PM

Another couple of suggestions:

Check that caching is enabled at the RAID level. (Yes, it's write caching, yes, this is a bit desperate, but try it the other way in any case; sometimes settings stack/interact.)

Get hold of WD utilities and check that the drive features / settings are normal. If drive-level read-ahead was somehow disabled for example, you'd get very poor read performance.
May 21, 2007 10:25:54 PM

Here is the result from HDTach:

[HDTach graph of the Raptor RAID0 array]

In response to SomeJoe7777's comment: I was planning a reformat after the end of term at Uni, so I can update the BIOS (Windows won't boot properly if I update it as is; apparently DFI say this is because the update makes major changes to the RAM section), and I may well try changing the stripe size then. I was also thinking of maybe getting 2 more Raptors and going to a RAID5 array.

Write caching is enabled.

Unfortunately the WD software can't cope with the RAID array to check settings (although on my external WD drive it tells me very little), and I don't have a spare drive to hand in halls to install Windows on purely to check, so that's a dead end too.

I'll have a play with stopping any programs starting with Windows and see if that brings anything up. I'm going to fiddle with the BIOS settings too.

Edit:

I don't know whether this is helpful in any way, but here's the result from my external WD MyBook (on USB, not Firewire):

[HDTach graph of the external WD MyBook]


Another update!
Very odd stuff guys: I tried the Iometer test I got 70MB per sec on, set to 100% write rather than 100% read, and got 110MB per sec, with latency halved from 0.6 to 0.3. I thought read performance was supposed to beat write lol.

I think SomeJoe7777's suggestion is looking very real after that...
May 21, 2007 11:30:29 PM

Those are fairly decent HDTach numbers -- well beyond the original 30 MB/s and 70 MB/s. You might be able to do somewhat better with a smaller stripe size, but note that this is a purely synthetic performance test with limited scope. I.e. while you can tweak your synthetic numbers to be better, this does not necessarily mean that your actual applications will be faster. In fact, the opposite might be true -- there are other access patterns besides sequential read/write, and for some of them, a large stripe size can be better.

Moreover, the Raptors' strength is not so much in sequential access but in random access. If sequential access was all there was to it, then we'd all save our money, go with cheap drives in RAID 0, and be ahead in $/MB/s. Both matter of course, but IMO random access is a bigger differentiator for Raptors than STR.

To make this more concrete: if you're playing with stripe sizes and want to make the best choice for your applications, I think you should invent a benchmark based on your own application (e.g. game load / level load / boot time, whatever). This would take both factors into consideration, and not sway you into tweaking a synthetic benchmark potentially to the detriment of your actual application.
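A minimal sketch of such a home-grown benchmark (the command line shown is a placeholder for whatever application or load step you actually care about):

```python
import statistics
import subprocess
import time

def time_launch(cmd, runs: int = 5) -> float:
    """Median wall-clock seconds for `cmd` to run to completion."""
    samples = []
    for _ in range(runs):
        start = time.perf_counter()
        subprocess.run(cmd, check=True)  # e.g. a level-load script or app start
        samples.append(time.perf_counter() - start)
    return statistics.median(samples)

# Example (placeholder command): compare this number across stripe sizes.
# print(time_launch(["yourgame.exe", "-loadlevel", "test"], runs=5))
```

Taking the median of several runs, ideally after a reboot or with caches dropped, smooths out one-off file-system cache effects when comparing stripe sizes.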

OTOH, tweaking synthetic benchmarks can be fun too :) 
May 21, 2007 11:40:05 PM

Quote:
A few threads down, someone was reporting issues with the Intel ICH RAIDs with 128K stripe sizes, specifically that the read performance is not what it's supposed to be.

Try a default 64K stripe size.


There could be something to this -- I've reported a significant issue with RAID 5 writes with 128k stripe size in ICH8R (ICH8DO). However, I haven't noticed a significant issue with read performance in my simple testing.

Would you link the thread please?
May 21, 2007 11:59:03 PM

My apologies, the thread I was thinking of was the one you posted your write performance issues in, and I misread your graphs. 8O

Nevertheless, the strange HDTach graph (note the modulation-looking variance of the transfer rate) indicates that something may not be right. Although the numbers there look better than 30MB/sec.

To the OP, if there's a BIOS update to your motherboard that's major enough to require a Windows reinstall, I'd say maybe the discussion is moot. If DFI changed that much of the system BIOS and the memory layout, all of these results could be irrelevant.

Update the system BIOS and shift to a 64K stripe size and see if the problem disappears.