
Poor X38/ICH9R RAID 0 performance, GA-X38-DQ6

Tags:
  • Motherboards
  • NAS / RAID
  • Performance
Last response: in Motherboards
September 27, 2007 5:29:37 PM

Well I'm pretty impressed with my GA-X38-DQ6 so far, but I don't seem to be getting very impressive RAID performance.

System:

Q6600 @3.6GHz (400*9)
6GB RAM (2*2GB+2*1GB), 800MHz
Vista Ultimate x64
8800GTX

Hard disks are:

5*WD Caviar 750GB SATA

Of which:

400GiB (430GB or so) RAID 0 array at the start of the disks, for performance; 128KB clusters.
2.41TiB RAID 5 array at the end of the disks, for storage; 64KB clusters.
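For reference, the way those two volumes share the five disks works out like this; a quick back-of-envelope sketch (assuming "750GB" is the usual decimal-gigabyte drive labelling, and the 400GiB RAID 0 volume described above):

```python
# Sketch of how the two Matrix RAID volumes share the five disks.
# Assumption: "750GB" is decimal (750 * 10^9 bytes), as drive makers label it.
GIB = 2**30

disk_gib = 750e9 / GIB        # one WD 750GB drive in GiB (~698.5)
n_disks = 5
raid0_total_gib = 400         # size chosen for the RAID 0 volume

raid0_per_disk = raid0_total_gib / n_disks         # slice at the start of each disk
raid5_per_disk = disk_gib - raid0_per_disk         # what's left at the end of each disk
raid5_usable_gib = (n_disks - 1) * raid5_per_disk  # RAID 5 spends one disk's worth on parity

print(f"RAID 0 slice per disk: {raid0_per_disk:.0f} GiB")           # 80 GiB
print(f"RAID 5 usable space:   {raid5_usable_gib / 1024:.2f} TiB")  # ~2.42 TiB
```

This lines up with the roughly 80GB-per-disk RAID 0 slice and the 2.41TiB RAID 5 volume reported above.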

The RAID 5 array has now completed its initialisation (which as it is on the same disks was slowing performance of the RAID 0 array), and my HDtach results for the RAID 0 array are:

[HDTach screenshot: RAID 0 benchmark results]
This seems pretty poor really. I was previously getting over 100MB/sec at the start of the disk with a single one of these drives!
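For comparison, the naive ceiling for a 5-disk stripe, assuming each drive really does sustain the ~100MB/sec quoted above at the start of the disk, is simple arithmetic:

```python
# Naive ceiling for sequential reads from a RAID 0 stripe: throughput
# scales roughly linearly with disk count, since each disk serves every
# Nth stripe unit. Real results fall short of this ideal.
single_drive_mb_s = 100   # quoted figure for one of these drives at its outer tracks
n_disks = 5

ideal_raid0_mb_s = n_disks * single_drive_mb_s
print(ideal_raid0_mb_s)   # → 500
```

So anything well under ~500MB/sec at the start of the array suggests something other than the disks is the bottleneck.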

Any ideas, short of just buying a PCIe RAID controller?


September 28, 2007 10:00:19 AM

I don't think it's the chipset...
The ICH8 and ICH9 give very nice performance in RAID...

Could it be the drivers? It's a brand new board... that's why I'd guess that way...
Try waiting for a driver update...
September 28, 2007 1:03:24 PM

It reads to me as though you created one large array with all 5 physical hard drives and then partitioned it into two logical disks, with one partition being RAID 0 and the other RAID 5. The fact that the arrays are not separated onto their own physical drives would be the cause of your "poor" performance. Essentially, the two arrays are fighting each other for resources whenever either partition is accessed.

My first thought is, why would you create two logical disks and then create the arrays across the same physical hard drives? That really doesn't make sense. Usually, two physical drives are put into a RAID 0 and then a minimum of 3 physical drives are put into a RAID 5. If that's what you did (one large array across all 5 physical drives split into two logical partitions), you may want to put 2*750GB physical drives into the RAID 0 and the other 3*750GB physical drives into the RAID 5 array instead.
September 28, 2007 2:47:49 PM

OK, 5 drives, all 750GB, unpartitioned, unformatted.

Set the Intel SATA controller to RAID mode in the BIOS.

Hit Ctrl-I on boot to get to the ICH9R Setup.

Create a RAID 0 array across all 5 drives, but rather than setting it to 3750GB, set it to 400GB.

This means the first 80GB or so of each disk is now in a RAID 0 array.

Create a RAID 5 array across all 5 drives in the remaining space.

Install Vista Ultimate x64 onto the 400GB RAID 0 array, with RAID drivers from the disc.

Wait for initialisation of the RAID 5 array to complete, as this will obviously interfere with benchmark results for both arrays.

Update to the latest Intel chipset .inf and Matrix RAID drivers.

Update BIOS to latest version.

Computer now sees two drives in device manager, a 400GB drive and a 2.41TB drive.




I don't plan to be copying files etc. from one array to the other, or even accessing both at once, so the fact that they are contained on the same disks shouldn't matter. Even when the RAID 5 array has its drive letter unassigned in Computer Management, meaning no part of Windows is able to access it, the benchmark results for the RAID 0 array (which is then effectively the only array on the disks) do not improve.

I don't want 2*750GB in RAID 0. For a start, it would not be as fast as my current 5-disk RAID 0 array should be; it would also be 1.5TB, far too big for my system drive.

Anyone have any experience of ICH9R (or even ICH8R) not performing as well as expected?
September 28, 2007 2:59:54 PM

It's amazing the difference Write-Back cache makes!

[HDTach screenshot: RAID 0 results with Write-Back cache enabled]
And yes, I have a UPS...
September 28, 2007 3:41:02 PM

Ok, so you did create two logical drives and spanned the arrays across all 5 physical drives.

Given that you're using the ICH9R and it is effectively software RAID, the arrays are fighting for resources to calculate parity and are using the same channels for I/O. In theory I agree it should not matter, but in practice it is obviously not giving you the results you want or expect. I equate setting up your arrays like that to putting a CD-ROM as the slave and a hard drive as the master on the same IDE ribbon cable: in theory there is enough bandwidth to allow traffic to flow readily between the two, but in practice it always provides less than average results.

Even though you say you won't be copying files between the arrays, I would imagine that would be almost impossible to avoid. I would also imagine that you are going to have partitions within each array to organize your files and data. If that's the case, then copying data/files from one partition on the RAID 5 to another partition on the RAID 5 is going to result in less than expected performance as well, let alone copying files from a partition on the RAID 0 to a partition on the RAID 5. Even with a dedicated controller card, I would think you'd have the same issues as with the ICH9R; the only difference would be that you are off-loading the parity and I/O to the card's resources rather than your system resources.
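For what it's worth, the parity work in question is just a bytewise XOR across the data blocks of each stripe; a minimal illustrative sketch (not the controller's actual code):

```python
# RAID 5 parity in miniature: parity is the bytewise XOR of a stripe's
# data blocks, so any one lost block can be rebuilt by XOR-ing the
# surviving blocks with the parity block.
def xor_blocks(blocks):
    out = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            out[i] ^= b
    return bytes(out)

data = [b"AAAA", b"BBBB", b"CCCC", b"DDDD"]   # 4 data blocks in one 5-disk stripe
parity = xor_blocks(data)                     # written to the 5th disk

# Simulate losing the third disk and rebuilding its block.
rebuilt = xor_blocks(data[:2] + data[3:] + [parity])
assert rebuilt == data[2]
```

This XOR is cheap per byte, but on a host-based controller it runs on the CPU and shares the same I/O paths as everything else, which is the contention being described.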

Again, typically speaking arrays are dedicated to physical drives and although you may not want to do that it seems to me it would provide the best overall results.

All other comments aside, what are you going to do if a physical drive fails? You've lost both arrays and not just one.

I dunno, either way, good luck!
September 28, 2007 4:38:51 PM

Well, parity isn't an issue with RAID 0, so while the RAID 5 array is left without a drive letter, parity doesn't come into it at all.

I have a quad core CPU at 3.6GHz, which I'm sure can spare a core or two for parity when doing other things.

Neither of the arrays is further partitioned. The system drive holds Windows, apps, games etc., and the storage drive mostly holds vast amounts of TV shows, movies, game images, etc., so read/write speed is not really an issue there anyway: even if the RAID 5 array reads and writes at 20MB/s, that's enough.

Anyway, write back cache gave me 434MB/s average reads, as you can see from the post above yours, so I'm happy now :)