Linux software RAID with motherboard and add-in controller

expream

Honorable
Sep 8, 2012
I have some problems with disk performance. I have 6 x WD 500GB RE4 disks. Each disk gives about 135MB/s throughput. All measurements were made with hdparm using the "-tT" options (I know it is just a synthetic test, but I need some starting point for measurements).
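For reference, the per-disk numbers above come from a command along these lines (the device name is only an example - substitute the actual disk):

    # Cached (-T) and buffered (-t) read timings for one disk; run as root
    hdparm -tT /dev/sdb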

I have a 4-port Sil3124 controller in a PCI Express x1 slot.

So...

1) RAID0 on the controller with 2 disks gives 200MB/s - OK, PCIe limit.
2) RAID0 on the motherboard with 2 disks gives 270MB/s - niceeee :)
3) RAID0 on the motherboard with 4 disks = 520MB/s - very niceeee :)
4) RAID0 on the controller with 4 disks gives 200MB/s - OK, PCIe limit.
5) RAID0 on the controller with 4 disks + 1 disk on the motherboard = 340MB/s ... :(
6) RAID0 on the controller with 4 disks + 2 disks on the motherboard = 300MB/s ... why? Any ideas? Maybe I need more CPU power? (See the mdadm sketch below.)
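For context, a striped array like the ones above is typically created with Linux software RAID (mdadm) along these lines - the device names and the four-disk layout are only examples, not necessarily the exact setup used in this thread:

    # Example: RAID0 across four disks (substitute your own device names)
    mdadm --create /dev/md0 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde

    # Benchmark the array the same way as the single disks
    hdparm -tT /dev/md0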



The box currently has a dual-core Pentium D at 2.8GHz and 4GB of RAM. It is a dedicated storage box - no other activity.
 
That CPU might be slow enough to be contributing to your problem. You could try having two RAID systems, one on the motherboard controller and one on the PCIe card, and use software RAID within Linux to merge the two arrays. At the very least, you might find the bottleneck in your system.
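A minimal sketch of that layered setup, assuming example device names (four motherboard disks sdb-sde, two controller disks sdf and sdg):

    # One RAID0 per controller
    mdadm --create /dev/md1 --level=0 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mdadm --create /dev/md2 --level=0 --raid-devices=2 /dev/sdf /dev/sdg

    # Stripe across the two arrays
    mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/md1 /dev/md2

    # Benchmarking md1, md2 and md0 separately shows where the bottleneck sits
    hdparm -tT /dev/md1
    hdparm -tT /dev/md2
    hdparm -tT /dev/md0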

I'm not a RAID expert, so if someone else says the CPU is not too slow for this (or anything else), you might want to take their word over mine. But considering it's a mere NetBurst CPU, even with two cores at 2.8GHz, it wouldn't be a surprise if it is too slow.

Regardless of the reasons for your bottlenecks, a newer computer would undoubtedly help, IMO.
 

expream

Honorable
Sep 8, 2012
Why? Could you please explain? Two disks in RAID0 on PCIe x1 give 200MB/s (the full speed of x1)... but when I merge motherboard ports with the PCIe controller, there is a speed drop...
 
That controller is weak, and it's on a weak bus to the CPU/chipset. Try putting four drives on the motherboard and two on the controller. Keep in mind, though, that the CPU is also weak and might struggle with RAID setups that rely on more CPU performance. As far as I can tell, that doesn't explain why four drives on the controller plus two on the motherboard perform worse than four on the controller plus one on the motherboard, but like I said, try four on the board and two on the controller and see if there is a difference.
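To put a number on "weak bus": a PCIe 1.x x1 link runs at 2.5 GT/s with 8b/10b encoding, i.e. 250MB/s of raw bandwidth per direction, and after packet/protocol overhead roughly 200MB/s usable is typical - which matches the 200MB/s ceiling seen on that controller. When comparing layouts, a large sequential read straight off the md device is a quick check (the array name below is only an example):

    # Sequential read that bypasses the page cache (example array name)
    dd if=/dev/md0 of=/dev/null bs=1M count=8192 iflag=direct

    # Or the same hdparm test used on the single disks
    hdparm -tT /dev/md0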
 

FireWire2

Distinguished
Going back and forth between two controllers costs a lot of overhead.
Switching buses is not as simple as you imagine - it takes many extra clock cycles, and during those cycles no data is moving. Especially with uneven PCIe buses.

With 4x HDD on the motherboard you got 520MB/s, but adding 1 more drive on the slow PCIe x1 controller cost you a 180MB/s reduction... That tells you the penalty of switching between uneven PCIe buses.

It is similar to copying 2TB made up of files tens of GB in size versus 2TB of files tens of MB in size.

 


Re-read the first post of the thread. The OP hasn't done what you're describing - there were no tests with four drives on the motherboard and one drive on the PCIe controller.
 

FireWire2

Distinguished


OPPPSS - You're right! Darn it

No clue now!
 

expream

Honorable
Sep 8, 2012
4 HDDs on the motherboard + 1 HDD on the controller gives about 540MB/s total.
4 HDDs on the motherboard + 2 HDDs on the controller gives about 560MB/s total.

So you think it's just because of the PCIe x1? Not enough throughput?
 
That controller is definitely a factor. However, by putting the majority of the drives on the motherboard, you at least get better performance than four drives on the motherboard alone while still being able to use all six. Are you happy with your current performance, or do you think you should try to get more? I think 560MB/s (assuming it really is megabytes per second you're measuring, not megabits) should be good for six hard drives.

As I said earlier, I'm not an expert on this, but IIRC there is more overhead with more drives in a single RAID array, so even with a better controller you might see it struggle to break past 600MB/s or 700MB/s. You could try different stripe sizes to see if a particular size works better for your workload; it might help your performance if you're not already using the best stripe size for your work.
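In Linux software RAID the stripe (chunk) size is fixed per array at creation time, so trying a different size means recreating the array - which destroys its contents, so back up first. A rough sketch with example device names:

    # Stop the existing array, then recreate it with a different chunk size
    mdadm --stop /dev/md0
    # --chunk is in kibibytes; try e.g. 64, 128, 256, 512 and re-benchmark each time
    mdadm --create /dev/md0 --level=0 --chunk=256 --raid-devices=4 /dev/sdb /dev/sdc /dev/sdd /dev/sde
    hdparm -tT /dev/md0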