I will try to make this as simple as possible. I recently added a 3rd hard drive to my SCSI setup and am now running 3 Fujitsu MAU3147s in RAID 0. I ran some benches using HD Tach and I wasn't very pleased with the results.
Here are my benches:
This is my setup with 2 drives in RAID 0
This is my setup with 3 drives in RAID 0
I have the impression that I have hit a bottleneck somewhere; I was expecting better results.
Here are my specs:
Mobo: Asus A8N-E
HDD: Fujitsu MAU3147 (I have 3 of these)
LSI LOGIC MegaRAID 320-2E
Amphenol Ultra320 certified cables
Do you think my results are OK? The reason I think I have hit a bottleneck is that the burst speed and the average read speed are almost the same. I don't know where the limitation is; my controller is running in a PCI-E 8x slot, all the settings of the controller seem fine, and my cables are Ultra320 certified.
Anyway, I will gladly listen to your comments and any thoughts/opinions.
Thanks for your time
I'm really tired and should actually be heading to bed real soon, so I don't know how much help I'll be, but I'll agree with you that there's definitely a problem. That's definitely a throughput bottleneck: going by the 2-drive RAID 0 results, each drive seems to be getting about 100 MB/s, so adding a third drive in RAID 0 should get you close to 300 MB/s (assuming ~100 MB/s each). PCIe 8x should have a bandwidth of about 2 GB/s, not 200 MB/s.
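The back-of-the-envelope math above can be sketched out like this; the ~100 MB/s per-drive figure and the ~250 MB/s per PCIe 1.x lane figure are assumptions taken from the thread, not measurements:

```python
# Rough sanity check of expected RAID 0 read scaling, capped by the bus.
# Assumptions: ~100 MB/s sustained per MAU3147, PCIe 1.x at ~250 MB/s
# per lane in each direction (so x8 is roughly 2000 MB/s).
PER_DRIVE_MB_S = 100
PCIE_X8_MB_S = 8 * 250

def expected_raid0_read(drives, per_drive=PER_DRIVE_MB_S, bus=PCIE_X8_MB_S):
    """Ideal striped read rate: drives scale linearly until the bus caps them."""
    return min(drives * per_drive, bus)

for n in (2, 3):
    print(f"{n} drives: ~{expected_raid0_read(n)} MB/s expected")
```

With these numbers a 3-drive array should sit near 300 MB/s, nowhere near the x8 link's ceiling, which is why a ~190 MB/s plateau looks like a bottleneck somewhere else.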
It could be a configuration or settings problem, but I don't know many specifics about how you have things set up.
You could even try a different set of controller or motherboard chipset drivers.
You could also try a few different benchmarks, because that one in particular might be a fluke with regard to the results you're getting, which obviously aren't where they should be.
The Raptor array I have, for instance, scores just over 210 MB/s in burst speed in the same benchmark, so there's definitely a problem with what you have. Honestly, I know I should be getting higher than that too, but as far as I'm aware that's specifically a performance issue with my onboard nvRAID controller.
I am not really well versed on the PCIe interface. I have the same drives as you, but use an Adaptec 29320LPR card on a 133 MHz PCI-X bus. But from what I have read, the PCIe bus is shared with another interface, video I think. So maybe someone here can educate us both.
Well, to be sure that the PCI-E bus would not be shared with any video card, I installed my good old Voodoo3, which is PCI, and put my SCSI controller in the PCI-E 16x slot. I wanted to make sure the PCI-E bus would not be shared.
I did some more benches using a different program, but with the same disappointing results.
How many drives are you running in your setup? Could you post any HD Tach results?
I did some tests a few years ago; my setup is four years old now. After the weekend I'll run some tests to see what I have compared to you. Again, I am on a PCI-X bus; I'll test it on a regular PCI slot just to give you a baseline bench. I should be home Sunday and will try to remember to do so. I'd do it now, but my router must be down because I can't RDP to it.
What is the throughput for those drives supposed to be? It is not 320 MB/s; see what you can find.
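That 320 MB/s figure is the Ultra320 channel's ceiling, not a per-drive rate. A quick sketch of whether a single shared channel could cap the array; the ~95 MB/s sustained rate per MAU3147 is a datasheet-ballpark assumption, not from this thread:

```python
# Could one shared Ultra320 channel cap a striped array?
# Ultra320 tops out at 320 MB/s per channel; assume ~95 MB/s
# sustained per MAU3147 (rough datasheet figure, an assumption).
ULTRA320_MB_S = 320
PER_DRIVE = 95

for drives in (2, 3):
    ideal = drives * PER_DRIVE
    capped = min(ideal, ULTRA320_MB_S)
    print(f"{drives} drives on one channel: ideal {ideal} MB/s, capped at {capped} MB/s")
```

So even with all three drives hanging off one channel, the sustained ceiling would be around 285 MB/s, still well above the ~190 MB/s plateau being reported.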
Try the following to troubleshoot your setup:
Do the benchmarks on each drive separately.
Check and make sure your PCI-Express slot is running at at least x8
Can you turn your cache off? If so, do that and run the bench again
Can you borrow another cable? Try it with a new cable
Try it on another channel
Change positions on the cable
Get a flat ribbon cable and try it -- as opposed to the -braided- cable
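For the first step, benchmarking each drive separately, here is a minimal sequential-read sketch if you want a second opinion besides HD Tach. The path is a hypothetical example; point it at a large test file on the drive under test, and note that unless the file is bigger than your RAM, the OS cache can inflate the numbers:

```python
# Minimal sequential-read throughput check for a single drive.
# Reads a file in large blocks and reports MB/s. Results reflect the
# OS cache unless the file is larger than available RAM.
import time

def sequential_read_mb_s(path, block_size=1024 * 1024, max_bytes=256 * 1024 * 1024):
    """Time large sequential reads from `path`; return throughput in MB/s."""
    total = 0
    start = time.perf_counter()
    with open(path, "rb") as f:
        while total < max_bytes:
            chunk = f.read(block_size)
            if not chunk:
                break
            total += len(chunk)
    elapsed = time.perf_counter() - start
    return total / elapsed / 1e6

# Example (hypothetical path on the drive being tested):
# print(sequential_read_mb_s("D:/testfile.bin"))
```

If each drive alone reads at the expected rate with this, the drives and cables are likely fine and the problem is in the array or controller configuration.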
From what I understand, each drive is working properly and the cables/adapters are fine. It is worth noting that the burst speed of every drive is about 190 MB/s, and in my opinion there is still a bottleneck. If you look at my first post, with 2 drives in RAID 0 the burst speed was about 190 MB/s, and with 3 drives in RAID 0, both burst and average read speeds were about 190 MB/s.
Also, I moved the controller to my PCI-E 4x slot and found that the bottleneck occurred at 155 MB/s. I then raised the PCI-E bus from 100 to 105 MHz just to see if there was any difference, and the bottleneck occurred at 163 MB/s.
I don't understand this; I am sure PCI-E 4x is a lot faster than 155 MB/s and PCI-E 16x is far faster than 190 MB/s. Could it really be a bus limitation?
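The numbers do back that up. A quick comparison of theoretical PCIe 1.x link bandwidth against the plateaus reported above, assuming the usual ~250 MB/s usable per lane per direction:

```python
# Theoretical PCIe 1.x bandwidth by link width (~250 MB/s per lane,
# each direction) versus the plateaus reported in the thread.
LANE_MB_S = 250

for lanes, observed in ((4, 155), (16, 190)):
    theoretical = lanes * LANE_MB_S
    print(f"x{lanes}: ~{theoretical} MB/s theoretical, {observed} MB/s observed")
```

Even an x4 link should carry around 1000 MB/s, so a 155 MB/s plateau is far below any PCIe limit; the slight gain from overclocking the bus suggests the controller or its negotiated link, not raw lane bandwidth, is the constraint.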
So far, from what I can think of, the problem might be:
1) A wrong setting which I can't figure out
2) A lousy LSI controller
3) A bus limitation (bad mobo, maybe?)
I am planning on getting a new mobo soon. I am thinking about the Asus A8N32-SLI. I don't need SLI, but with the 2 PCI-E slots running at full 16x lanes there shouldn't be any bus limitation for my XFX 7950 and the SCSI controller.