Need your opinion on my scsi raid set up

  • NAS / RAID
  • SCSI
  • Hard Drives
  • Storage
Last response: in Storage
August 11, 2006 8:19:59 AM

Hi everybody
I will try to make this as simple as possible. I recently added a third hard drive to my SCSI setup, so I am now running three Fujitsu MAU3147s in RAID 0. I ran some benchmarks using HD Tach and I wasn't very pleased with the results.
Here are my benches:
This is my setup with two drives in RAID 0:

This is my setup with three drives in RAID 0:

I have the impression that I have hit a bottleneck somewhere; I was expecting better results.
Here are my specs.
CPU: Athlon FX-55
Mobo: Asus A8N-E
HDD: Fujitsu MAU3147 (I have 3 of these)
Cables: Amphenol Ultra320-certified

Do you think my results are OK? The reason I think I have hit a bottleneck is that the burst speed is almost the same as the average read speed. I don't know where the limitation is: my controller is running in a PCI-E x8 slot, all the controller settings seem fine, and my cables are Ultra320 certified.

Anyway, I will gladly listen to your comments and any thoughts/opinions.
Thanks for your time.


August 11, 2006 10:17:55 AM

I'm really tired and should be heading to bed soon, so I don't know how much help I'll be, but I'll agree with you that there's definitely a problem: that's definitely a throughput bottleneck. Going by the 2-drive RAID 0 results, each drive seems to be delivering about 100 MB/s, so adding a third drive in RAID 0 should get you close to 300 MB/s (assuming ~100 MB/s each). PCI-E x8 should have a bandwidth of about 2000 MB/s, not 200.

It could be a configuration or settings problem, but I don't know many specifics about how you have things set up.

You could even try a different set of controller or motherboard chipset drivers.

You could also try a few different benchmarks, because that one in particular might just be a fluke in terms of the results you're getting, which obviously aren't where they should be.

The Raptor array I have, for instance, scores just over 210 MB/s in burst speed on the same benchmark, so there's definitely a problem with what you have. Honestly, I know I should be getting higher than that too, but as far as I'm aware that's specifically a performance issue with my onboard nvRAID controller.
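To put numbers on that reasoning, here's a minimal sketch of the RAID 0 scaling math, assuming the ~100 MB/s per-disk figure from this reply (a ballpark estimate, not a datasheet number):

```python
# Ideal RAID 0 scaling: striping should grow sequential throughput
# roughly linearly with the number of disks. The ~100 MB/s per-disk
# figure is this thread's ballpark estimate, not a measured spec.

PER_DISK_MBS = 100  # assumed per-drive sequential read, MB/s

def ideal_raid0_mbs(disks: int, per_disk: float = PER_DISK_MBS) -> float:
    """Best-case striped sequential throughput in MB/s."""
    return disks * per_disk

print(ideal_raid0_mbs(2))  # 200 -- roughly what the 2-drive bench showed
print(ideal_raid0_mbs(3))  # 300 -- the expectation the ~190 MB/s result misses
```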
August 11, 2006 1:11:56 PM

I am not really well versed in the PCIe interface. I have the same drives as you, but use an Adaptec 29320LPR card on a 133 MHz PCI-X bus. What I was wondering about is this: from what I have read, the PCIe bus is shared with another interface, video I think. Maybe someone here can educate us both.
August 11, 2006 6:46:05 PM

Well, to be sure that the PCI-E bus would not be shared with any video card, I installed my good old Voodoo3, which is PCI, and put my SCSI controller in the PCI-E x16 slot. I wanted to make sure the PCI-E bus would not be shared.
I did some more benchmarks using a different program, but with the same disappointing results.

How many drives are you running in your setup? Could you post any HD Tach results?
August 11, 2006 6:53:46 PM

I did some tests a few years ago; my setup is four years old now. After the weekend I'll run some tests to see how mine compares to yours. Again, I am on a PCI-X bus; I'll also test on a regular PCI slot just to give you a baseline bench. I should be home Sunday and will try to remember to do so. I'd do it now, but my router must be down because I can't RDP to it.
August 13, 2006 9:54:29 AM

Thanks for the tip, but Windows won't let me change anything under the Policies tab.

August 13, 2006 11:01:02 AM

I already have the latest BIOS and drivers.
August 13, 2006 11:37:00 AM

Run IOZone under Windows and save the results.

Boot Linux (ideally FC5 x86_64), run IOZone again, then compare the two sets of results.

GL :-D
August 13, 2006 2:10:54 PM

Hmm, do you think Linux will make a difference? How do you run this I/O benchmark?
August 13, 2006 2:34:29 PM

You have bottlenecked the SCSI bus. In real-world use the U320 connection probably delivers only about 250 MB/s (to all disks on the same bus), so the results you have are in line with expectations.

You would be better off with SATA disks; there's more headroom for large arrays.
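As a sketch of that shared-bus argument (the ~250 MB/s real-world figure is this poster's estimate, not a spec):

```python
# Ultra320 SCSI is a shared parallel bus: all drives on one channel
# split a single pipe. 320 MB/s is the theoretical ceiling; the
# ~250 MB/s real-world figure is an assumption taken from this post.

U320_REAL_MBS = 250  # assumed usable bus bandwidth, MB/s

def per_drive_share(drives: int, bus_mbs: float = U320_REAL_MBS) -> float:
    """Bandwidth left per drive when the channel is saturated."""
    return bus_mbs / drives

for n in (1, 2, 3):
    print(f"{n} drive(s): {per_drive_share(n):.0f} MB/s each")
```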
August 13, 2006 11:18:56 PM

I wouldn't say I'm bottlenecking the SCSI bus. Even if the actual throughput of U320 is about 250 MB/s, I am still at 180, which is a lot lower.
August 17, 2006 7:43:53 AM

What is the throughput for those drives supposed to be? It is not 320 MB/s; see what you can find.

Try the following to troubleshoot your setup:
Do the benchmarks on each drive separately.
Check and make sure your PCI-Express slot is running at least at x8.
Can you turn your cache off? If so, do that and run the bench again.
Can you borrow another cable? Try it with a new cable.
Try it on another channel.
Change positions on the cable.
Get a flat ribbon cable and try it, as opposed to the braided cable.
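Alongside those hardware checks, a crude software cross-check on the per-drive numbers is possible. This is a rough sequential-read timer, not an HD Tach replacement; the file path is a placeholder, and OS caching will inflate results on files much smaller than RAM:

```python
# Rough sequential-read throughput timer. Point it at a large file on
# the drive under test. Caveat: the OS page cache can inflate numbers,
# so use a file several times larger than RAM for honest results.
import os
import time

def read_mbs(path: str, chunk: int = 1 << 20) -> float:
    """Return average sequential read speed in MiB/s."""
    size = os.path.getsize(path)
    start = time.perf_counter()
    with open(path, "rb") as f:
        while f.read(chunk):
            pass
    elapsed = time.perf_counter() - start
    return (size / (1 << 20)) / elapsed

# Hypothetical usage -- adjust the path to a big file on the test drive:
# print(read_mbs(r"D:\bench\testfile.bin"))
```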
August 20, 2006 12:52:37 PM

I have just tested each drive by itself.

From what I understand, each drive is working properly and the cables/adapters are fine. It is worth noting that the burst speed of every drive is about 190 MB/s, and in my opinion there is still a bottleneck. If you have a look at my first post, with 2 drives in RAID 0 the burst speed was about 190 MB/s, and with 3 drives in RAID 0 both burst and average read speeds were about 190 MB/s.
Also, I moved the controller to my PCI-E x4 slot and found that the bottleneck occurred at 155 MB/s. I then raised the PCI-E bus from 100 to 105 MHz just to see if there was any difference, and the bottleneck occurred at 163 MB/s.
I don't understand this. I am sure PCI-E x4 is a lot faster than 155 MB/s, and PCI-E x16 is far faster than 190 MB/s. Could it really be a bus limitation?

So far, from what I can think of, the problem might be:
1) A wrong setting which I can't figure out
2) A lousy LSI controller
3) A bus limitation (bad mobo maybe?)
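The lane arithmetic behind option 3 can be sketched as follows, assuming first-generation PCI-E at roughly 250 MB/s per lane per direction (before protocol overhead):

```python
# PCIe 1.x signals at 2.5 GT/s per lane; with 8b/10b encoding that is
# roughly 250 MB/s per lane per direction before protocol overhead.

LANE_MBS = 250  # assumed usable bandwidth per PCIe 1.x lane, MB/s

def pcie_ceiling_mbs(lanes: int, lane_mbs: float = LANE_MBS) -> float:
    """Approximate one-direction bandwidth ceiling for a slot."""
    return lanes * lane_mbs

print(pcie_ceiling_mbs(4))   # 1000 -- vs the ~155 MB/s plateau in the x4 slot
print(pcie_ceiling_mbs(16))  # 4000 -- vs the ~190 MB/s plateau in the x16 slot
```

Either way, both slots should have ample headroom over the plateaus observed, which points away from the PCI-E link itself.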
August 22, 2006 2:16:47 PM

I am planning on getting a new mobo soon. I am thinking about the Asus A8N32-SLI. I don't need SLI, but with the 2 PCI-E slots running at full x16 lanes there shouldn't be any bus limitation for my XFX 7950 and the SCSI controller.