Need your opinion on my SCSI RAID setup

Hi everybody,
I'll try to make this as simple as possible. I recently added a third hard drive to my SCSI setup, so I am now running three Fujitsu MAU3147s in RAID 0. I ran some benchmarks using HD Tach and I wasn't very pleased with the results.
Here are my benchmarks:
This is my setup with two drives in RAID 0:
http://f10.putfile.com/thumb/8/22008550289.jpg

This is my setup with three drives in RAID 0:
http://f10.putfile.com/thumb/8/22118570410.jpg

I have the impression that I have hit a bottleneck somewhere; I was expecting better results.
Here are my specs.
CPU: Athlon 64 FX-55
Mobo: Asus A8N-E
HDD: Fujitsu MAU3147 (I have three of these)
Controller: LSI Logic MegaRAID 320-2E
Cables: Amphenol Ultra320-certified

Do you think my results are OK? The reason I think I have hit a bottleneck is that the burst speed and the average read speed are almost the same. I don't know where the limitation is: my controller is running in a PCI-E x8 slot, all of the controller's settings seem fine, and my cables are Ultra320 certified.

Anyway, I will gladly listen to your comments and any thoughts/opinions.
Thanks for your time
  1. I'm really tired and should be heading to bed soon, so I don't know how much help I'll be, but I agree with you that there's definitely a problem; that's a throughput bottleneck. Going by the two-drive RAID 0 results, each drive seems to be delivering about 100 MB/s, so adding a third drive in RAID 0 should get you close to 300 MB/s (assuming ~100 MB/s each). PCI-E x8 should have a bandwidth of about 2100 MB/s, not 200 (rough math at the end of this post).

    I could say it's maybe a configuration or setting problem, but I don't know many specifics about how you have things set up.

    You could even try a different set of controller or motherboard chipset drivers.

    You could also try a few different benchmarks, because that one in particular might just be a fluke; either way, the results obviously aren't where they should be.

    The Raptor array I have, for instance, scores just over 210 MB/s in burst speed in the same benchmark, so there is definitely a problem with what you have. Honestly, I know I should be getting higher than that too, but as far as I'm aware that is specifically a performance issue with my onboard nvRAID controller.
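
    A quick back-of-the-envelope check of the numbers above, assuming PCI-E 1.x lanes at roughly 250 MB/s each and roughly 100 MB/s sustained per drive (both figures are approximations, not measurements):

    # rough figures only: ~250 MB/s per PCI-E 1.x lane, ~100 MB/s sustained per MAU3147
    echo $(( 8 * 250 ))   # PCI-E x8 slot: ~2000 MB/s available to the controller
    echo $(( 3 * 100 ))   # three drives striped in RAID 0: ~300 MB/s expected
    # so a ceiling near 190 MB/s points at something other than the slot bandwidth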
  2. I am not really well versed in the PCIe interface. I have the same drives as you, but I use an Adaptec 29320lpr card on a 133 MHz PCI-X bus. What I was wondering, from what I have read, is whether the PCIe bus is shared with another interface, video I think. Maybe someone here can educate us both.
  3. Well, to be sure that the PCI-E bus would not be shared with any video card, I installed my good old Voodoo3, which is PCI, and put the SCSI controller in the PCI-E x16 slot. I wanted to make sure the PCI-E bus would not be shared.
    I did some more benchmarks using a different program, but with the same disappointing results:
    http://f10.putfile.com/thumb/8/22214393360.jpg

    @PCcashCow:
    How many drives are you running in your setup? Could you post any HD Tach results?
  4. I did some tests a few years ago; my setup is four years old now. After the weekend I'll run some tests to see what I have compared to you. Again, I am on a PCI-X bus; I'll also test it on a regular PCI slot just to give you a basic baseline. I should be home Sunday and will try to remember to do so. I'd do it now, but my router must be down because I can't RDP to it.
  5. Do you know if there are any jumper settings on the drive which might be limiting me to U160 instead of U320?

    I was looking at this table but I can't find anything:
    http://www.fel.fujitsu.com/home/v3__product.asp?pid=398&inf=cfg&wg=0
  6. Thanks for the tip, but Windows won't let me change anything under the Policies tab:

    http://f10.putfile.com/thumb/8/22405520083.jpg
  7. I already have the latest BIOS and drivers.
  8. Run IOzone under Windows and save the results.

    Boot Linux (ideally FC5 x86_64), run IOzone there, then compare the two sets of results.

    GL :-D
  9. Hmm, do you think Linux will make a difference? How do you run this I/O benchmark?
  10. You have bottlenecked the SCSI bus. In real-world use the U320 connection probably gives only about 250 MB/s (to all disks on the same bus), so the results you have are in line with expectations.

    You would be better off with SATA disks; there is more headroom for large arrays.
  11. http://www.iozone.org/

    win32:

    http://www.iozone.org/src/current/IozoneSetup.zip

    Linux source:

    http://www.iozone.org/src/current/iozone3_263.tar

    tar xvf iozone3_263.tar
    cd iozone3_263/src/current
    make linux-AMD64
    ./iozone -h   # for help
    ./iozone      # add appropriate options -- same as the settings you used on Windows
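
    For example, something like the run below exercises sequential write and read on a single test file. The file path, size, and record size are just placeholders I am assuming; point the file at the RAID 0 volume and use a size well above the controller cache:

    # -i 0 = sequential write/rewrite, -i 1 = sequential read/re-read
    # 2 GB file with 64 KB records; /mnt/raid/testfile is a placeholder path
    ./iozone -i 0 -i 1 -s 2g -r 64k -f /mnt/raid/testfile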
  12. I wouldn't say that I'm bottlenecking the SCSI bus. Even if the actual throughput of U320 is about 250 MB/s, I am still at 180 MB/s, which is a lot lower.
  13. What is the throughput of those drives supposed to be? It is not 320 MB/s; see what you can find.


    Try the following to troubleshoot your setup:
    Benchmark each drive separately.
    Check and make sure your PCI-Express slot is running at least at x8 (a quick way to check this from Linux is sketched after this list).
    Can you turn your cache off? If so, do that and run the benchmark again.
    Can you borrow another cable? Try it with a new cable.
    Try it on another channel.
    Change positions on the cable.
    Get a flat ribbon cable and try it, as opposed to the braided cable.
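
    If you do end up booting Linux for IOzone, you can also check the negotiated PCI-E link width from there. Something along these lines should work; the bus address below is only a placeholder, use whatever address lspci reports for the MegaRAID:

    # find the controller's PCI address (run as root for the full capability dump)
    lspci | grep -i megaraid
    # then dump its link status; "Width x8" means the slot actually negotiated 8 lanes
    lspci -vv -s 05:00.0 | grep -i lnksta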
  14. I have just tested each drive by itself.

    MAU3147NP
    http://f10.putfile.com/thumb/8/23106325173.jpg

    MAU3147NC
    http://f10.putfile.com/thumb/8/23106332245.jpg

    MAU3147NC
    http://f10.putfile.com/thumb/8/2310633479.jpg

    From what I understand, each drive is working properly and the cables/adapters are fine. It is worth noting that the burst speed of every drive is about 190 MB/s, so in my opinion there is still a bottleneck. If you look at my first post, with two drives in RAID 0 the burst speed was about 190 MB/s, and with three drives in RAID 0 both burst and average read speeds were about 190 MB/s.
    Also, I moved the controller to my PCI-E x4 slot and found that the bottleneck occurred at 155 MB/s. I then raised the PCI-E bus from 100 to 105 MHz just to see if there was any difference, and the bottleneck occurred at 163 MB/s.
    I don't understand this; I am sure PCI-E x4 is a lot faster than 155 MB/s and PCI-E x16 is way faster than 190 MB/s. Could it really be a bus limitation? (Rough lane math at the end of this post.)

    So far, from what I can think of, the problem might be:
    1) a wrong setting which I can't figure out
    2) a lousy LSI controller
    3) a bus limitation (a bad mobo, maybe?)
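
    For reference, a rough lane-count calculation using the same assumption as earlier in the thread (about 250 MB/s per PCI-E 1.x lane; approximate figures, not measurements):

    echo $(( 4 * 250 ))    # PCI-E x4: ~1000 MB/s, well above the 155 MB/s ceiling I saw
    echo $(( 16 * 250 ))   # PCI-E x16: ~4000 MB/s, well above 190 MB/s
    # so the 155/163/190 MB/s ceilings do not look like raw PCI-E bandwidth limits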
  15. I am planning on getting a new mobo soon. I am thinking about the Asus A8N32-SLI. I don't need SLI, but with the two PCI-E slots running at full x16 lanes there shouldn't be any bus limitation for my XFX 7950 and the SCSI controller.