Is my onboard RAID killing system throughput?

Here is the setup...

Supermicro X7SBA motherboard
2GB ECC memory
Xeon 3065 dual core
Pioneer DVD±RW (SATA)
Promise FastTrak TX4310 RAID controller
Supermicro SC743 case w/ hot-swap bay
dual gigabit LAN on a gigabit network
Windows Server 2003 R2 SP1 - all updates

I am running the following HDD arrays; all use SATA Seagate drives:

onboard controller (ICH9R)
3 x 160GB drives in RAID 5 (boot)
2 x 750GB drives in RAID 1 (mirrored)

Promise controller
3 x 250GB drives in RAID 5

This is what is happening. I use the machine as a file/print server under Server 2003 R2. I just built this system, and the throughput is horrible. When I plug in an external USB drive to transfer files from my old server, it chokes. It takes 30-40 minutes to transfer a few gigs, maybe ~10-15GB total. I had to quit because it was so terribly slow, and the Windows transfer estimate fluctuates wildly... it will read 24 minutes, then 60, then 40, then 75. It takes forever, way too slow.

I also run automated nightly backups from my two workstations to this system. Both the server and the workstations have gigabit NICs, and I use a gigabit switch. I was transferring about 8GB and it was taking 26-30 HOURS. My old setup could move about 75GB over a 100Mb/s network to my old server (Athlon 2600) in about 4 to 4.5 hours.

Something is terribly wrong here. I have had the system running for only a few days, and I am thinking my problems stem from choking the ICH9R chipset, which (from my understanding) handles the USB ports, the SATA ports, and possibly some of the PCI slots. Even the RAID 5 array on the Promise controller stinks; transfers to those drives are also slow, though not quite as bad. I haven't tested that array as much as the others, but I do know it is not on par.

I am open to buying a real hardware RAID card, but I don't want to spend $500+ if it isn't the solution. I am looking at something like a 3ware 9550SXU-8LP. I would like a card that supports three arrays, at least for now; at some point I might consolidate arrays as I purchase bigger drives.

I would appreciate suggestions on the throughput issues and on possible RAID cards. I realize now that I should not have tried to build my own server, or at least should have done a little more research before starting. I'm into this project for too much $$ at this point to walk away, but I don't want to keep feeding it money if that won't help. As it stands, this server is useless.

I really appreciate any help.
  1. I assume each RAID group is a different logical drive?

    Could you post transfer times from each drive to each of the others? That would isolate the ICH9 bus from the USB/network ones, if they are indeed shared; a rough timing loop like the sketch at the end of this post would give comparable numbers.

    The Promise controller is PCI, not PCI-X, right?
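
    A minimal sketch of what I mean, assuming Python is installed and that C:, E:, and F: map to your three arrays (the drive letters and file names here are just placeholders). Seed C:\test.bin with one large file first; a file bigger than your 2GB of RAM gives fairer numbers, since Windows will cache a small one:

        # Copies a large test file between every pair of logical drives and
        # reports MB/s, so you can see which hop (if any) is the slow one.
        import os, shutil, time

        paths = {"intel_r5":   r"C:\test.bin",   # 3 x 160GB RAID 5 (boot)
                 "intel_r1":   r"E:\test.bin",   # 2 x 750GB RAID 1
                 "promise_r5": r"F:\test.bin"}   # 3 x 250GB RAID 5

        seed = paths["intel_r5"]                 # the pre-seeded copy
        for p in paths.values():
            if p != seed:
                shutil.copyfile(seed, p)         # give every drive the file

        size_mb = os.path.getsize(seed) / (1024.0 * 1024.0)

        for src_name, src in paths.items():
            for dst_name, dst in paths.items():
                if src_name == dst_name:
                    continue
                start = time.time()
                shutil.copyfile(src, dst)        # overwriting is fine here
                print("%s -> %s: %.1f MB/s"
                      % (src_name, dst_name, size_mb / (time.time() - start)))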
  2. Correct, each RAID group is a different logical drive; three in total (320GB, 750GB, and 500GB).

    And correct, the Promise controller is PCI, but I believe I have it in a PCI-X slot.

    I'm not really familiar with any testing software that would measure that, but I do know the drives choke under sustained transfer. Copying a 250MB file between any two drives takes about 6-7 seconds; when I try to copy between multiple logical drives at the same time, the system chops and it takes 30-40 seconds.

    When I try transferring a 1.2GB .pst file between the Intel and Promise arrays - both ways, several times - it takes about 4 minutes. If you break that down (roughly 1,200MB over 240 seconds), you are looking at about 5 MB/s. I don't know how precise that is.

    If you know of an application for more accurate testing, I'm more than happy to give it a shot.
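
    In the meantime, here's a crude timed write/read I could try on each array; just a sketch assuming Python is installed, and F:\bench.tmp is a placeholder path:

        # Writes then reads a 512MB file on the target array and prints MB/s.
        # fsync keeps the write number honest; the read number can still be
        # inflated by the Windows file cache, so a file larger than RAM (or
        # a reboot in between) gives a fairer read figure.
        import os, time

        path = r"F:\bench.tmp"               # point at the array under test
        chunk = b"\0" * (4 * 1024 * 1024)    # 4MB blocks
        total_mb = 512

        start = time.time()
        with open(path, "wb") as f:
            for _ in range(total_mb // 4):
                f.write(chunk)
            f.flush()
            os.fsync(f.fileno())             # force data to disk
        print("write: %.1f MB/s" % (total_mb / (time.time() - start)))

        start = time.time()
        with open(path, "rb") as f:
            while f.read(4 * 1024 * 1024):
                pass
        print("read:  %.1f MB/s" % (total_mb / (time.time() - start)))
        os.remove(path)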

  3. Your problem is that neither of these controllers has write-back cache, and all the parity "work" is done in software. I suggest you either get a RAID controller with cache and hardware XOR or change your arrays to RAID 1.

    The parity isn't needed when reading, hence the disparity between read and write speeds.
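
    To make the write penalty concrete, here's a toy sketch (the values are made up; only the XOR bookkeeping matters) of what your CPU has to do for every small RAID 5 write:

        # RAID 5 parity is the XOR of the data blocks. Updating one block
        # means reading the old data and old parity, XORing twice, and
        # writing two blocks back: four I/Os plus CPU work per small write.

        def xor_blocks(a, b):
            return bytes(x ^ y for x, y in zip(a, b))

        # a 3-drive stripe: two data blocks plus one parity block
        d0 = bytes([0x11] * 4)
        d1 = bytes([0x22] * 4)
        parity = xor_blocks(d0, d1)

        # small write: replace d0. Read old d0 and old parity, then
        # new_parity = old_parity ^ old_d0 ^ new_d0
        new_d0 = bytes([0x99] * 4)
        new_parity = xor_blocks(xor_blocks(parity, d0), new_d0)

        # sanity check: parity of the updated stripe is consistent
        assert new_parity == xor_blocks(new_d0, d1)

    A controller with write-back cache and a hardware XOR engine hides most of that; without them, every write costs two extra reads plus the XOR on your CPU, which fits the slow writes you're describing.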
