
Will multiple controllers improve RAID-0 performance?

Tags:
  • SAS
  • NAS / RAID
  • Storage
September 8, 2006 12:42:39 AM

I'm setting up a 4-drive RAID-0 array with 36 GB Hitachi Ultrastar SAS drives. My motherboard has 2 PCI-X slots, so in theory I could go with 2 controller cards. The cards each have 2 internal SAS connectors that can each handle 4 drives. Could I put one SAS drive on each connector (2 drives per card) for improved performance, if that's even possible? Both cards would be identical Adaptec cards.

OS and such resides on its own SAS drive.

Also, what impact does controller card memory have with RAID-0? I could spend more for a card with more memory rather than shell out for 2 cards.


September 8, 2006 1:01:29 AM

Because of the way PCI-X works (true 64-bit), you're already getting a lot of performance from these drives. I have a single 68-pin U320 SCSI setup on a PCI-X bus, and the performance was much better than on a 32-bit PCI 2.0 bus. I think your idea may give you a slight edge, but you'll surely pay a premium for it. If you do it, kindly post some HDD stats from HD Tach for each config; I'm almost to the point of wanting SAS drives too.
September 8, 2006 1:05:12 AM

So setting up a single RAID-0 array spanning two cards shouldn't be too troublesome?

Also, if I pull the trigger I'll post all the numbers. If I do it, I'm getting the hardware this weekend.
September 8, 2006 1:05:20 AM

Why not go RAID-10?
September 8, 2006 1:13:48 AM

Quote:
Why not go RAID-10?


I could, but there is no data of any critical importance on the RAID-0 array. The whole thing could die and I could have it back up in fairly short order. I'm after the full RAID-0 performance in any event. Data protection just doesn't enter into the equation (much).
September 8, 2006 1:16:32 AM

You know, I have to pull back now that I ponder on it; I'm not sure how the two cards would see each other's LUNs. That may be a question for Adaptec. I know it's possible in software RAID.

Quote:
rquinn19

RAID 10.....

Low storage efficiency limits the potential array capacity. Great for servers, but no point here.
September 8, 2006 1:26:16 AM

I don't think you will see any performance benefit from this at all; the combined speed of the drives will not exceed what the interface can support. Only if the drives could push/pull more data than the interface supports would a second card be needed. You might get a benefit from having the OS and data drives on separate controllers, but I'm not too sure that would be meaningful either.
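A quick back-of-envelope sketch of why (the ~90 MB/s sustained per drive is my own ballpark assumption, not a spec-sheet number):

Code:
    # Can four drives saturate one PCI-X slot? Rough numbers only.
    per_drive_mb_s = 90                   # assumed sustained throughput per 36 GB SAS drive
    drives = 4
    array_mb_s = per_drive_mb_s * drives  # ~360 MB/s streaming off the RAID-0 set

    pci_x_133_mb_s = 64 / 8 * 133         # 64-bit bus at 133 MHz ~= 1064 MB/s

    print(array_mb_s, pci_x_133_mb_s)     # 360 1064.0 -> one card has plenty of headroom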
September 8, 2006 1:33:18 AM

Well, here is another question that I may post a second thread on: would I see a greater/meaningful benefit from a PCI-E SAS card than the PCI-X card I am planning on?

Quote:
You might get a benefit from having the OS and data drives on separate controllers


This is a possibility I am considering as well.
September 8, 2006 1:39:34 AM

What are you going to be running on this machine that you are so concerned with getting the utmost drive performance? Even a heavy DB system might not tax a four-drive RAID 0 with either type of card; standard SATA with software RAID might even suffice.
September 8, 2006 1:44:54 AM

It has nothing to do with being practical. At all. In fact, it's a ridiculous waste of money. But that's not the point. I'll probably run pacman on it.
September 8, 2006 2:01:47 AM

Well, that may be the case, but at a certain point you need to ask yourself if the $$ would be better spent on other components. Your original post also asked about memory on the card: it is basically a cache, so it makes sense to have more memory on the controller if your application can use it, say a networked DB application that frequently hits the same data, or a large amount of data coming into the controller faster than the drives can keep up.
If you are going to be opening and saving huge Photoshop files all day long, then a system like this makes sense; other than that, it's an expensive proof of concept.
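Just to put some numbers on the cache point (every figure here is made up for illustration):

Code:
    # Rough illustration of what controller cache buys you; all figures are assumptions.
    cache_mb = 256   # hypothetical write-back cache on the controller
    inflow   = 500   # MB/s, burst arriving from the host
    drain    = 360   # MB/s, what the drives can actually sustain

    # The cache absorbs the difference until it fills up:
    burst_seconds = cache_mb / (inflow - drain)
    print(round(burst_seconds, 1))   # ~1.8 s of burst soaked up; helps bursty I/O,
                                     # irrelevant for long sustained streams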
September 8, 2006 12:36:27 PM

PCI-X is pure 64-bit; it can reach roughly 8 Gbit/s, or around 1,000 MB/s. Because of its parallel architecture it can sustain SCSI traffic upstream and downstream better than PCI-E can at the moment, but as PCI-E technology grows, controller cards that support a true unshared x16 link will overtake PCI-X. The real-life difference in how the drives perform on each interface is very small; check Tom's for a column on the benchmarks from about a year ago. The problem with the theoretical bandwidth that PCI-X 133 or 533 and PCI-E x8 and x16 may offer is that the hardware sitting on those platforms really has not evolved yet. For now, if you have PCI-X hardware, I would not think you would regret anything.
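The raw bus math, for what it's worth (theoretical peaks, before protocol overhead):

Code:
    # PCI-X vs PCI-E bandwidth, rough theoretical numbers
    pci_x_133 = 64 / 8 * 133    # 64-bit parallel bus at 133 MHz ~= 1064 MB/s, shared on the bus
    pcie_lane = 250             # PCI-E 1.x: ~250 MB/s usable per lane, per direction
    pcie_x8   = 8 * pcie_lane   # ~2000 MB/s per direction, not shared with other slots

    print(pci_x_133, pcie_x8)   # 1064.0 2000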
September 8, 2006 1:21:54 PM

If you want to run all four drives together, I'm pretty sure having two cards would just add lag, although little, assuming the software even allows for it. Can't say I've tried it though, so I don't really know.

Besides, why not just get another 2 GB of RAM for the price of that second controller card? Those things are expensive.

RAID 5 would be the most cost-effective redundant RAID, by the way: read speed of four drives, write speed of roughly three.
Although if one drive fails you'll have to replace it to get online again, I think.
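Rough comparison of the options with four 36 GB drives (idealized, big sequential transfers; real RAID 5 writes are usually worse because of parity overhead):

Code:
    # Idealized RAID comparison for n identical drives; ignores controller overhead.
    def raid_summary(n, size_gb=36):
        return {
            "RAID 0":  {"capacity_gb": n * size_gb,       "read_x": n, "write_x": n,      "survives_a_failure": False},
            "RAID 5":  {"capacity_gb": (n - 1) * size_gb, "read_x": n, "write_x": n - 1,  "survives_a_failure": True},
            "RAID 10": {"capacity_gb": n // 2 * size_gb,  "read_x": n, "write_x": n // 2, "survives_a_failure": True},
        }

    for level, stats in raid_summary(4).items():
        print(level, stats)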