I'm setting up a 4-drive RAID-0 array with 36 GB Hitachi Ultrastar SAS drives. My motherboard has 2 PCI-X slots, so in theory I could run 2 controller cards. Each card has 2 internal SAS connectors that can each handle 4 drives. Could I put one SAS drive on each connector (2 drives per card) for improved performance, if that's even possible? Both cards would be identical Adaptec cards.
OS and such resides on its own SAS drive.
Also, what impact does controller-card memory have with RAID-0? I could spend more on a higher-memory card rather than shell out for 2 cards.
Because of the way PCI-X works (true 64-bit), you're getting a lot of performance from these drives already. I have a single 68-pin U320 SCSI setup on a PCI-X bus, and the performance was much better than on a 32-bit PCI 2.0 bus. I think your idea may give you a slight edge, but you'll surely pay a premium for it. If you do it, kindly post some HDD stats via HD Tach for each config; I'm almost to the point of wanting SAS drives too.
I could, but there's no data of any critical importance on the RAID-0 array. The whole thing could die and I could have it back up in fairly short order. I'm after full RAID-0 performance in any event; data protection just doesn't enter into the equation (much).
I don't think you'll see any performance benefit from this at all. The combined speed of the drives won't exceed what the interface can support; a second card would only help if the drives could push/pull more data than the interface handles. You might get a benefit from having the OS and data drives on separate controllers, but I'm not too sure that would be meaningful either.
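A quick back-of-the-envelope check of that claim, assuming roughly 90 MB/s sustained per drive (a hypothetical figure, not a measured spec; check your drive's datasheet):

```python
# Rough saturation check: do four striped drives exceed one PCI-X slot?
# The 90 MB/s per-drive figure is an assumption for illustration only.
per_drive_mb_s = 90
drives = 4
combined = per_drive_mb_s * drives   # sequential throughput of the stripe

pci_x_133_mb_s = 64 // 8 * 133       # 64-bit bus at 133 MHz = 1064 MB/s

print(combined, pci_x_133_mb_s)      # 360 vs 1064
print(combined < pci_x_133_mb_s)     # True: one slot has plenty of headroom,
                                     # so a second card removes no bottleneck
```

Under those numbers, a single PCI-X 133 slot carries roughly three times what the four drives can deliver.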
What are you going to be running on this machine that you're so concerned with getting the utmost drive performance? Even in a heavy DB system you might not tax a four-drive RAID 0 with either type of card; possibly even standard SATA with software RAID would suffice.
Well, that may be the case, but at a certain point you need to ask yourself if the $$ would be better spent on other components. Your original post also asked about memory on the board: it's basically a cache. If your application could make use of more memory, say a networked DB application that frequently hits the same data, or a large amount of data coming into the controller with the drives needing to play catch-up, it would make sense to have more memory on the controller.
If you're going to be opening and saving huge Photoshop files all day long, then you should build a system like this; other than that, it's an expensive proof of concept.
PCI-X is pure 64-bit; it can reach roughly 8.5 Gbps, or around 1064 MB per second. Because of its parallel architecture it can sustain SCSI traffic upstream and downstream better than PCI-E can at the moment. But as PCI-E technology grows, controller cards that support a true unshared 16x slot will overtake PCI-X. The real-life difference in how the drives perform on each interface is very small; check Tom's for a column of benchmarks from about a year ago. The problem with the theoretical bandwidth that PCI-X 133 or 533 and PCI-E 8x and 16x have to offer is that the hardware to sit on those platforms really hasn't evolved yet. For now, if you have PCI-X hardware, I don't think you'd regret anything.
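The arithmetic behind those bus numbers is simple: peak bandwidth is bus width in bytes times clock rate. A minimal sketch:

```python
# Peak parallel-bus bandwidth = (width in bytes) * (clock in MHz), in MB/s.
def bus_mb_s(width_bits, clock_mhz):
    return width_bits // 8 * clock_mhz

pci_32_33 = bus_mb_s(32, 33)     # classic 32-bit PCI at 33 MHz -> 132 MB/s
pci_x_133 = bus_mb_s(64, 133)    # PCI-X 133: 64-bit at 133 MHz -> 1064 MB/s

print(pci_32_33, pci_x_133)      # 132 1064 - an 8x jump over plain PCI
```

That 1064 MB/s figure is the theoretical peak for a single PCI-X 133 slot; real sustained throughput will be lower.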
If you want to run all four drives together, I'm pretty sure having two cards would just add a little lag, assuming the software even allows for it. Can't say I've tried it though, so I don't really know.
Besides, why not just another 2 GB of RAM for the price of that second controller card? Those things are expensive.
RAID 5 would be the most cost-effective redundant RAID, btw: read speed of four drives, write speed of roughly three.
Although if one drive fails, the array runs degraded until you replace the drive and rebuild, I think.
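Those RAID speed rules of thumb can be sketched as idealized formulas (real arrays fall short of these; parity calculation and rebuilds cost extra). The per-drive figure here is an assumed placeholder:

```python
# Idealized streaming throughput for RAID 0 and RAID 5 with n identical drives.
def raid_read(level, n, per_drive):
    if level in (0, 5):
        return n * per_drive           # all spindles contribute on reads
    raise ValueError("unhandled level")

def raid_write(level, n, per_drive):
    if level == 0:
        return n * per_drive           # pure striping, no parity overhead
    if level == 5:
        return (n - 1) * per_drive     # one drive's worth lost to parity
    raise ValueError("unhandled level")

per = 90  # assumed MB/s per drive, hypothetical figure
print(raid_read(5, 4, per), raid_write(5, 4, per))   # 360 270
print(raid_write(0, 4, per))                         # 360
```

So with four drives, RAID 5 reads like four drives and writes like roughly three, as stated above.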