Hardware RAID Controllers and P35 Motherboards

mattr1963

Distinguished
Nov 23, 2007
5
0
18,510
Does anyone know which P35 motherboards are compatible with 3ware RAID controllers? I've already searched the 3ware site and they only list older 965 and 975 chipset boards. My raptors are compatible - Yea! Software RAID blows rainbow flavored chunks!
 


Any reason why you want the hardware RAID controller? It's not really useful for desktops unless maybe you're running RAID 5, etc.

Should be fine on any Intel chipset - Intel sticks to specifications, unlike the crud VIA and SiS chipsets.
 

mattr1963

Distinguished
Nov 23, 2007
5
0
18,510
My plan was to run RAID 5. Have 4 Raptors. Looking to run true RAID with 3ware controller on a P35 chipset motherboard.
 

Andrius

Distinguished
Aug 9, 2004
1,354
0
19,280
If your 3ware card is PCI, your performance gain (over the onboard solution, for desktop usage) will be slim to none.

If it's a PCIe x4 or PCI-X card, it would make more sense.

 

blackened144

Distinguished
Aug 17, 2006
1,051
0
19,280
The card would be choked by the 133MB/s limit on the entire PCI bus. You don't have to use software RAID - just get a P35 board with onboard RAID. It may not be as optimal as running on a dedicated RAID card, but the onboard controller would still be faster than any RAID solution running through the PCI bus. Like Andrius said, if it were a PCIe or PCI-X card it would make more sense.
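To put numbers on that, here's a quick back-of-the-envelope sketch. The 133MB/s figure is the nominal limit of classic 32-bit/33MHz PCI (shared by every device on the bus); the 250MB/s-per-lane figure is the nominal PCIe 1.x rate per direction. Real-world throughput would be somewhat lower in both cases due to protocol overhead.

```python
# Back-of-the-envelope comparison of legacy PCI vs PCIe bandwidth.
# Nominal figures only; real throughput is lower due to protocol overhead.

PCI_SHARED = 133       # MB/s, 32-bit/33MHz PCI, shared by ALL devices on the bus
PCIE_PER_LANE = 250    # MB/s per lane, per direction, PCIe 1.x

def pcie_bandwidth(lanes):
    """Nominal one-direction bandwidth of a PCIe 1.x link."""
    return lanes * PCIE_PER_LANE

for lanes in (1, 4, 8):
    print(f"PCIe x{lanes}: {pcie_bandwidth(lanes)} MB/s  vs  shared PCI: {PCI_SHARED} MB/s")
```

Even a single PCIe lane nominally beats the entire shared PCI bus, which is why the slot type matters more than the controller brand here.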
 

Distinguished
Dec 6, 2007
3
0
18,510
I don't know if it makes a significant difference, but the P35 chipset has a 16-lane PCIe interface on the MCH which cannot be split (i.e., into two PCIe x8 links). The other PCIe interface is on the ICH, where the lanes are organized as 6 x1 lanes. These can be grouped, so a motherboard can have 2 PCIe x1 slots and 1 PCIe x4 slot.

See the Intel PDF here: http://download.intel.com/products/chipsets/P35/317304.pdf

A chipset like the X38 has two PCIe x16 links on the MCH. See the Intel PDF here: http://download.intel.com/products/chipsets/X38/317310.pdf

The nVidia 680i SLI has one PCIe x16 on the SPP and the second/third PCIe x16 on the MCP. See the PC Perspective article here: http://www.pcper.com/article.php?aid=320&type=expert

One thing to note is that Intel's DMI between the MCH and ICH is 2GB/s, while nVidia's HT link between the SPP and MCP is 8GB/s.
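As a rough sanity check on those interconnect figures: an x4 slot hanging off the ICH can nominally pull 1GB/s, which is half of the 2GB/s DMI link back to the MCH. A minimal sketch of that arithmetic (PCIe 1.x nominal rates, ignoring protocol overhead):

```python
# Rough interconnect headroom check (nominal PCIe 1.x rates, no overhead).
PCIE_LANE = 0.25   # GB/s per lane, per direction
DMI = 2.0          # GB/s, Intel MCH <-> ICH link
HT = 8.0           # GB/s, nVidia SPP <-> MCP link

x4_slot = 4 * PCIE_LANE                          # 1.0 GB/s
print(f"x4 slot uses {x4_slot / DMI:.0%} of DMI")   # 50% of the DMI link
print(f"x4 slot uses {x4_slot / HT:.0%} of HT")     # far less of nVidia's HT link
```

So a RAID card in a chipset x4 slot could, at worst, consume half the DMI link - and in practice much less.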
 

VTOLfreak

Distinguished
Jan 23, 2006
77
0
18,630
Don't be so quick to say that a hardware RAID controller doesn't matter unless you're running RAID 5. Most hardware RAID controllers come with cache which can be set to write-back mode. That alone makes a noticeable difference on the system...

I'm running an Areca 1220 on an Asus P5K-E and I would never go back to onboard RAID.

On a P35 CrossFire motherboard you'd be sticking the card into an x16 slot that's wired up for 4 lanes. With 4 lanes it can only push 1GB/sec. That's half the bandwidth between the ICH and MCH, which leaves plenty for all the other stuff hanging off the ICH.

To get to the point where your array is bottlenecked because the card is running on only 4 lanes, you'd have to use an array which won't fit into most cases. I'm running 4 7200RPM drives and I manage to do about 280MB/sec. Also note that drives can either write or read, not both at the same time, while the PCI-E bus is full duplex. So in a real scenario the PCI-E bus would be even less taxed, because you'd be doing a mix of reading and writing.
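The headroom argument above is easy to check with rough numbers. The 70MB/s per-drive sequential rate below is an assumed ballpark for a 7200RPM drive of that era (chosen so 4 drives land near the 280MB/sec quoted above); the x4 link figure is the nominal PCIe 1.x rate per direction:

```python
# Sanity check: is a 4-drive array bottlenecked by a PCIe x4 link?
drives = 4
per_drive = 70                   # MB/s sequential - ASSUMED ballpark for a 7200RPM drive
array_rate = drives * per_drive  # ~280 MB/s, matching the figure quoted above
x4_link = 4 * 250                # 1000 MB/s per direction, PCIe 1.x nominal

print(f"array: {array_rate} MB/s, link: {x4_link} MB/s")
print(array_rate < x4_link)      # True: the x4 link has plenty of headroom
```

You'd need well over a dozen drives at those sequential rates before the x4 link itself became the limit.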

I do agree with the point about not running a hardware RAID controller on the PCI bus. Not only is the card seriously bottlenecked, you're also sharing that bandwidth with every other PCI device in the system.