What board supports a PCI-e x8 RAID controller?

Guest
Archived from groups: alt.comp.periphs.mainboard.asus

I'm looking to build a Linux file server with a couple of terabytes of
storage. Right now I'm leaning toward using an Areca ARC-12xx PCIe x8
RAID controller, but I haven't seen many motherboards that have PCIe x8
slots. I see that the A8N-SLI Deluxe can be configured with two PCIe
x8 slots in SLI mode--does that only work with SLI'd graphics cards, or
can I put a single graphics card in one slot and a PCIe x8 RAID card in
the other?

Or could I get a motherboard with a single PCIe x16 slot, put a PCIe x8
RAID card there, and use a PCI graphics card? (I don't need fast
graphics, since this will be a file server.)

Or would I be best off just using a PCI-X RAID card instead?
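(For reference, whichever slot arrangement ends up being used, the negotiated
link width can be verified once Linux is up. Below is a minimal sketch only,
assuming a kernel new enough to expose the PCIe link attributes in sysfs and
Python installed on the box; it is not taken from any vendor documentation.)

#!/usr/bin/env python
# Minimal sketch: report negotiated vs. maximum PCIe link width for every
# device that exposes the link attributes in sysfs. An x8 RAID card sitting
# in an x16 (or x8-configured) slot should report "x8 of x8".
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    cur = os.path.join(dev, "current_link_width")
    mx = os.path.join(dev, "max_link_width")
    if not (os.path.exists(cur) and os.path.exists(mx)):
        continue  # legacy PCI device, or the kernel doesn't provide the attribute
    current = open(cur).read().strip()
    maximum = open(mx).read().strip()
    print("%s: x%s of x%s" % (os.path.basename(dev), current, maximum))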
 

Paul
Archived from groups: alt.comp.periphs.mainboard.asus

In article <1111696429.218743.118190@o13g2000cwo.googlegroups.com>,
stanmuffin@hotmail.com wrote:

> I'm looking to build a Linux file server with a couple of terabytes of
> storage. Right now I'm leaning toward using an Areca ARC-12xx PCIe x8
> RAID controller, but I haven't seen many motherboards that have PCIe x8
> slots. I see that the A8N-SLI Deluxe can be configured with two PCIe
> x8 slots in SLI mode--does that only work with SLI'd graphics cards, or
> can I put a single graphics card in one slot and a PCIe x8 RAID card in
> the other?
>
> Or could I get a motherboard with a single PCIe x16 slot, put a PCIe x8
> RAID card there, and use a PCI graphics card? (I don't need fast
> graphics, since this will be a file server.)
>
> Or would I be best off just using a PCI-X RAID card instead?

Perhaps you should contact them and see if they are working on
any Nvidia chipset boards?

http://www.areca.us/contact.htm
http://areca.us/CardCompatibilityList.pdf

I cannot think of a reason why PCIe x8 shouldn't work, but with
the amount of money involved, I'd want to get some confirmation
from Areca. (Since the video card slots don't have a GART,
they should just look like vanilla PCI Express slots, as long
as the Nvidia drivers don't mess with them. You might also
contact Nvidia and see if they can answer your question.
If you try emailing Asus, don't hold your breath.)

Paul
 

Paul
Archived from groups: alt.comp.periphs.mainboard.asus

In article <1111696429.218743.118190@o13g2000cwo.googlegroups.com>,
stanmuffin@hotmail.com wrote:

> I'm looking to build a Linux file server with a couple of terabytes of
> storage. Right now I'm leaning toward using an Areca ARC-12xx PCIe x8
> RAID controller, but I haven't seen many motherboards that have PCIe x8
> slots. I see that the A8N-SLI Deluxe can be configured with two PCIe
> x8 slots in SLI mode--does that only work with SLI'd graphics cards, or
> can I put a single graphics card in one slot and a PCIe x8 RAID card in
> the other?
>
> Or could I get a motherboard with a single PCIe x16 slot, put a PCIe x8
> RAID card there, and use a PCI graphics card? (I don't need fast
> graphics, since this will be a file server.)
>
> Or would I be best off just using a PCI-X RAID card instead?

Doesn't look good. DFI tech support says x8/x8 mode is for
video/video cards, while x16/x2 (or x16/x1 on some other mobos)
can be used for video/storage.

http://forums.storagereview.net/index.php?showtopic=18757&hl=areca

This is looking more like a question for someone at Nvidia.

Or perhaps someone in this group can take a look at their
Device Manager entries and see if there is any evidence of
what's up with the SLI slots. The naming convention for the
slots might give some clue as to what is possible with
them.

Nvidia does make more professional chipsets. For example,
this one has a few more possibilities slot-wise, at the
cost of an extra processor.

ftp://ftp.tyan.com/datasheets/d_s2895_100.pdf

Tyan also has some Intel PCI-E server boards.

Paul
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

stanmuffin@hotmail.com wrote:
> I'm looking to build a Linux file server with a couple of terabytes of
> storage. Right now I'm leaning toward using an Areca ARC-12xx PCIe x8
> RAID controller, but I haven't seen many motherboards that have PCIe x8
> slots. I see that the A8N-SLI Deluxe can be configured with two PCIe
> x8 slots in SLI mode--does that only work with SLI'd graphics cards, or
> can I put a single graphics card in one slot and a PCIe x8 RAID card in
> the other?
>
> Or could I get a motherboard with a single PCIe x16 slot, put a PCIe x8
> RAID card there, and use a PCI graphics card? (I don't need fast
> graphics, since this will be a file server.)
>
> Or would I be best off just using a PCI-X RAID card instead?
>

I can't see why the x16 slot wouldn't work as x8 with the board set to
SLI mode - as far as I know there's nothing magic about the PCI Express
slot configuration as far as SLI is concerned. You might want to get
some confirmation of that before buying, though.

--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Paul wrote:
>
> Doesn't look good. DFI tech support says x8/x8 mode is for
> video/video cards, while x16/x2 (or x16/x1 on some other mobos)
> can be used for video/storage.
>
> http://forums.storagereview.net/index.php?showtopic=18757&hl=areca

Their response doesn't give me much confidence in its accuracy, however.
Some of the other posts in that thread seem clearly wrong - at least on
the Asus board, it's entirely possible to run with the selector card in
SLI mode with only one card present in x8 mode; I ran that way for a
while until the second card showed up.

>
> This is looking more like a question for someone at Nvidia.
>
> Or, perhaps someone in this group, can take a look at their
> Device Manager entries and see if there is any evidence of
> what's up with the SLI slots. The naming convention for the
> slots might give some clue as to what is possible with
> them.

Off the root PCI bus entry, there are 4 entries for nForce4 PCI Express
Root Port, and they all look the same. Two of them have video card entries
underneath them (I'm using dual 6600GTs in SLI) and the other two have
nothing (the x1 slots are empty). I think each slot is basically seen as
just a PCI-to-PCI bridge.
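For the Linux side of the same check, the root ports show up as PCI-to-PCI
bridges as well, so something like the sketch below lists them whether or not
anything is plugged into the slot. (Illustrative only - it assumes sysfs is
mounted at /sys and Python is available.)

# Minimal sketch: list PCI/PCIe bridges (class code 0x0604xx). On an
# nForce4 board, each PCI Express slot's root port should appear here,
# with or without a card installed behind it.
import glob

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    with open(dev + "/class") as f:
        cls = int(f.read().strip(), 16)
    if (cls >> 8) == 0x0604:   # PCI-to-PCI bridge
        print(dev.split("/")[-1] + ": PCI/PCIe bridge (root port, switch, or PCI bridge)")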

--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/
 

Paul
Archived from groups: alt.comp.periphs.mainboard.asus

In article <MoZ0e.787506$6l.514923@pd7tw2no>, Robert Hancock
<hancockr@nospamshaw.ca> wrote:

> Paul wrote:
> >
> > Doesn't look good. DFI tech support says x8/x8 mode is for
> > video/video cards, while x16/x2 (or x16/x1 on some other mobos)
> > can be used for video/storage.
> >
> > http://forums.storagereview.net/index.php?showtopic=18757&hl=areca
>
> Their response doesn't give me much confidence in its accuracy, however.
> Some of the other posts in that thread seem clearly wrong - at least on
> the Asus board, it's entirely possible to run with the selector card in
> SLI mode with only one card present in x8 mode; I ran that way for a
> while until the second card showed up.
>
> >
> > This is looking more like a question for someone at Nvidia.
> >
> > Or, perhaps someone in this group, can take a look at their
> > Device Manager entries and see if there is any evidence of
> > what's up with the SLI slots. The naming convention for the
> > slots might give some clue as to what is possible with
> > them.
>
> Off the root PCI bus entry, there are 4 entries for nForce4 PCI Express
> Root Port, and they all look the same. Two of them have video card entries
> underneath them (I'm using dual 6600GTs in SLI) and the other two have
> nothing (the x1 slots are empty). I think each slot is basically seen as
> just a PCI-to-PCI bridge.

So what's needed is the cheapest x1 test card that can be stuffed
into that socket for a test. Set the motherboard to x8/x8 mode and
see if the x1 card is detected.

Here is a PCI-E x1 Gigabit Ethernet card for $38:
http://www.buy.com/retail/product.asp?sku=10364628&SearchEngine=PriceWatch&SearchTerm=10364628&Type=PE&Category=Comp&dcaid=1688

Paul
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

PCIe is a practical joke gone too far. Stick with PCI-X for servers.
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Lachoneus wrote:
> PCIe is a practical joke gone too far. Stick with PCI-X for servers.

Care to explain that?

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

>> PCIe is a practical joke gone too far. Stick with PCI-X for servers.
>
> Care to explain that?

With PCI-X, you have a reasonable chance of finding compatible cards and
motherboards. The different PCI-X speed grades are backward compatible.
You can plug your existing PCI cards into PCI-X slots, and you can
even plug some PCI-X cards into a PCI slot.

PCIe, on the other hand, comes with a half-dozen different slots, all
incompatible with each other and with PCI. And it doesn't offer any
tangible advantage over existing standards. I have a hard time
believing the point of PCIe is not to force people to buy lots of new
hardware, a la Intel's attempted Rambus coup. And BTX. Had NVIDIA and
ATI not jumped on the PCIe bandwagon so soon, I think it would have
fizzled out completely.
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Lachoneus wrote:
>>>PCIe is a practical joke gone too far. Stick with PCI-X for servers.
>>
>>Care to explain that?
>
>
> With PCI-X, you have a reasonable chance of finding compatible cards and
> motherboards. The different PCI-X speed grades are backward compatible.
> You can plug your existing PCI cards into PCI-X slots,

True enough - but if you put a 33 MHz PCI card into a PCI-X slot
then you drop *all* slots on that bus down to 33 MHz speed. All
slots on a PCI or PCI-X bus run at the speed of the slowest
device on the bus.

> and you can
> even plug some PCI-X cards into a PCI slot.

And the result is that the cards run at only 33 MHz/32 bits.

Kind of pointless to spend big bucks on PCI-X components and not
also get a motherboard with sufficient PCI-X slots to host all of
those components.
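To put rough numbers on the shared-bus point, the peak theoretical figures
work out as below. (A back-of-the-envelope sketch only; real-world throughput
is lower, and every slot on a given parallel bus shares the total.)

# Rough peak-bandwidth arithmetic for shared parallel buses:
# bus width (bits) x clock (MHz) / 8 -> MB/s, shared by every slot on the bus.
buses = [
    ("PCI 32-bit/33 MHz",    32,  33),
    ("PCI 64-bit/66 MHz",    64,  66),
    ("PCI-X 64-bit/100 MHz", 64, 100),
    ("PCI-X 64-bit/133 MHz", 64, 133),
]
for name, bits, mhz in buses:
    print("%-22s ~%4d MB/s shared" % (name, bits * mhz // 8))
# Dropping a 33 MHz PCI card onto a PCI-X bus pulls every slot on that bus
# down to the first figure (~133 MB/s).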


>
> PCIe, on the other hand, comes with a half-dozen different slots, all
> incompatible with each other

Not true. A 1x device can go into a 1x, 2x, 4x, 8x, 16x or 32x
slot. Similarly, a 2x device can go into a 2x, 4x, etc. slot.

In other words, a PCI-E slot only has to provide at least as many
lanes as the device requires - if the slot provides more lanes than
the card uses, the link trains at the narrower width and the excess
lanes are ignored.



> and with PCI.

Who cares. PCI was incompatible with ISA but that didn't stop
people from adopting PCI. AGP is incompatible with PCI and VLB
but that didn't stop people from adopting AGP.



> And it doesn't offer any
> tangible advantage over existing standards. I have a hard time
> believing the point of PCIe is not to force people to buy lots of new
> hardware, a la Intel's attempted Rambus coup. And BTX. Had NVIDIA and
> ATI not jumped on the PCIe bandwagon so soon, I think it would have
> fizzled out completely.
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Lachoneus wrote:
>>> PCIe is a practical joke gone too far. Stick with PCI-X for servers.
>>
>>
>> Care to explain that?
>
>
> With PCI-X, you have a reasonable chance of finding compatible cards and
> motherboards. The different PCI-X speed grades are backward compatible.
> You can plug your existing PCI cards into PCI-X slots, and you can even
> plug some PCI-X cards into a PCI slot.
>
> PCIe, on the other hand, comes with a half-dozen different slots,

x1, x2, x4, x8, x16 - and possibly x32 - yes, that's quite a few.

> all incompatible with each other and with PCI.

Any card will fit in a slot that is the same width OR larger.

Everything above the physical layer is software-compatible with PCI -
the configuration model is the same, so operating systems and drivers
see PCI Express devices much like PCI devices.

> And it doesn't offer any
> tangible advantage over existing standards.

There are many advantages to PCI Express over PCI-X or PCI.

Much of it I covered here:
http://groups.google.co.uk/groups?q=g:thl3979874300d&&selm=1110912886.e6708baf8312ce21572beb8ffe5ba7c7%40teranews

Sorry about the humungous link.

If you want a summary:
- More Bandwidth (rough numbers are sketched in the example below)
- Full Duplex (lower latency)
- Point to Point (lower latency)
- Isochronous Transfers (guaranteed latency)
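As a rough illustration of the bandwidth point (a sketch only - it assumes
first-generation PCI Express at 2.5 Gbit/s per lane with 8b/10b encoding, and
the figures are per direction since the link is full duplex):

# Rough per-direction bandwidth for first-generation PCI Express links.
# 2.5 Gbit/s per lane minus 8b/10b encoding overhead -> ~250 MB/s per lane,
# per direction, and each slot gets this to itself rather than sharing a bus.
GBIT_PER_LANE = 2.5
ENCODING = 8.0 / 10.0                      # 8b/10b line code
per_lane_mb = GBIT_PER_LANE * ENCODING * 1000 / 8
for lanes in (1, 2, 4, 8, 16):
    print("x%-2d link: ~%4d MB/s each way" % (lanes, int(lanes * per_lane_mb)))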

> I have a hard time
> believing the point of PCIe is not to force people to buy lots of new
> hardware, a la Intel's attempted Rambus coup. And BTX. Had NVIDIA and
> ATI not jumped on the PCIe bandwagon so soon, I think it would have
> fizzled out completely.

PCs move forward; compatibility sometimes gets broken. Live with it -
it makes things better.

Ben
--
A7N8X FAQ: www.ben.pope.name/a7n8x_faq.html
Questions by email will likely be ignored, please use the newsgroups.
I'm not just a number. To many, I'm known as a String...
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

Lachoneus wrote:
> PCIe, on the other hand, comes with a half-dozen different slots, all
> incompatible with each other and with PCI.

They are not all incompatible with each other - any card will fit in a
slot which is the same size or larger.

--
Robert Hancock Saskatoon, SK, Canada
To email, remove "nospam" from hancockr@nospamshaw.ca
Home Page: http://www.roberthancock.com/
 
Guest
Archived from groups: alt.comp.periphs.mainboard.asus

stanmuffin@hotmail.com wrote in message news:<1111696429.218743.118190@o13g2000cwo.googlegroups.com>...
> I'm looking to build a Linux file server with a couple of terabytes of
> storage. Right now I'm leaning toward using an Areca ARC-12xx PCIe x8
> RAID controller, but I haven't seen many motherboards that have PCIe x8
> slots.

Take a look at boards by SuperMicro and Intel:

http://www.supermicro.com/products/motherboard/Xeon800/E7520/X6DHE-G2.cfm
http://www.supermicro.com/products/motherboard/Xeon800/E7520/X6DHT-G.cfm
http://www.intel.com/design/servers/boards/se7520bd2/index.htm
http://www.intel.com/design/servers/boards/se7520af2/index.htm

I can't confirm that the PCI-E Areca is compatible (no experience there),
but overall I have good experience with Areca PCI-X cards and only the best
experience with SuperMicro (and Intel chipsets) - your chances here are good.

From ASUS, perhaps this one would match the spec:
http://www.asus.com/products4.aspx?l1=3&l2=17&l3=0&model=309&modelmenu=1
Again, no experience there.

Frank Rysanek