Motherboard for 3 dual-width PCIe 3.0 GPUs, and single Core i7 3930K

Status
Not open for further replies.

healthyman

Honorable
Jun 25, 2012
17
0
10,510
I have a challenge.

Is there a motherboard that can support:
1 CPU: LGA2011 Socket, Intel Core i7 3930K
3 GPUs: dual width, PCIe 3.0 x16 (x16 mode).

I have found several motherboards that can support three dual-width PCIe 3.0 x16 GPUs and the Core i7 3930K, but in most cases one or more GPUs end up with a PCIe link of only x8 rather than x16, because the x16 slots operate at x16 speed only when the neighboring slot is unoccupied and drop to x8 when it is occupied.

Any suggestions ?
 
Solution
If you can afford the ASUS Rampage IV Extreme then buy it and stop looking -- you found the best.

Technically, the SB-E can support PCIe 3.0. AMD GPUs all run at PCIe 3.0, but on nVidia you need to apply this mod - http://nvidia.custhelp.com/app/answers/detail/a_id/3135/session/L3RpbWUvMTM0MDIyMzU2OC9zaWQvaDEzbE45X2s= ; nVidia took the validation approach and decided to disable PCIe 3.0 support through the registry.

The SB-E has only 32 PCIe lanes to the GPUs out of 40 PCIe lanes in total, and even though I've seen "(x16/x8/x8/x8)" advertised, in reality it's (x8/x8/x8/x8); the other 8 lanes are PCIe 2.0 and go to the chipset and all other PCIe bandwidth requirements.

healthyman
Thanks for your response:

"LGA 2011 (x79) uses PCI-E 2.0"

Then why do all the motherboards have mainly PCIe 3.0 slots ?

"thats because x79 has 40 PCI-E lanes 16 * 3 = 48, so the best config would be 16 * 2 + 8 * 1 = 40"

That depends on the motherboard doesn't it ?
This one http://www.asus.com/Motherboards/Intel_Socket_2011/Rampage_IV_Extreme/#specifications
can have:

4 x PCIe 3.0/2.0 x16 ( x16/x8/x8/x8, red)
1 x PCIe 3.0/2.0 x16 (x8 mode, gray)
1 x PCIe 2.0 x1

which looks like it has 49 lanes in total, right?

In theory, maybe there's a configuration where I could have 3 GPUs in there, all at x16?
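As a quick sanity check (a Python sketch; the 40-lane figure is the LGA 2011 CPU's budget), summing the advertised slot modes shows why the slots must share lanes:

```python
# Compare the Rampage IV Extreme's advertised maximum slot modes
# with the CPU's lane budget (40 lanes on LGA 2011).
slot_widths = [16, 8, 8, 8, 8, 1]   # four red slots (x16/x8/x8/x8), gray x8, and the x1
cpu_lanes = 40

advertised = sum(slot_widths)
print("advertised lane total:", advertised)          # 49
print("oversubscribed by:", advertised - cpu_lanes)  # 9
```

The slots add up to more lanes than the CPU provides, so some of them necessarily share bandwidth.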
 

healthyman
Thank you VERY MUCH .. you might be saving my day here, Jaquith.
Most of the MSI mobos support a max of 128GB of memory (for example: http://eu.msi.com/product/mb/X79A-GD45--8D-.html#/?div=Detail ), while all the Asus ones max out at 64GB ... which makes me want to stay away from Asus.

Also, when you say SB-E only has "32 lanes to the GPUs":
Are you saying I cannot have (x16, x16, x8) with SB-E?
If there are 40 lanes in total, how can only 32 go to the GPUs? How does it know where I'm putting the GPUs?

Also, if it only has 40 lanes in total, then all these X79 mobos built for the Core i7 are overspecced, because they usually advertise way more PCIe 3.0 lanes.


 
ASUS will also handle 16GB sticks once non-ECC (consumer) 16GB RAM becomes available; currently 16GB sticks are RDIMM only (Server/Workstation RAM for Xeon/Opteron). The specs are like the X58, which originally listed 6x4GB (24GB), but any i7-9XX I've seen can run 6x8GB (48GB).

Most 3-WAYs are running (x16, x8, x8), but it depends on the MOBO, which PCIe slots you use, and in particular how they're shared. Further, the ONLY instance where I've seen any advantage of PCIe 3.0 over PCIe 2.0 is 4-WAY GTX 680 at >HD resolutions; nice thread - http://www.evga.com/forums/tm.aspx?m=1537816
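A side note on that PCIe 3.0 vs 2.0 point: the per-lane numbers follow from the published signalling rates and encodings (standard PCIe figures, not from this thread), which is why an x8 Gen3 link roughly matches an x16 Gen2 link:

```python
# Per-lane bandwidth: PCIe 2.0 is 5 GT/s with 8b/10b encoding,
# PCIe 3.0 is 8 GT/s with 128b/130b encoding.
def lane_mb_s(gt_per_s, payload_bits, coded_bits):
    # transfers/s * encoding efficiency / 8 bits per byte -> MB/s
    return gt_per_s * 1e9 * payload_bits / coded_bits / 8 / 1e6

pcie2 = lane_mb_s(5, 8, 10)       # 500 MB/s per lane
pcie3 = lane_mb_s(8, 128, 130)    # ~985 MB/s per lane

# An x8 PCIe 3.0 link is nearly the same bandwidth as x16 PCIe 2.0,
# which is why dropping a GPU to x8 rarely costs much performance.
print(f"x16 PCIe 2.0: {16 * pcie2:.0f} MB/s")
print(f"x8  PCIe 3.0: {8 * pcie3:.0f} MB/s")
```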

It's not 'IF', it's the Intel spec (search for 40) - http://en.wikipedia.org/wiki/LGA_2011

--

/edit - You are confusing Slots (PCIe x16) or (PCIe x8) with Bandwidth (x16/x16 or x16/x8/x8 or x8/x8/x8/x8).
 

healthyman
Wow Jaquith!
That's really helpful!

So based on that wiki page, there are 40 lanes for the "primary PCI" and 8 lanes for the "secondary PCI" ... since the GTX GPUs require 16 lanes, only "primary PCI" lanes can be used for GPUs? Did I understand you correctly?

In that case, is it possible to find a configuration where I can have 3 GPUs at speeds (x16, x16, x8) using the 40 lanes of the "primary PCI", and then put this RAID controller on the "secondary PCI"?

 

healthyman
Also, Jaquith, further to my reply above,
I'm having trouble following "ASUS will also handle 16GB/sticks when non-ECC (consumer) RAM is available; currently it's RDIMM (Server/Workstation RAM for Xeon/Opteron)"

If you don't mind, could you please rephrase that for me ?
 
Duh nope, SB-E (LGA 2011) is 40 PCIe lanes, 8 of which are PCIe 2.0 and go to the chipset etc., and the other 32 are PCIe 3.0 (with the mod) - period.

GPU Bandwidth (x16/x16 or x16/x8/x8 or x8/x8/x8/x8)

The LGA 1155 is 24 PCIe lanes, 8 of which are PCIe 2.0 and go to the chipset, and the other 16 are PCIe 3.0 on IB or PCIe 2.0 on SB - period.

GPU Bandwidth (x16/x0 or x8/x8) ; unless the Z77 uses a third-party chipset e.g. PLX
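Those bandwidth lists follow directly from the lane budgets above; here's a small Python sketch (an illustration, assuming GPU slots only ever run at x16 or x8) that enumerates the splits which exactly use each platform's GPU lanes:

```python
# Enumerate GPU bandwidth splits that exactly consume a platform's
# GPU lane budget, with slots restricted to x16 or x8 operation.
def splits(budget, widths=(16, 8), max_gpus=4):
    found = set()

    def walk(remaining, combo):
        if remaining == 0 and combo:
            found.add(tuple(sorted(combo, reverse=True)))
            return
        if len(combo) >= max_gpus:
            return
        for w in widths:
            if w <= remaining:
                walk(remaining - w, combo + [w])

    walk(budget, [])
    return sorted(found, reverse=True)

print(splits(32))  # LGA 2011: [(16, 16), (16, 8, 8), (8, 8, 8, 8)]
print(splits(16))  # LGA 1155: [(16,), (8, 8)]
```

The 32-lane budget reproduces exactly the x16/x16, x16/x8/x8 and x8/x8/x8/x8 options quoted above, and the 16-lane budget gives x16 (single card) or x8/x8.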
 

healthyman
The wiki page you sent says there are 40 Main PCI lanes and 8 Secondary PCI lanes ... so it's not possible for me to use 40 lanes (PCIe 2.0) or 32 lanes (PCIe 3.0) for GPUs on the Main PCI, and use the Secondary PCI for my RAID controller?

 
You are reading only what you want to see and we're going in circles. 40 total, split into 32 & 8, as I posted above more than once.

[Diagram: SB-E PCIe lane layout (SB_E5-635x463.jpg)]
 
Wiki has had info wrong and/or unclear before; in this instance it's unclear.

Assuming you meant the 'Intel RAID controller' that's part of the DMI 2.0 - in this case the Intel X79 controller, which has a 20 Gbit/s x4 link {20 Gbit/s = 2,560 MB/s}; DMI 2.0 - http://en.wikipedia.org/wiki/Direct_Media_Interface . Other non-native chipsets share PCIe 2.0 x8 lanes {8 * 500MB/s = 4,000MB/s}.

2 SATA3 = 600MB/s (ea) * 2 = 1200MB/s
4 SATA2 = 300MB/s (ea) * 4 = 1200MB/s
14 USB 2.0 = 60MB/s (ea) * 14 = 840MB/s

Total = 3,240MB/s max theoretical bandwidth.
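That tally is straightforward to reproduce; a quick Python sketch using the same per-port figures:

```python
# X79 chipset max theoretical bandwidth: port count * per-port MB/s.
ports = {
    "SATA3":   (2, 600),   # (count, MB/s each)
    "SATA2":   (4, 300),
    "USB 2.0": (14, 60),
}

total = 0
for name, (count, each) in ports.items():
    sub = count * each
    total += sub
    print(f"{count} x {name} @ {each} MB/s = {sub} MB/s")

print(f"max theoretical total = {total} MB/s")  # 3240
```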
 

healthyman




Thanks again for replying, Jaquith!

What don't you like about the HighPoint Rocket? It's just a RAID controller, isn't it?

Okay, so the use case is this... I only have 64GB of RAM and I'm dealing with 1.2 TB matrices ... when the RAM runs out, the data gets moved to swap .... I've tried this in the past with an HDD and it was extremely slow just to deal with 1.2 GB, let alone 1.2 TB !!

I'm hoping that trying this with SSDs will work better ... I have three 512GB SSDs that I hope to combine in a RAID 0 to get my 1.2TB matrix stored temporarily for some computations.
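A rough back-of-the-envelope for that plan (the ~500 MB/s per-drive sequential rate below is an assumption for illustration, not a measured figure):

```python
# RAID 0 across three 512 GB SSDs: capacity adds up, and best-case
# sequential throughput scales with the drive count (ideal striping).
drives = 3
capacity_gb = 512
seq_mb_s = 500                              # assumed per-drive sequential rate

raid0_capacity_gb = drives * capacity_gb    # 1536 GB -> fits a 1.2 TB matrix
raid0_seq_mb_s = drives * seq_mb_s          # best-case sequential MB/s
print("RAID 0 capacity:", raid0_capacity_gb, "GB")
print("best-case sequential:", raid0_seq_mb_s, "MB/s")
```

Real swap workloads are dominated by small random accesses rather than ideal sequential striping, so the best-case figure is an upper bound.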

What's your opinion on Dereck47's reply to my post on this thread: http://www.tomshardware.co.uk/forum/286127-14-match-speeds-storage-devices-single-raid#t1935093 ?
 
644L - http://www.highpoint-tech.com/USA_new/series_rr640L.htm

Reason: HighPoints are cheap and not nearly as reliable as LSI or Adaptec.

Thread - all Intel X79 SATA ports 'can' be linked together as a 6-drive array, but any parity RAID (e.g. RAID 5) would be much slower. 'Real' speed depends on file size (structure), and if everything is in 4KB then SATA2 vs SATA3 per drive means nothing - neither is saturated.

IF you want more speed, the choices are: a RAM drive, or RAID 0 with a 'good' controller - one with both 1GB cache + battery back-up, e.g. LSI with its FastPath SW Key.

Example - look at my 4KB results:
[ATTO benchmark screenshot: ATTO_Corsair-GT-RSTE.jpg]
 