48 PCI-e 3.0 lanes on a CPU with only 16 such lanes. How does that work?

rutski89

Honorable
Aug 11, 2013
28
0
10,530
I'm new to hardware, and at the moment I'm planning my very first build.

In the process, I just realized that I have absolutely no idea how PCI-e lanes are allocated, or how they function. I just discovered something that seems so contradictory and broken that it can't possibly be true, hence my confusion. Firstly, here are the relevant CPU and the motherboard that I'm looking at:

http://ark.intel.com/products/75462/
http://www.asus.com/Motherboards/P9D_WS/#specifications

The Xeon E3-1245 says this of itself:
"Max # of PCI Express Lanes: 16"

Yet the motherboard, which has the C226 chipset designed specifically for this Xeon E3-1200 line, says this of itself:
"3 x PCIe 3.0 x16"

I've checked, and there is no Xeon E3-12xx that lists more than 16 PCI-e lanes in its specifications. For a moment I thought that perhaps the 4th Gen i7, which this ASUS board also takes, was the chip with more than 16 PCI Express lanes. But then I went and had a look at every single i7, and none of them have more either.

Even more puzzling to me is that people regularly put multiple x16 PCI-e 3.0 GPUs into these things with SLI or CrossFireX, and the ASUS board's spec page even emphatically declares that it has good CrossFireX support. How can that be when the CPU claims to have only 16 lanes?

Moreover, any old single graphics card is likely to take all 16 of the CPU's PCI-e lanes right off the bat, so where is there room left for even smaller expansion cards?

I must be really misunderstanding something about what it means for a CPU to "have 16 PCI-e lanes"; it can't possibly mean what I think it does.

Very confused,
Many thanks for reading,
-Patrick
 
Solution
A few things: the 16 lanes in the CPU info you were reading are for video cards.
Use one card at x16 or two at x8. The 11xx platform, and the server chipset it was based off of, has 16 lanes for video. If three cards are used, some motherboards use a multiplier (PLX) chip. The chip gives more lanes but adds delay time to the PCI-e bus. The 2011 platform has more usable PCI-e lanes without needing a mux chip.

The_New_Normal

Honorable
Sep 12, 2013
1
0
10,510
The CPU supports 16 lanes.

The mobo has three physical x16 slots, which can have the CPU's 16 lanes allocated a few different ways (i.e. x8/x4/x4, x16/x0/x0, x8/x0/x8).
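If it helps, here's a toy Python sketch of that allocation logic. The split table is just the usual Haswell-era bifurcation options (x16, x8/x8, x8/x4/x4), so treat it as illustrative; your board manual has the actual mapping of slots to lanes.

```
# Toy sketch: how a 16-lane Haswell-era CPU might split its PCIe lanes.
# The allowed splits are an assumption based on the typical bifurcation
# options (x16 / x8+x8 / x8+x4+x4); check the board manual for yours.

CPU_LANES = 16

# Number of populated CPU-connected slots -> lane width per slot.
BIFURCATION = {
    1: [16],       # one card gets the full x16 link
    2: [8, 8],     # two cards run at x8 each
    3: [8, 4, 4],  # three cards share as x8/x4/x4
}

def allocate(num_cards: int) -> list[int]:
    """Return the per-slot lane widths for the given card count."""
    widths = BIFURCATION.get(num_cards)
    if widths is None:
        raise ValueError(f"no CPU-only split for {num_cards} cards")
    assert sum(widths) <= CPU_LANES  # never more lanes than the CPU has
    return widths

for n in (1, 2, 3):
    print(f"{n} card(s): " + "/".join(f"x{w}" for w in allocate(n)))
```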

For boards with more lanes than the CPU can run, additional on-board chips (PLX) are used to run them.

EDIT: in this case it's the chipset which runs the extra lanes. Google "c226 block diagram" and all will be revealed :)
 

NerdIT

Distinguished
For anything on the PCIe 3.0 bus, you are limited to the number of lanes your CPU can support.

But the thing is that even if you had a single card at x16 and it got knocked down to x8... I highly doubt you would see any real application/game performance difference.
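To put rough numbers on that, here's a back-of-the-envelope calculation from the nominal link rates (not a benchmark):

```
# Back-of-the-envelope PCIe 3.0 bandwidth, per direction.
# 8 GT/s per lane with 128b/130b encoding -> ~0.985 GB/s usable per lane.

GT_PER_S = 8.0        # PCIe 3.0 raw transfer rate per lane
ENCODING = 128 / 130  # 128b/130b line-code efficiency

def gbytes_per_s(lanes: int) -> float:
    # GT/s * efficiency = usable Gbit/s; divide by 8 for gigabytes.
    return lanes * GT_PER_S * ENCODING / 8

for lanes in (16, 8, 4):
    print(f"x{lanes}: ~{gbytes_per_s(lanes):.2f} GB/s per direction")
```

That works out to roughly 15.75 GB/s at x16 versus 7.88 GB/s at x8, and most single GPUs of this era don't come close to saturating even the x8 figure.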
 

NerdIT

Distinguished
But yeah... no matter how many lanes your board supports, at PCIe 3.0 specifically you are capped at what the CPU can handle.

Once again, this is just the PCIe 3 bus, and as The_New_Normal stated, you would need a PLX chip to offload/add more lanes.
 

rtw915

Reputable
Dec 8, 2015
1
0
4,510
Not sure if you are still looking for an answer, but a picture is worth a thousand words. These diagrams are not specific to your processor, but you will get the idea.

This link shows how older computers were configured (Intel Hub Architecture). They had a north bridge and a south bridge.
http://cacafatek.blog.com/files/2011/01/370px-Motherboard_diagram.svg_.png

Intel integrated the north bridge into their CPUs and changed the name of the south bridge to Platform Controller Hub (PCH), shown in the diagram below. I believe this change happened with the Nehalem microarchitecture, which came after the Core 2 era.
http://h20564.www2.hp.com/hpsc/doc/public/imageServlet?DOCID=emr_na-c02808794-1/c02808800.jpg

So, to your question: the CPU supports 16 PCIe lanes independently (some newer CPUs support more than this) and has a bus to connect to the PCH. This bus is called DMI. All of the I/O (SATA, USB, NIC, audio, and even extra PCIe lanes) hangs off of the PCH. It is possible to saturate the DMI and cause other devices to slow down. There is obviously more latency as well, since traffic must go through the PCH and over the DMI.
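To make the saturation point concrete, here's a rough sketch. The DMI 2.0 figure (~2 GB/s per direction, roughly a PCIe 2.0 x4 link) and the device figures are nominal interface maxima I'm assuming for illustration, not measurements:

```
# Rough sketch: everything behind the PCH shares one DMI 2.0 link.
# Device figures below are nominal interface maxima, for illustration.

DMI_GBS = 2.0  # GB/s per direction for DMI 2.0 (~ PCIe 2.0 x4)

devices = {
    "SATA III SSD":    0.6,    # ~600 MB/s
    "2nd SATA SSD":    0.6,
    "USB 3.0 drive":   0.5,    # ~500 MB/s after 8b/10b encoding
    "Gigabit NIC":     0.125,
    "PCH PCIe 2.0 x1": 0.5,
}

total = sum(devices.values())
print(f"worst-case PCH traffic: {total:.3f} GB/s vs DMI {DMI_GBS} GB/s")
if total > DMI_GBS:
    print("-> DMI saturates; devices behind the PCH contend and slow down")
```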

As a side note, many PCIe x16 slots only look like x16 mechanically. Electrically they are x8, x4, or sometimes even x1.
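On Linux you can see what each device actually negotiated, which exposes exactly this. The sketch below reads the standard sysfs link-width attributes (Linux-only; no extra packages needed):

```
# Compare each PCIe device's negotiated link width with its maximum,
# using the standard sysfs attributes. A card in a mechanical x16 slot
# wired (or trained down) to x8/x4/x1 shows up here.

from pathlib import Path

for dev in sorted(Path("/sys/bus/pci/devices").iterdir()):
    try:
        cur = (dev / "current_link_width").read_text().strip()
        mx = (dev / "max_link_width").read_text().strip()
    except OSError:
        continue  # device exposes no PCIe link attributes
    if cur != "0":
        note = "" if cur == mx else "   <- narrower than the device max"
        print(f"{dev.name}: x{cur} of x{mx}{note}")
```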

On top of all of this, some motherboards also include a PCIe switch (PLX). You can think of it as a networking switch, but for PCIe, integrated into the motherboard. Here is a diagram:
http://www.sotechdesign.com.au/wp-content/uploads/2013/07/asus-p9d-e4l-block-diagram.png
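To stretch the networking-switch analogy into code, here's a purely conceptual sketch; the class and the port layout are made up for illustration and don't model any real PLX part:

```
# Conceptual: a PCIe switch fans one upstream link out to several
# downstream ports, but all downstream traffic still shares the
# upstream link back to the CPU.

class PcieSwitch:
    def __init__(self, upstream_lanes: int, downstream_ports: list[int]):
        self.upstream = upstream_lanes
        self.ports = downstream_ports  # lane width per downstream port

    def describe(self) -> str:
        return (f"x{self.upstream} up -> "
                + " + ".join(f"x{w}" for w in self.ports)
                + f" down ({sum(self.ports)} switched lanes, all sharing"
                  f" the x{self.upstream} uplink to the CPU)")

plx = PcieSwitch(16, [16, 16])  # e.g. two full-width GPU slots
print(plx.describe())
```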