Questions about how Lanes (PCIe, CPU vs PCH) work

Etherel15

Distinguished
Feb 2, 2011
10
0
18,510
I'm researching a future build (a few months out), and I'm looking for some clarification on how the listed lanes work for a mobo chipset.

Using the Z370 as an example (I'm pre-researching now, but ultimately waiting for the Z390s), it took me forever to clarify that the CPU (Coffee Lake LGA1151 300-series i7) has 16 lanes built in which are used for graphics cards, and that the listed 24 lanes for the Z370 chipset are in addition to those 16, not the total including them. But then that led me to wonder how the PCIe 3.0 lanes are used, or made available for use.

Using the Asus ROG Maximus X Hero (WiFi) as a further example, I know that the 16 lanes from the CPU are what will be used by my GPU. But are the additional 24 lanes used by the mobo's other connections (SATA, USB, WiFi, M.2, etc.) completely utilized by the mobo, or are some of them left open for additional add-on devices through any of the PCIe 3.0 expansion slots? This mobo's PCIe expansion slots are listed as:

2 x PCIe 3.0 x16 (x16 or dual x8 or x8/x4/x4)
1 x PCIe 3.0 x16 (x4 mode) *1
3 x PCIe 3.0/2.0 x1
Reference

I know that if I use 1 GPU it would use x16, and if I used 2 they would pair at x8/x8, but obviously there are extra PCIe slots on my mobo besides the 2 main PCIe 3.0 x16. So let's say I wanted to SLI 2 GPUs: could I then use a couple of the smaller PCIe slots for other devices (say a sound card, additional SSD, capture card, etc.), and how would adding those affect the bandwidth of the devices I have plugged into the PCIe slots?

My burning question boils down to this: is the MAX number of lanes I can have across ALL my PCIe slots 16? Do additional expansion cards (that are NOT graphics cards) take away from my GPU's lanes, potentially throttling a single GPU to x8, and SLI to x8/x4, and if so, how does that affect things?

The more detailed the answer on how mobo lanes work, the better. I'm hoping to learn, and a lot of descriptions seem to assume people know more than they do, which often leaves me confused. Thanks a bunch, all!

 
justin.m.beauvais
Ok, where to start. You have 16 PCI-E lanes just for video. Every other PCI-E slot is going to be either a 4x or a 1x. M.2 slots also have PCI-E lanes dedicated to them. You also have PCI-E lanes dedicated to the southbridge chipset that most of your other I/O operates off of, and it can even have its own PCI-E lanes that other things connect to. With the exception of the lanes to the southbridge and the crazy sharing that has to happen there, every PCI-E lane is discrete. Saturating lanes will not affect performance on other lanes.

I'm not sure if I answered all of your questions or not... I feel like I didn't because your post is long.
 

Etherel15

@justin.m.beauvais

Let's see if I picked up what you're saying! I got that the first 2 PCIe 3.0 x16 slots are the ones that pipe directly to the CPU (and therefore do either x16/x0 or x8/x8), and that the other PCIe expansion slots are each discrete, but run at a max x4 rating and go through the chipset?

Meaning that (as on my example Asus mobo) the 2 x PCIe 3.0 x16 slots are to be used for GPUs and directly use the CPU's built-in 16 lanes, while the other slots (the 1 x PCIe 3.0 x16 in x4 mode and the 3 x PCIe 3.0 x1) run off the chipset but each have their own discrete connection, meaning I could plug as many different expansion cards into them as I like without reducing their bandwidth?
 

Etherel15



Found a diagram of a similar Z370 mobo, the Asus Strix. Looking at this made a lot more sense, I think (I wish they made these diagrams easily available on every mobo's specifications page!)
[Block diagram: ASUS ROG STRIX Z370-E Gaming, via KitGuru]


I think this diagram says that the Coffee Lake CPU has ownership of the Slot 1 and Slot 2 PCIe 3.0 x16 slots (using its 16 PCIe lanes), and that the PCH chipset, which has 24 lanes, divides its lanes up as:

4 for the Slot 3 PCIe x4
4 for the M.2_1 port
4 for EITHER the M.2_2 second SSD port OR the 5th and 6th SATA connectors
1 for SATA 3 and 4
1 for the Realtek audio
2 for whatever the ASM3142 U31G2 is
1 for the WiFi module
1 for the USB 3.0 x6
1 for the USB 2.0 x6
1 for the Intel I219-V
4 for the 4 extra PCIe x1 slots

I think this is finally making sense?
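Tallying those up as a quick sanity check (the counts are just my reading of the diagram, not official figures, and some of those items may not literally consume PCIe lanes):

```python
# Rough tally of how I read the Z370 PCH's 24 lanes being split up
# on the Strix block diagram (unofficial, just my interpretation).
pch_lanes = {
    "Slot 3 (PCIe x4)": 4,
    "M.2_1": 4,
    "M.2_2 OR SATA 5/6 (shared)": 4,
    "SATA 3/4": 1,
    "Realtek audio": 1,
    "ASM3142 USB controller": 2,
    "WiFi module": 1,
    "USB 3.0": 1,
    "USB 2.0": 1,
    "Intel I219-V ethernet": 1,
    "PCIe x1 slots": 4,
}

print(sum(pch_lanes.values()))  # 24 -- matches the chipset's 24 lanes
```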

 

TJ Hooker

No, what @justin.m.beauvais said is incorrect.

The CPU provides 16 lanes. On Z-series mobos these can be split into x8/x8 or x8/x4/x4 if required. I think the specs page for the Maximus X Hero may have an error; I don't think that particular board supports x8/x4/x4 from the CPU, based on what the manual says. So the top two physical x16 slots get CPU lanes, and are completely independent from the other x16 slot, the three x1 slots, and any other ports/connectors.

Extra slots/ports/connectors may share bandwidth, as described for the particular mobo. Populating an M.2 slot may disable some SATA ports, or a PCIe slot, etc.; you'd have to read the manual for that mobo. Additionally, all the lanes provided by the chipset share a DMI 3.0 connection to the CPU, which provides bandwidth equivalent to a PCIe 3.0 x4 connection. So that is the max bandwidth you can get in total at one time for all devices connected through the chipset (SATA, M.2, PCIe, ethernet I believe, etc.).
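If you're curious where that ~4 GB/s figure comes from, it falls straight out of the published PCIe 3.0 link numbers. A quick sketch (nothing board-specific here, just the signalling rate and line encoding):

```python
# DMI 3.0 is electrically equivalent to a PCIe 3.0 x4 link, so its
# ceiling is just the PCIe 3.0 per-lane rate times four.
TRANSFER_RATE = 8e9     # PCIe 3.0: 8 GT/s per lane
ENCODING = 128 / 130    # 128b/130b line encoding overhead
LANES = 4               # DMI 3.0 is four lanes wide

bytes_per_sec = TRANSFER_RATE * ENCODING * LANES / 8  # 8 bits per byte
print(f"DMI 3.0 ceiling: {bytes_per_sec / 1e9:.2f} GB/s")  # ~3.94 GB/s
```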
 
Solution

Etherel15



Ah, so the top 2 PCIe x16 slots are reserved for the CPU (for GPU use, really), but literally EVERYTHING else besides the RAM has to go through the DMI, which has the speed (4.0 GB/s) of a basic x4 link, and this includes all M.2 slots (which, annoyingly, are rated at PCIe x4 each on their own), all USBs, all SATA connections, all Ethernet or WiFi, and all audio processing. And they all share the DMI's (quite honestly small-looking) 4.0 GB/s? So in theory, the more things I connect to empty PCIe lanes, the more potential for "traffic jams" of data, potentially slowing things down if the collective data usage of all those things exceeds 4.0 GB/s? Do I have that right?

 

TJ Hooker

Yes, that is correct. But remember that additional devices would only use up DMI bandwidth if they're in active use at a given moment. I'd say the only time you're really at risk of running into DMI bottlenecks is if you have dual NVMe SSDs that you're in the habit of accessing simultaneously with heavy sequential I/O. A SATA III port is never going to use more than ~0.6 GB/s, and gigabit Ethernet no more than ~0.11 GB/s. So you can have a lot of those 'low speed' peripherals running at once before you approach 4 GB/s.
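To make that concrete, here's a toy tally of simultaneous device traffic against the DMI ceiling. The per-device numbers are ballpark peaks I'm assuming for illustration, not measurements from any particular board:

```python
# Toy model: add up worst-case device throughput and compare it to the
# DMI 3.0 ceiling. All figures are rough peak rates, in GB/s.
DMI_CEILING = 3.94  # ~PCIe 3.0 x4

devices = {
    "SATA III SSD": 0.6,
    "gigabit ethernet": 0.11,
    "USB 3.0 drive": 0.5,   # assumed ballpark for a fast USB drive
    "NVMe SSD (x4)": 3.5,   # assumed ballpark for a fast NVMe drive
}

def headroom(active):
    """Leftover DMI bandwidth when these devices run flat-out at once."""
    return DMI_CEILING - sum(devices[name] for name in active)

print(headroom(["SATA III SSD", "gigabit ethernet", "USB 3.0 drive"]))  # ~2.73, fine
print(headroom(["SATA III SSD", "NVMe SSD (x4)"]))  # ~-0.16, DMI-bound
```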
 

Etherel15

Thanks TJ Hooker, that answers my questions. But now I'm geeking out over how to best do NVMe SSDs. Originally I was super excited to get multiple M.2 NVMe drives to RAID with an i7-8700K. BUT, it seems there's no way under the sun to provide enough lanes to get enough throughput with that CPU: the CPU itself only has 16 lanes, and the 300-series chipset only has the DMI 3.0 throughput. So the options seem to be:

A) potentially bottleneck the SSDs (now, or soon, given the apparent lack of future-proofing) and just slap the M.2 drives into their slots going through the DMI,

B) go with X299 and an i9-7980XE, which seems to be outperformed by the 8700K and costs many times as much, just to have access to extra lanes and VROC support, or

C) try to figure out how in the hell I could (by forcing my GPU to run in x8 mode) use the other x8 to somehow support a device that would hold and RAID 2-4 NVMe drives (which seems to be a VROC thing that's only supported on X299 anyway), to get the maximum bandwidth for the SSDs and still be able to boot from them.
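To put rough numbers on the bottleneck I'm worried about in option A, here's a quick sketch (the per-drive sequential speed is a hypothetical ballpark, not a spec):

```python
# Why chipset-attached NVMe RAID 0 tops out: the array's combined
# sequential rate can't exceed the single DMI 3.0 uplink to the CPU.
DMI_CEILING = 3.94  # GB/s, ~PCIe 3.0 x4
DRIVE_SEQ = 3.0     # GB/s per drive, assumed sequential read speed

for n in (1, 2, 3):
    raw = n * DRIVE_SEQ
    print(f"{n} drive(s): raw {raw:.1f} GB/s -> effective {min(raw, DMI_CEILING):.2f} GB/s")
# 1 drive:  3.0 -> 3.00 (fits under the DMI)
# 2 drives: 6.0 -> 3.94 (DMI-bound)
# 3 drives: 9.0 -> 3.94 (DMI-bound)
```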

Am I just over-analyzing this, or is this a major problem/fear people are running into with Coffee Lake, that it feels completely non-future-proof (or even very current-proof)? And what are they saying about it?
 

Etherel15

The primary need will be gaming while video streaming at 1080p and recording at 4K, followed by video editing the 4K footage. I've heard that having faster speeds for the large files helps with fine-detailed video scrubbing. I'd also hate for the technology in a year or 2 to make RAIDing SSDs more practical, and not be able to take advantage of it.
 


The tech is already here, it is just a bit pricey. If you are worried about PCI-E lanes, get a Threadripper system. They have 64 PCI-E lanes, which is more than any Intel CPU excluding some dual-chip Xeons. A Threadripper will have cores to spare for rendering video and streaming as well... heck, with 32 threads at your disposal you might be able to game while you render, though I'd say it would be better not to do that.

AMD also supports bootable NVMe drives in RAID. So if you wanted some really fast storage that wasn't Intel's XPoint, you'd be all set. Sure, you wouldn't have those high i7 8700K framerates, but it would still be a fast enough system to do whatever you wanted to do. Also, the next-gen Threadrippers should be out soon with the improvements from Zen+.

So, why worry about having enough PCI-E lanes when you can get yourself an absurd number of them and a great editing machine all in one?