Understanding Skylake-X's PCIe Lanes

cmasupra

Hello, everyone. I'm trying to understand how exactly the PCIe lanes work on Intel's Skylake-X platform. I've done hours of research and can't seem to find answers to these questions.

1) The 7900X has 44 lanes on the CPU, but I read that the chipset has 24 lanes that are used for RAID controllers, network controllers, USB controllers, etc. Does that mean there are really 44+24 PCIe lanes that can be used, depending on how the motherboard is wired? Or does it mean that the chipset just has access to use up to 24 of the 44 lanes?

2) Assuming there are 44+24 PCIe lanes when using a 7900X (as opposed to just 44 lanes), is it technically a little slower (however unnoticeable it may be) to use the chipset's lanes instead of the CPU's lanes?

3) Where is the chipset? Is it on the CPU or the motherboard? I know that many years ago, Intel moved the "north bridge" onto the CPU (which is why the memory controller is now on the CPU), but is there still an X299 chipset on the motherboard?

4) What would a motherboard manufacturer wire to the chipset instead of the CPU normally? For example, SATA ports, NVMe M.2 slots, PCIe x4 slots, or USB 3.1 Gen 2 (since it appears Skylake-X only has Gen 1 support natively)?

The reason I'm asking is that I need to understand whether the 28 lanes from the 7820X are enough for my personal use or whether I'd have to go with the 7900X for 44 lanes. Ignoring price, I'd rather have 8 faster cores instead of 10 cores, but I'll go with 10 if I need the 44 lanes. I just need to understand how the lanes are divided up in order to decide what's best for me.

Thanks in advance!
 
Solution
The CPU has 44 dedicated lanes, as you pointed out. Usually the x16 slots (and possibly M.2) are wired directly to the CPU. There are also indeed 24 lanes in the X299 chipset, and these are separate from the 44 in the CPU. They are split up for RAID (not including VROC), network, USB, etc., and the actual amount usable by the end user varies with the motherboard design. They are counted separately. However, the connection from the PCH (X299) to the CPU is an x4 PCIe 3.0 DMI link. This is why you typically wouldn't see more than one M.2 connected directly to the X299 chipset, and it's also the likely reason VROC uses the CPU's available lanes.

They shouldn't be slower; it's just that the connection to the CPU is limited to that x4 PCIe 3.0 DMI link from the X299 PCH. You still have tons of lanes available, just not ideal for M.2 RAID off the chipset.
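To put a rough number on that DMI ceiling, here's a quick back-of-the-envelope sketch (Python; the ~3.5 GB/s drive figure is my assumption for a fast PCIe 3.0 x4 NVMe drive, and protocol overhead beyond line encoding is ignored):

```python
# DMI 3.0 is electrically a PCIe 3.0 x4 link.
lanes = 4
gt_per_s = 8.0               # PCIe 3.0: 8 GT/s per lane
encoding = 128.0 / 130.0     # 128b/130b line encoding

dmi_gb_s = lanes * gt_per_s * encoding / 8   # GB/s (bytes, not bits)
print(f"DMI 3.0 usable bandwidth: ~{dmi_gb_s:.2f} GB/s")

# Assumed sequential read speed of one fast PCIe 3.0 x4 NVMe drive.
nvme_gb_s = 3.5
print(f"One NVMe drive: ~{nvme_gb_s} GB/s "
      f"({nvme_gb_s / dmi_gb_s:.0%} of the DMI link)")
```

That works out to roughly 3.9 GB/s, so a single fast NVMe drive can nearly saturate the link on its own, which is why M.2 RAID behind the chipset doesn't make much sense.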

The PCH is the X299 chipset in this case; it handles what used to be the southbridge's duties. A quick look at an X299 chipset diagram may explain this better.

Everything you mention: all the USB, Ethernet, Wi-Fi, SATA, U.2, audio, and maybe a single M.2 in X299's case. Usually the x4 or x1 PCIe slots are wired to the chipset, whereas the x16 slots go directly to the CPU.

I understand what you mean, as I'm using a 28-lane 5820K in my X99 build. What are you planning in your build? Multiple GPUs or M.2 drives? I'm currently using 20 PCIe lanes with my GPU and M.2. Technically, I should be able to run another M.2 (both at 3.0 x4 speed) and still only use 24 lanes. The X99 chipset in my case only has 8 PCIe 2.0 lanes. I'd say if you're running a single GPU and 1 or 2 M.2 drives, 28 lanes should be fine. SATA drives shouldn't matter. However, depending on the board, you should read the fine print in the motherboard manual, as USB ports can be enabled/disabled depending on whether, for example, Wi-Fi is enabled. In my case, to gain use of 2 USB ports on the I/O panel, I had to disable the SATA Express ports, as they share bandwidth. I'm considering a similar build with the same 7820X myself down the road, but I'm curious about Threadripper as well.
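If it helps, here's how I'd tally a CPU lane budget for a build like that (a minimal sketch; the device list and lane counts are just assumed examples for a typical setup):

```python
# Hypothetical lane budget for a 28-lane CPU such as the 7820X.
CPU_LANES = 28

devices = {
    "GPU (x16 slot)":   16,
    "M.2 NVMe #1 (x4)":  4,
    "M.2 NVMe #2 (x4)":  4,
}

used = sum(devices.values())
for name, lanes in devices.items():
    print(f"{name:20s} {lanes:2d} lanes")
print(f"Total: {used}/{CPU_LANES} CPU lanes used, {CPU_LANES - used} spare")

# SATA drives, USB, audio, the NIC, and x1/x4 slots typically hang
# off the chipset's own lanes, so they don't count against this budget.
```

Run it and you get 24 of 28 lanes used, which matches the math above.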

 
cmasupra

I'm definitely planning to use 1 GPU, 1 M.2 NVMe SSD, and a few HDDs & SATA SSDs. Depending on how I end up planning my build, there could be any variation between:

• A PCIe x1 NIC (an old Intel 9301 Gigabit CT Desktop Adapter that has proven to provide a more stable connection than the built-in NICs on my X58 and my X79 systems)
• A PCIe sound card (unlikely since I've been using USB headsets and built-in monitor speakers for many years, but I have really been wanting a nice speaker setup lately)
• A second NVMe SSD in the future to add additional speedy storage for programs/games when the first one fills up
• A second GPU in the future. I always plan my builds thinking "what if I decide to SLI", but I have yet to ever choose SLI over simply upgrading my video card.

To make sure I understand, all the motherboard's SATA ports, USB rear and front ports, and M.2 NVMe ports will use PCIe lanes from either the chipset or CPU, right? I read somewhere that all the Skylake-X CPUs support up to 3 NVMe M.2 drives, but I don't remember if that's from the CPU or the chipset (or it may depend on the motherboard), and I don't know if there's special circuitry for those drives in the chipset/CPU or if they just use x4 PCIe lanes.
 
Basically, everything except the M.2 and PCIe x16 slots will use the X299 chipset's 24 Gen 3.0 lanes. On X299, most boards' M.2 slots will likely use the 28 lanes in the 7820X directly. Again, check the motherboard manual for specifics. For bootable RAID 0 NVMe, the VROC option uses CPU lanes, if that interests you.

Your planned config is very similar to my X99 build: 1 GPU, 1 M.2 x4 3.0 NVMe, and 2 SATA SSDs.

The NIC and sound card would certainly use the chipset's PCIe lanes if installed in x1 or x4 slots. The only "possible" issue I can think of is if you try adding SLI and a second M.2 that is tied to the CPU at the same time. When I was running GTX 970 SLI with my M.2, both cards were forced to x8 mode with the M.2 at full x4 speed. In the same SLI config before I added the M.2, the cards would run at x16/x8. However, if an M.2 slot is available that runs off chipset lanes in your case, this shouldn't be an issue.
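If you ever want to verify what the cards actually negotiated, on Linux the link width and speed are exposed in sysfs (a rough sketch; `lspci -vv` shows the same information under LnkSta):

```python
# List the negotiated PCIe link width/speed for every PCI device,
# e.g. to catch a GPU that dropped from x16 to x8.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/pci/devices/*")):
    try:
        with open(os.path.join(dev, "current_link_width")) as f:
            width = f.read().strip()
        with open(os.path.join(dev, "current_link_speed")) as f:
            speed = f.read().strip()
    except OSError:
        continue  # not every device exposes link attributes
    print(f"{os.path.basename(dev)}: x{width} @ {speed}")
```

On Windows, GPU-Z's bus interface readout tells you the same thing for the graphics card.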
 

cmasupra

I think I'm understanding it now. I'm going to try to somewhat summarize it for myself and anyone else who stumbles upon this question.

Generally, motherboard manufacturers will connect 1-3 PCIe x16 slots and 1-2 M.2 slots to the CPU. Everything else (USB, SATA, PCIe x4, PCIe x1, audio, Ethernet, etc.) will go through the chipset's 24 PCIe lanes. That's just a general guide, though; more expensive motherboards might have more connected to the CPU and only fall back to the chipset's PCIe lanes when the CPU runs out.

The limitation on the chipset's PCIe lanes, however, is that they all connect to the CPU through a single PCIe 3.0 x4 DMI link. That will be the limiting factor in how much data can reach the CPU from the chipset (including all the USB ports, SATA ports, Ethernet, etc.).
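To make that bottleneck concrete, here's a rough worst-case tally of chipset-attached devices all pushing data at once (Python; the per-device throughput figures are assumed ballpark numbers, not measurements):

```python
# Everything behind the X299 PCH shares one ~3.9 GB/s DMI 3.0 link.
DMI_GB_S = 3.94  # PCIe 3.0 x4 with 128b/130b encoding

behind_pch_gb_s = {
    "SATA SSD #1":      0.55,
    "SATA SSD #2":      0.55,
    "USB 3.1 Gen 2":    1.20,   # ~10 Gb/s line rate
    "Gigabit Ethernet": 0.125,
}

total = sum(behind_pch_gb_s.values())
print(f"Worst-case simultaneous demand: {total:.2f} GB/s "
      f"({total / DMI_GB_S:.0%} of the DMI link)")
```

In practice these devices rarely peak at the same time, which is why the x4 link is normally fine for SATA, USB, and network traffic.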

Question: Does the DMI link take up 4 PCI-E lanes from the CPU's 28 or 44, or is it a separate interface just for the chipset? I'd imagine it is separate, but it's worth asking.
 

cmasupra

Good point about Kaby Lake. Makes sense. Thanks for the help! It sounds like I should probably go with the 7820X unless I decide that I want a crazy number of add-in cards for some reason.
 
Should be a great choice. I'm leaning toward that same CPU myself, but holding off a bit to see what Threadripper has to offer. I do have to give AMD credit for not limiting PCIe connectivity with X399: at least 60 lanes. Besides that, it looks like AMD will give you more cores for the same money. Personally, though, I don't really need tons of PCIe connectivity. Regarding CPU core counts, I'll have to decide whether I want faster cores or more of them. Given that X299 overclocks really well (if you have the cooling and power capacity), I'd have to see some great Threadripper reviews to go that route. It seems like it could come down to whether I want an 8-core 7820X OC'd to 4.6+ GHz or 12-ish Threadripper cores (guessing around the same price) at around 4 GHz.