PCIe lanes, why you so mysterious? How many do I need?

TehPenguin

Honorable
May 12, 2016
What it comes down to is: do I have to spend the extra €200 for my system to run smoothly, or will I be fine with "just" 28 lanes? (5820K/6800K vs. 5930K)

Motherboard: Asus ROG Strix X99
Stuff I want to have connected:
2x HDD
2-3x SATA III SSD
1x M.2 SSD
1-2x GPU's
(hope I am not missing anything else using PCIe)

I've read there hasn't been a GPU yet to max out the x8 lane so that shouldn't be a problem. What I am wondering though is if everything will work fine with 1 GPU and starts slowing down when I decide to add the 2nd GPU later on.
 

TehPenguin

Honorable
May 12, 2016


Yes, I know that. My concern is the other peripherals I will have connected.
 
28 lanes really ought to be sufficient for your needs. More PCIe lanes aren't really needed unless you want to do three-way or four-way SLI, or run multiple M.2/PCIe-based SSDs alongside multiple GPUs. So far no GPU has been held back by 8 lanes of PCIe 3.0, so that's not really an issue except maybe in the long term, e.g. if you want to do high-end SLI a few years from now.
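For a rough sense of the lane budget, here is a minimal sketch in Python, assuming the two GPUs drop to x8/x8 and the M.2 slot takes four CPU-attached PCIe 3.0 lanes (the exact wiring is board-specific, so check the Strix X99 manual); the SATA HDDs and SSDs hang off the chipset rather than off CPU lanes:

```python
# Rough CPU PCIe lane tally for the build above -- a sketch, not board-specific.
# Assumes x8/x8 for two GPUs and a CPU-attached x4 M.2 slot; SATA drives go
# through the PCH/DMI instead of consuming CPU lanes.
cpu_lanes = 28  # 5820K / 6800K (a 5930K would give 40)

devices = {
    "GPU 1 (x8 when paired)": 8,
    "GPU 2 (x8 when paired)": 8,
    "M.2 SSD (x4)": 4,
}

used = sum(devices.values())
print(f"CPU lanes used: {used} of {cpu_lanes}")  # 20 of 28
print(f"Lanes left over: {cpu_lanes - used}")    # 8
```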
 

TehPenguin

Honorable
May 12, 2016
@Supernova1138 But doesn't SATA III use PCIe lanes? I've read somewhere that on a Z170 chipset you disable your SATA ports when using M.2.

@InvalidError I am still thinking if putting the SSDs in RAID is a good idea or not.

@synphul yes, thank you. As I said, I am not concerned about SLI unless the other peripherals would be affected.

Is there a way to calculate all this? I sadly understand very little about how motherboards work and how the circuits are interconnected.
 
No matter how many video cards you have installed, they will never use anything other than 16 lanes in total: one card @ x16, two @ x8 each, three @ x8 + x4 + x4, four @ x4 each. All of these PCIe lanes are controlled directly by the CPU in modern CPUs.

Additional PCIe lanes are controlled by the PCH. On X99, the motherboard designers decide in advance which types of devices can be used, and how many of each there can be. It is possible for a motherboard designer to provide more connectors for devices than there are PCIe ports to support them all. In that case, they use switches that disable certain ports if the lanes for that device are being used by another device.

To see whether they went the route of disabling devices when certain ports are used, go to the motherboard's website and look at the specs. Sometimes you will see something like "If port M.2_2 is used, PCI_Express_2 will be disabled". There can be many of these, a few of these, or none at all.

In most cases, unless you want some hugely crazy setup, even the lowest-end X99 CPU will be just fine. And you probably already know whether you need the 28 PCIe lanes or the 40 PCIe lanes.
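As a quick sketch of the GPU lane split described above (actual splits vary by board, so treat this as illustrative, not as your board's exact behavior):

```python
# Illustrative mapping of the x16 GPU lane budget described above; real boards
# may split differently, so check the motherboard manual.
def gpu_lane_split(num_cards: int) -> list[int]:
    splits = {
        1: [16],
        2: [8, 8],
        3: [8, 4, 4],
        4: [4, 4, 4, 4],
    }
    return splits.get(num_cards, [])

for n in range(1, 5):
    print(f"{n} card(s): {gpu_lane_split(n)}")
```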
 
Solution

InvalidError

Titan
Moderator

To saturate the DMI 3.0 link (PCIe 3.0 x4, roughly 3.9 GB/s) using SATA SSDs, you would need a seven-drive RAID array, assuming other variables do not cause the array's performance scaling to taper off first.
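For the arithmetic behind that figure, a minimal sketch, assuming roughly 550 MB/s of real-world throughput per SATA III SSD and about 3.9 GB/s of usable DMI 3.0 bandwidth:

```python
# Back-of-the-envelope check of how many SATA SSDs saturate DMI 3.0 (a sketch).
dmi3_usable_gb_s = 3.9   # DMI 3.0 ~= PCIe 3.0 x4, usable throughput in GB/s
sata_ssd_gb_s = 0.55     # typical real-world SATA III SSD sequential speed

drives_needed = dmi3_usable_gb_s / sata_ssd_gb_s
print(f"SATA SSDs needed to saturate DMI 3.0: {drives_needed:.1f}")  # ~7.1
```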
 

TehPenguin

Honorable
May 12, 2016


Huh, it makes me wonder even more about the 40-lane CPUs. Apart from 3+ card CF/SLI, naturally.

Regarding MoBo specs, all I found is this:
The PCIEX4_1, PCIEX1_2, and USB3.1_EC1EA2 connectors share the same bandwidth. By default, the PCIEX4_1 slot and PCIEX1_2 slot automatically run at x1 mode with USB3.1_EC1 and USB3.1_EA2 enabled for best resource optimization.

and cannot make any sense of it.

@InvalidError Thanks!
 
With these new Nvidia cards (GTX 1080 and 1070), SLI of more than 2 cards is dead.
AMD has not yet commented on this matter, to my knowledge.

PCIEX4_1, PCIEX1_2, and USB3.1_EC1EA2 share a data pipe. In that default mode, PCIEX4_1 and PCIEX1_2 most likely run as x1 PCIe slots; that part is fairly normal, as there are only 16 PCIe lanes controlled by the CPU, and those are always assigned to the video card slots (or to whatever device(s) are in those slots). Slot 1 is the x1 slot closest to the CPU. Slot 2 is normally reserved for the first video card. Slots 3 and 4 are often a combination of an x1 slot and an x4 slot. And slot 5 is often the second video card slot.

The fun part is deciphering which physical port USB3.1_EC1EA2 actually is. It's clearly a USB 3.1 port, but tracking down which one might be tricky. Your motherboard manual might well show you which one it is.