Sturmgewehr_44:
I've watched this video by Linus:
https://www.youtube.com/watch?v=rctaLgK5stA
It is explained pretty well, but I'm still confused. How do PCIe lanes work exactly?
My 3770K has a mere 16 lanes. That is enough for just my 980... allegedly. However, I'm also using a network card in a 1x slot as well, and I don't notice any performance loss or lack of bandwidth. Is the number of lanes only relevant for the 16x slot, or do the devices in every PCIe slot count against the number of lanes the CPU supports?
PCIe is rather complicated so I'll try to focus on answering just your question without getting into too much technical detail.
PCIe devices connect to the system through what's known as a root port, and each root port communicates with the system through a root complex. Each root complex controls one or more root ports.
Intel's CPUs for the LGA-1156, LGA-1155, and LGA-1150 sockets have one 16x root port on the CPU. Intel's CPUs for the LGA-2011 socket have three root ports on the CPU, two 16x and one 8x. These ports can be split into multiple sub-ports as follows:
Nehalem (excluding X58-based platforms), Westmere (excluding X58-based platforms), and Sandy Bridge: 16/0 or 8/8
Ivy Bridge, Haswell: 16/0/0, 8/8/0, or 8/4/4
Sandy Bridge-E, Ivy Bridge-E, Haswell-E: 16/16/8/0/0/0/0/0/0/0 all the way down to 4/4/4/4/4/4/4/4/4/4, for up to 10 devices connected at once.
Each sub-port can be individually down-negotiated, so it is possible to configure an Ivy Bridge microprocessor as 8/4/1 or 4/1/4. However, it is not possible to connect more than three devices to an Ivy Bridge or Haswell CPU without using an external PCIe switch, and it is not possible to connect more than two devices to Nehalem, Westmere, or Sandy Bridge.
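To make the splitting rules concrete, here is a toy sketch (not any real Intel configuration tool) that encodes the valid CPU lane splits listed above as lookup tables and checks a proposed configuration against them. The dictionary keys and width tuples are just illustrative names taken from the list above:

```python
# Valid root-port splits per CPU family, as listed above.
# Each entry is a set of allowed width tuples (zero-width ports omitted).
VALID_SPLITS = {
    "sandy_bridge": {(16,), (8, 8)},
    "ivy_bridge":   {(16,), (8, 8), (8, 4, 4)},
    "haswell":      {(16,), (8, 8), (8, 4, 4)},
}

def is_valid_split(cpu, widths):
    """Return True if the tuple of port widths is a supported split."""
    # Drop unused (zero-width) ports before comparing.
    active = tuple(w for w in widths if w > 0)
    return active in VALID_SPLITS[cpu]

print(is_valid_split("ivy_bridge", (8, 4, 4)))  # valid three-way split
print(is_valid_split("ivy_bridge", (8, 8, 4)))  # invalid: exceeds 16 lanes
```

Note this only models the split itself; per-port down-negotiation (e.g. running the 8/4/4 split as 8/4/1) happens on top of a valid split.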
In addition to the PCIe lanes exposed by the CPU itself, there is an additional 8x port on the PCH chipset. This port can be configured as 8/0/0/0/0/0/0/0 all the way down to 1/1/1/1/1/1/1/1, which is ideal for connecting numerous low-bandwidth add-in devices.
Most motherboards that include 4x and 1x slots connect those slots to these ports on the PCH, often with shared bandwidth. This enables the use of either a single 4x port in 4x mode, or a 4x port in 1x mode in addition to three 1x ports.
One of the beautiful aspects of PCIe is that it automatically negotiates link width. It is possible to insert a native 1x device into a full-speed 16x slot and have it run at 1x. The card will not be well secured but it will work. Furthermore, it is possible to cut the connector on a native 16x device (don't do this) and insert it into a 1x slot. If power requirements are satisfied, it will work at 1x link width. Similarly, placing tape over some of the connectors on a 16x device can reduce it to 8x, 4x, or 1x as appropriate.
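The negotiation described above boils down to: the link trains at the widest width that both the slot and the card support, which for the standard power-of-two widths is simply the smaller of the two. A minimal illustration of that rule (a simplification; real link training also negotiates speed and can retrain downward):

```python
# Simplified model of PCIe link-width negotiation: the link comes up at
# the widest width both ends support. Standard widths are 1, 2, 4, 8, 16.
def negotiated_width(slot_width, card_width):
    """Width the link trains at, given what each side supports."""
    return min(slot_width, card_width)

print(negotiated_width(16, 1))   # native 1x card in a 16x slot -> runs at 1x
print(negotiated_width(1, 16))   # 16x card forced into a 1x slot -> runs at 1x
print(negotiated_width(16, 16))  # full match -> runs at 16x
```

Taping off connectors, as mentioned above, effectively lowers `card_width`, which is why a 16x card can be forced down to 8x, 4x, or 1x.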
Almost all graphics cards are native 16x devices but this is really just to take advantage of the 75 watts of power that 16x slots are required to deliver when requested. The bandwidth provided by a 16x slot is almost always sufficient; performance would only be impacted by utilizing a modern graphics card on a very old motherboard. For example, running a GTX 980 (native 16x PCIe 3.0) at 4x PCIe 1.1 may cause problems, but running it at 8x PCIe 3.0 will have a negligible impact on performance.
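To put numbers behind that last point, here is a quick calculation of usable link bandwidth per generation and width, using the published PCIe line rates and encoding overheads (8b/10b for 1.x/2.0, 128b/130b for 3.0):

```python
# Approximate usable throughput per lane, in MB/s, after encoding overhead.
# Line rates: PCIe 1.1 = 2.5 GT/s, 2.0 = 5 GT/s, 3.0 = 8 GT/s.
PER_LANE_MB_S = {
    "1.1": 2500 * 8 / 10 / 8,     # 8b/10b encoding  -> 250 MB/s per lane
    "2.0": 5000 * 8 / 10 / 8,     # 8b/10b encoding  -> 500 MB/s per lane
    "3.0": 8000 * 128 / 130 / 8,  # 128b/130b encoding -> ~985 MB/s per lane
}

def link_bandwidth_mb_s(gen, lanes):
    """Approximate one-direction bandwidth of a link, in MB/s."""
    return PER_LANE_MB_S[gen] * lanes

print(round(link_bandwidth_mb_s("3.0", 16)))  # 16x PCIe 3.0: ~15754 MB/s
print(round(link_bandwidth_mb_s("3.0", 8)))   # 8x PCIe 3.0:  ~7877 MB/s
print(round(link_bandwidth_mb_s("1.1", 4)))   # 4x PCIe 1.1:  1000 MB/s
```

The comparison makes the point: 8x PCIe 3.0 still offers roughly eight times the bandwidth of 4x PCIe 1.1, which is why halving the lane count on a modern platform costs almost nothing while an old slot can genuinely bottleneck a GTX 980.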