Huh? Looking at your charts, they show a 1% difference between x16 and x8. How is that a "serious performance" hit? Your link shows there is nearly no difference between them, unless you drop down to x4.
Hi, I didn't say "serious performance"; I only said "Yes it will", no more.
Of course it depends on the graphics: what type, which graphics engine, resolution, textures, and so forth.
All of this is just a brief comparison, but in general the hit will not be as big as one might think; on the other hand, it could be quite large. It all depends on the graphics card, the game settings, and the game's graphics engine.
Quote from the site:
Each game has different requirements for PCI-Express bandwidth, depending on the game engine design. Alan Wake is most dependent on a fast bus interface, losing up to 70% framerate, whereas Aliens vs. Predator handles bandwidth starvation the best, losing only 10% in worst case (1280x800 GTX 680).
Contrary to intuition, the driving factor for PCI-Express bus width and speed for most games is framerate, not resolution. Our benchmarks conclusively show that with higher resolution, the performance difference between PCIe configurations shrinks. This is because the bus transfers a fairly constant amount of scene and texture data - for each frame. The final rendered image never moves across the bus, except in render engines that do post-processing on the CPU, for example Alan Wake. Even in that case, the reduction in FPS from higher resolution is bigger than the increase in pixel data.
NVIDIA's GeForce GTX 680 suffers a relatively bigger performance hit from a slower PCI-Express interface than AMD's HD 7970. Going from x16 3.0 to x4 1.1 causes the HD 7970 to lose 14%, while the GTX 680 loses 27% real-life performance for the same transition. A reasonably accurate rule of thumb is that GTX 680 loses twice the percentage from slower PCI-E speeds, compared to HD 7970.
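To make the quoted point concrete: if each frame moves a roughly fixed amount of scene and texture data across the bus, then PCIe traffic scales with framerate, not resolution. Here is a toy sketch of that arithmetic; the per-frame data size and the framerates are made-up illustrative numbers, not measurements from the review.

```python
# Toy illustration of the quoted claim: PCIe load scales with framerate,
# not resolution, because scene/texture data per frame is roughly constant.
# All numbers are illustrative assumptions, not benchmark results.

PER_FRAME_DATA_MB = 30.0  # assumed scene + texture traffic per frame (MB)

def bus_traffic_mb_per_s(fps: float) -> float:
    """Approximate bus traffic if each frame moves a fixed amount of data."""
    return fps * PER_FRAME_DATA_MB

# A lower resolution typically runs at a higher framerate...
low_res_fps = 120.0   # e.g. 1280x800 (assumed)
high_res_fps = 60.0   # e.g. 2560x1600 (assumed)

# ...so, counter-intuitively, the lower resolution puts MORE data on the bus.
print(bus_traffic_mb_per_s(low_res_fps))   # 3600.0 MB/s at 120 fps
print(bus_traffic_mb_per_s(high_res_fps))  # 1800.0 MB/s at 60 fps
```

This is why the review finds the gap between PCIe configurations shrinking at higher resolutions: the framerate drops faster than the per-pixel data grows.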
Best regards from Sweden