Our own experience testing graphics throughput, from the introduction of PCI Express 2.0 through as recently as earlier this year, has shown that it's very difficult to saturate the x16 bandwidth currently available on PCI Express 2.1 motherboards. It takes a multi-GPU configuration or a very high-end single-GPU card to produce a measurable difference between an x8 and an x16 connection.
We asked both AMD and Nvidia to comment on the need for PCI Express 3.0 as an enabler for the next generation of graphics card performance. An AMD spokesperson replied that they weren't able to comment at this time.
A spokesperson from Nvidia was a little more forthcoming: “Nvidia is a key contributor to the industry’s development of PCI Express 3.0, which is expected to have twice the data throughput of the current generation (2.0). Whenever there is a major increase in bandwidth like that, applications emerge that take advantage of it. This will benefit consumers and professionals with increased graphics and computing performance from notebooks, desktops, workstations, and servers that have a GPU.”
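For context on that "twice the data throughput" figure: the doubling comes from both a faster line rate and a more efficient encoding scheme. The numbers below are from the published PCI-SIG specifications, not from Nvidia's statement; the helper function is simply our own illustration of the per-lane math.

```python
# Back-of-the-envelope PCIe bandwidth math (illustrative sketch).
# PCIe 2.0: 5 GT/s per lane, 8b/10b encoding  -> 500 MB/s usable per lane.
# PCIe 3.0: 8 GT/s per lane, 128b/130b encoding -> ~985 MB/s usable per lane.

def lane_bandwidth_mb_s(gt_per_s, encoded_bits, payload_bits):
    """Usable bandwidth of one lane in MB/s, given line rate and encoding overhead."""
    return gt_per_s * 1e9 * (payload_bits / encoded_bits) / 8 / 1e6

pcie2_lane = lane_bandwidth_mb_s(5.0, 10, 8)      # 500.0 MB/s per lane
pcie3_lane = lane_bandwidth_mb_s(8.0, 130, 128)   # ~984.6 MB/s per lane

for lanes in (8, 16):
    print(f"x{lanes}: PCIe 2.0 = {pcie2_lane * lanes / 1000:.1f} GB/s, "
          f"PCIe 3.0 = {pcie3_lane * lanes / 1000:.1f} GB/s")
```

Note that an x16 PCIe 2.0 slot and an x8 PCIe 3.0 slot end up with nearly the same usable bandwidth, which is why the x8-versus-x16 question keeps resurfacing with each generation.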
Perhaps the key phrase here is “applications emerge that take advantage of it.” Nothing in the world of graphics is getting smaller. Displays are getting larger, high definition is replacing standard definition, and the textures used in games are becoming ever more detailed and intricate. We do not feel that the need exists today for the latest and greatest graphics cards to sport 16-lane PCI Express 3.0 interfaces. But enthusiasts have seen the same story again and again: the progression of technology paves the way for new ways to take advantage of fatter pipes. Perhaps we'll see a surge of applications that make GPU-based computing more mainstream. Or maybe the performance hit experienced when you run out of frame buffer and swap to system memory will be diminished on more mainstream boards. Either way, we look forward to the innovation that PCI Express 3.0 promises to enable from AMD and Nvidia.