PCI Express 2.0 provides several advantages to hardware manufacturers that are difficult to capture fully in a technical analysis like this one. Power requirements can be controlled in software by adding or removing PCI Express lanes and by adjusting the link speed. In addition, higher graphics card power requirements can now be satisfied. At the same time, PCIe 2.0 is completely compatible with prior hardware revisions, and it does not increase costs for consumers, as the transition happens smoothly between one graphics chip generation and the next. From this standpoint, we can clearly recommend PCIe 2.0 to anyone, because it has no real disadvantages.
But is PCIe 2.0 really necessary yet? As long as a graphics solution can operate on data stored within its local video frame buffer, both the reasonably mainstream Radeon HD 3850 and the hardcore GeForce 9800 GX2 will run close to their maximum performance, even if the PCI Express link width is limited to x8 or x4. Once larger textures need to be streamed over the bus, as is the case in Crysis or Microsoft’s Flight Simulator X, interface bandwidth becomes a crucial element. Any link width below x16 will noticeably limit these games’ playability.
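To put those link widths into perspective, here is a rough sketch of the bandwidth arithmetic, using nominal per-lane rates (2.5 GT/s for PCIe 1.1, 5 GT/s for PCIe 2.0, both with 8b/10b encoding). These are theoretical peaks; real-world throughput is lower due to protocol overhead.

```python
# Approximate peak bandwidth per direction for a PCIe link.
# PCIe 1.1: 2.5 GT/s per lane; PCIe 2.0: 5.0 GT/s per lane.
# Both generations use 8b/10b encoding, so only 80% of the raw
# signaling rate carries usable data.

def pcie_bandwidth_gbps(generation: float, lanes: int) -> float:
    """Peak data bandwidth in GB/s for one direction of a PCIe link."""
    raw_gt_per_s = {1.1: 2.5, 2.0: 5.0}[generation]
    data_gbits_per_lane = raw_gt_per_s * 0.8   # 8b/10b encoding overhead
    return data_gbits_per_lane * lanes / 8     # bits -> bytes

for lanes in (16, 8, 4):
    print(f"x{lanes}: PCIe 1.1 = {pcie_bandwidth_gbps(1.1, lanes):.1f} GB/s, "
          f"PCIe 2.0 = {pcie_bandwidth_gbps(2.0, lanes):.1f} GB/s")
```

Note that a PCIe 2.0 x8 link (4 GB/s) matches a full PCIe 1.1 x16 link, which is why narrower 2.0 links hold up better than their 1.1 counterparts.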
The answer, thus, has to be "yes": for sophisticated 3D applications you want maximum bandwidth, and that means PCI Express 2.0. Benchmarks such as Futuremark’s 3DMark06, PCMark Vantage, Prey or Quake provide proof from the other end of the spectrum, though: they can fit all of their graphics data into the 512 MB (Radeon HD 3850) or 2x 512 MB (GeForce 9800 GX2) frame buffers.