After a series of untimely delays, the folks behind PCI Express 3.0 believe they've worked out the kinks that kept the next-generation interconnect from achieving backward compatibility with PCIe 2.0. We take a look at the tech to come.
Moore's Law states that the number of transistors that can be placed on a chip doubles roughly every two years. This has often been misinterpreted as a claim that processor speed will double every two years, and the computer-buying public has turned that misinterpretation into an expectation of exponentially scaling PC performance.
However, as you’ve undoubtedly noticed, shipping processors have been stuck between 3 and 4 GHz for about six years now, so the computer industry has had to find other ways to make data move faster. One of the most important has been maintaining balance between platform components using PCI Express, the open standard that connects high-speed graphics cards, expansion cards, and other onboard components. It’s at least arguable that PCI Express is as important to scalable performance as multi-core processors: while dual-, quad-, and hexa-core CPUs can only be exploited by applications optimized for threading, every program installed on your machine touches components attached via PCI Express in some way.
Many industry observers originally expected motherboards and chipsets based on next-generation PCI Express 3.0 to appear in the first quarter of 2010. Unfortunately, problems with backward compatibility delayed the launch of PCI Express 3.0, and as we enter the second half of this year, we’ve been left waiting for official word on the new standard's release.
Finally, following a conference call with PCI-SIG (the Special Interest Group that oversees the PCI and PCI Express standards), we at last have some answers.
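For context on why PCIe 3.0 matters, the new standard roughly doubles per-lane bandwidth without doubling the signaling rate, thanks to a switch from 8b/10b to the far leaner 128b/130b encoding. A quick sketch of the arithmetic (the transfer rates and encoding ratios are standard PCIe spec figures, not taken from this article):

```python
# Rough per-lane PCI Express throughput by generation. PCIe 1.x and 2.0
# use 8b/10b encoding (10 bits on the wire per 8 bits of payload), while
# 3.0 runs at only 8 GT/s but switches to 128b/130b encoding, cutting
# overhead from 20% to about 1.5%.

def lane_bandwidth_mb_s(gigatransfers_per_s, payload_bits, line_bits):
    """Usable bandwidth of one PCIe lane in MB/s."""
    bits_per_s = gigatransfers_per_s * 1e9 * payload_bits / line_bits
    return bits_per_s / 8 / 1e6  # bits -> bytes -> megabytes

generations = {
    "PCIe 1.x": (2.5, 8, 10),     # 2.5 GT/s, 8b/10b
    "PCIe 2.0": (5.0, 8, 10),     # 5.0 GT/s, 8b/10b
    "PCIe 3.0": (8.0, 128, 130),  # 8.0 GT/s, 128b/130b
}

for name, params in generations.items():
    print(f"{name}: {lane_bandwidth_mb_s(*params):.1f} MB/s per lane")
# PCIe 1.x: 250.0 MB/s, PCIe 2.0: 500.0 MB/s, PCIe 3.0: ~984.6 MB/s
```

The takeaway: 8 GT/s plus the new encoding yields nearly 1 GB/s per lane, so a x16 slot approaches 16 GB/s in each direction.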

Can't wait to see PCIe 3.0, native USB3/SATA3, DDR4, quad channel, and faster & cheaper SSDs next year.
In addition, I hate unreasonably priced, buggy HDMI, and would also like to see Ethernet-cable-based monitors (cheap, fast, and exceptional) as soon as possible.
http://www.tomshardware.com/news/ethernet-cable-hdmi-displayport-hdbaset,10784.html
One more tech that I can't wait to see: http://www.tomshardware.com/news/silicon-photonics-laser-light-beams,10961.html
WOW, so much new tech to look forward to next year!
And if you think the Core i3/i5/i7 desktop naming is confusing now, wait till Intel starts releasing all of its Sandy Bridge server chips. It's going to be even worse, I think.
And while we're talking about futures, 32 GB DIMMs will most likely be out for the server market before the end of this year. If 3D stacking and load-reduced DIMMs remain on track, we could see 128 GB on a single DIMM around 2013, which is also when DDR4 is slated to arrive.
It's nice to see backward compatibility and cost as key factors in the decision making, especially considering that devices won't be able to saturate the new link for many years to come.
Even the graphics cards are getting bigger!
I believe he meant graphics-card size relative to performance.
Still, the largest cards today are a bit too large, aren't they?
I HATE YOU TECHNOLOGY!
lol
And thanks to NVidia, hotter.
A series of unfortunate events? That sounds familiar...
Perhaps you could have explained why CUDA would benefit from this, or what types of apps that use it would. Fusion makes no sense to me here, since its GPU and CPU will be on the same die rather than connected over PCI Express. Maybe you could explain why these things stand to benefit.
Also, according to the visual, latency will be lowered. Bandwidth is essentially irrelevant in many situations, since it's only rarely fully used, but lower latency could make itself felt in virtually anything.
You also could have covered the extra power this extra speed will consume. It almost certainly will draw more, all other things being equal, and that's a huge consideration. If I have to add, say, 15 watts to my motherboard, is it worth it for a technology that might not be relevant in many situations in the near term? If it's one or two watts, it's a no-brainer; but if it's a lot higher (which I suspect it might be), people need to ask whether they really need this technology, or whether it's better to wait until their next purchase, when it might have more value.
With a bit of help from ATi of course.