We are entering the age of the desktop supercomputer. We have access to massively parallel graphics processors, along with power supplies and motherboards that can support as many as four cards at once. Nvidia’s CUDA technology is transforming the graphics card into a tool for programmers working not only on games but on science and engineering problems. The programming interface has already played an instrumental role in solutions for fields as diverse as medical imaging, mathematics, and oil and gas exploration.
I asked OpenGL programmer Terry Welsh, from Really Slick Screensavers, for his thoughts on PCI Express 3.0 and GPU processing. Terry told me “PCI Express was a great boost, and I'm happy with them doubling the bandwidth anytime they want, as with 3.0. However, for the types of projects I work on, I don't expect to see any difference from it. I do a lot of flight-sim stuff at work, but that's mostly bound by memory and disk I/O; the graphics bus isn't a bottleneck at all. I can easily see [PCI Express 3.0] being a big boost, though, for GPU compute applications, and people doing scientific viz on large datasets.”
Doubling the transfer rate between host and graphics card is sure to benefit mathematics-intensive workloads, and with them both CUDA and Fusion development. This is one of the most promising areas for the upcoming PCI Express 3.0 interface.
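Where does that near-doubling come from? PCI Express 3.0 raises the signaling rate from 5 GT/s to 8 GT/s per lane, but it also swaps the older 8b/10b line encoding (20% overhead) for 128b/130b (about 1.5% overhead), so effective bandwidth roughly doubles even though the raw rate does not. The arithmetic can be sketched as follows; the function name here is just for illustration:

```python
def lane_bandwidth_mbps(transfer_rate_gts, payload_bits, line_bits):
    """Effective per-lane bandwidth in MB/s.

    transfer_rate_gts: raw signaling rate in gigatransfers per second
    payload_bits / line_bits: the line-encoding efficiency
    (8/10 for PCIe 1.x/2.0, 128/130 for PCIe 3.0).
    """
    # GT/s * efficiency gives Gb/s of payload; divide by 8 bits per byte,
    # times 1000 to express the result in MB/s.
    return transfer_rate_gts * (payload_bits / line_bits) * 1000 / 8

pcie2 = lane_bandwidth_mbps(5, 8, 10)      # 500 MB/s per lane
pcie3 = lane_bandwidth_mbps(8, 128, 130)   # ~985 MB/s per lane

# A x16 graphics slot multiplies the per-lane figure by sixteen:
print(f"PCIe 2.0 x16: {16 * pcie2 / 1000:.1f} GB/s")   # 8.0 GB/s
print(f"PCIe 3.0 x16: {16 * pcie3 / 1000:.2f} GB/s")   # ~15.75 GB/s
print(f"speedup: {pcie3 / pcie2:.2f}x")                # ~1.97x
```

For a GPU compute application that streams large datasets across the bus each frame, that is nearly twice the data per second before the card even begins its work.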