Nvidia Tesla product manager Sumit Gupta says the company's GPUs will speed up Windows 7 and Apple's "Snow Leopard" because they're cool like that.
Last week we reported that Nvidia released its OpenCL driver and software development kit (SDK) to those enrolled in the OpenCL Early Access Program. The company released the driver early to gather feedback before distributing the beta. Nvidia said that the new driver would run on the CUDA architecture, enabling it to take advantage of the GPU's parallel computing capabilities. However, Sumit Gupta, product manager for Nvidia's Tesla products, went into more detail in an interview Friday, explaining how Nvidia GPUs will accelerate software under Windows 7 and Apple's OS X Snow Leopard.
"The really interesting thing about OpenCL and DirectX is that OpenCL is going to form part of the Apple operating system (Snow Leopard) and DirectX (version 11) will form part of Windows 7," Gupta told CNET. "And what that essentially means to consumers is, if your laptop has an Nvidia GPU or ATI (AMD) GPU, it will run the operating system faster because the operating system will essentially see two processors in the system. For the first time, the operating system is going to see the GPU both as a graphics chip and as a compute engine," he said. Additionally, consumers using Windows 7 will see the GPU as a CPU in Task Manager.
But aren't GPUs meant for rendering graphics? Primarily, yes, but in recent years improved hardware has let them take on a new responsibility: helping with general-purpose computing tasks usually handled by the system's main processor. Think of the GPU as a "helper" that lends its many parallel processing units to compute a portion of a task alongside the CPU, so the two work in concert rather than as separate entities. This parallel processing should speed up both operating systems, but the benefit isn't a holy grail provided by Nvidia alone: ATI GPUs offer a general-purpose computing environment as well.
According to AMD, its ATI Stream technology is a set of advanced hardware and software technologies that enable AMD graphics processors to work in parallel with the system's CPU to accelerate many applications beyond just graphics; Nvidia's CUDA works in the same manner. In addition, Nvidia's CUDA is compatible with many computational interfaces, including PhysX, Java, Python, OpenCL, and DirectX Compute, while ATI's Stream is compatible with DirectX, Havok's physics middleware, and OpenCL.
So the question is this: if GPUs are taking on general processing duties (in addition to graphics processing), are CPUs on their way out? No. "If you're running an unpredictable task, the CPU is the jack of all trades," Gupta said. "It is really good at these unpredictable tasks. The GPU is a master of one task. And that is a highly parallel task."
He went on to describe a scenario of how the CPU and GPU would work together. When a consumer launches Google Picasa, the program runs entirely on the CPU. But once the consumer loads an image and applies a filter, that filter computation would run on the GPU. "The CPU is one aspect but not necessarily the most important aspect anymore," he said.
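Gupta's Picasa scenario maps naturally onto a data-parallel workload. As a rough sketch (in plain Python rather than OpenCL C, with a made-up brightness filter standing in for Picasa's effects), the per-pixel loop below is exactly the kind of computation that OpenCL or CUDA would fan out across thousands of GPU threads, one work-item per pixel:

```python
# Sketch: a per-pixel brightness filter. On a CPU this runs as a loop;
# under OpenCL or CUDA the loop body would become a kernel, executed
# once per pixel by thousands of GPU work-items in parallel.

def brighten(pixels, factor):
    """Scale each 0-255 pixel value by `factor`, clamping at 255."""
    return [min(255, int(p * factor)) for p in pixels]

image = [10, 100, 200, 250]       # toy 4-pixel grayscale "image"
print(brighten(image, 1.5))       # -> [15, 150, 255, 255]
```

Because each pixel is computed independently of every other pixel, the work divides cleanly across parallel hardware; that independence, not raw speed, is what makes filters a "highly parallel task" in Gupta's sense.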
The bad news for AMD and Nvidia is that Intel is taking notice of the general computing environment, and plans to release a graphics chip that will handle parallel computing as well. "Since the graphics pipeline is becoming more and more programmable, the graphics workload is making its way to be more and more suited to general purpose computing--something the Intel Architecture excels at and Larrabee will feature," an Intel spokesperson told CNET.
There's quite a lot to look forward to with the release of Windows 7 and Apple's OS X Snow Leopard, especially if parallel computing does indeed speed up applications as promised.