Intel: GPUs Only 14x Faster Than CPUs

A recent paper written by Intel and presented at the International Symposium on Computer Architecture in France claims that Nvidia's GeForce GTX 280 GPU is only 14x faster than Intel's Core i7 960 processor. The paper attempts to debunk claims made by Nvidia developers who saw 100x performance improvements in some application kernels using CUDA, compared to running them on a CPU.

But is that any surprise? GPUs like the Nvidia GTX 280 have 240 processing cores--the average CPU only has six cores. However, it's unclear how Intel arrived at its "14x" conclusion, as the findings rest on a set of benchmarks that--as Nvidia pointed out--were never specified in the paper.

"[But] it's actually unclear...what codes were run and how they were compared between the GPU and CPU," said Nvidia spokesperson Andy Keane. "[Still], it wouldn't be the first time the industry has seen Intel using these types of claims with benchmarks."

Playing on the paper's title--Debunking the 100x GPU vs CPU Myth--Keane said that the real myth is that multi-core CPUs are easy for any developer to use and see performance improvements. "In contrast, [our] CUDA parallel computing architecture is a little over 3 years old and already hundreds of consumer, professional and scientific applications are seeing speedups ranging from 10 to 100x using Nvidia GPUs."

Naturally, Intel fired back, saying that Nvidia had taken one small part of the paper out of context, and added that GPU kernel performance is often exaggerated.

"General purpose processors such as the Intel Core i7 or the Intel Xeon are the best choice for the vast majority of applications, be they for the client, general or HPC market segments," said an Intel spokesperson. "This is because of the well-known Intel Architecture programming model, mature tools for software development and more robust performance across a wide range of workloads--not just certain application kernels."

To read the full Intel vs. Nvidia dispute, head here.

  • joytech22
    Screw intel, come back when you have a GPU which actually CAN compete in the high-performance graphics market!
    Better yet, come back with a CPU that performs as well as a GPU.
  • cookoy
    Intel to nVidia: "Yup you're ahead only 14x. That's why we're not licensing x86 codes to you. Or else you'll clobber us really bad!"
  • rhino13
    Just hold still a second nVidia. In a couple generations we'll have caught up with you and then we can have a real throwdown.
  • aletoil
    Man, why is Intel always player hatin'? You can't turn a hooker (in this case, CUDA) into a housewife (in this case, the possibly borked "benchmarks" that will not be specified) and then tell her she is doing a bad job.
  • ddkshah
    yup in a couple of generations you will go from 6 cores to 16 cores max, while nvda will go from 512 cores to more than 2048 cores. LOL good luck Intel cuz the gpu's are the future of the computer.
  • ravewulf
    A CUDA based AES program listed at Nvidia's site is roughly 14x faster than a CPU, but scientific stuff does range up in the hundreds. It really depends on what the algorithm is and how well it was implemented in parallel.

    You can check out Nvidia's listed speed ups at this link and sort by speed up
  • ivan_chess
    "-the average CPU only has six cores"

    Average? I'm pretty sure most people don't have six cores.
  • matt87_50
    that's like comparing apples to oranges, there are sooo many different types of apps, some will work fantastic on the GPU, some may work BETTER on the CPU. and another thing... ONLY 14x? are gpus 14x more expensive than the 960? whats that? they are CHEAPER?? I think even at 14x they still represent good value....

    this just seems like intel bitching after they failed to defeat GPUs with larrabee... we already know all this stuff intel. we know coding for CPU is easier, and we know GPU is only suited to some tasks. stop bitching.
  • applegetsmelaid
    A measly 14x faster--there's no significance in that, Nvidia.
  • sykozis
    ddkshah: "yup in a couple of generations you will go from 6 cores to 16 cores max, while nvda will go from 512 cores to more than 2048 cores. LOL good luck Intel cuz the gpu's are the future of the computer."
    nVidia hasn't even managed to get 512 cores working within the PCIe power spec yet... The rate nVidia is going....we'll all need a dedicated 1kW PSU just to run their graphics cards. Intel's most power hungry consumer processor still has a TDP of 130 watts....nVidia's most power hungry consumer graphics card has a TDP of what? 300 watts??? Honestly, there's no accurate way to perform benchmark comparisons between GPU and CPU. During any attempt, the CPU is running the OS, chipset drivers, graphics driver, etc....the GPU is running what? The benchmark app... The only way to even get close to an accurate comparison would be to set processor affinity on the CPU to give the benchmark a single dedicated core and then do the same on the GPU....but that would skew nVidia's highly inflated results. Also...where are these "hundreds" of CUDA based consumer apps that nVidia keeps talking about? I haven't seen any yet....and only a handful of consumer apps contain any features that make even the smallest use of CUDA...