
Intel: GPUs Only 14x Faster Than CPUs

Source: Tom's Hardware US | 43 comments

While Nvidia developers report a 100x speed increase from some application kernels using CUDA, Intel sees only 14x.

A recent paper written by Intel and presented at the International Symposium on Computer Architecture in France claims that Nvidia's GeForce GTX 280 GPU is only 14x faster than its Core i7 960 processor. The paper attempts to debunk claims made by Nvidia developers who saw a 100x performance improvement in some application kernels using CUDA compared to running them on a CPU.

But is that any surprise? GPUs like the Nvidia GTX 280 have 240 processing cores--the average CPU only has six cores. However, it's uncertain how Intel came to its "14x" conclusion, as the findings refer to a set of unknown benchmarks--Nvidia even pointed out that they weren't specified in the paper.

"[But] it's actually unclear...what codes were run and how they were compared between the GPU and CPU," said Nvidia spokesperson Andy Keane. "[Still], it wouldn't be the first time the industry has seen Intel using these types of claims with benchmarks."

Playing on the paper's title--Debunking the 100x GPU vs CPU Myth--Keane said that the real myth is that multi-core CPUs are easy for any developer to use and see performance improvements. "In contrast, [our] CUDA parallel computing architecture is a little over 3 years old and already hundreds of consumer, professional and scientific applications are seeing speedups ranging from 10 to 100x using Nvidia GPUs."
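The wide 10x-to-100x spread Nvidia cites is roughly what Amdahl's law would predict: the achievable speedup is capped by the fraction of a kernel that actually runs in parallel. A minimal illustrative sketch (the 240 is the GTX 280's core count mentioned above; the parallel fractions are hypothetical, not figures from either company):

```python
def amdahl_speedup(parallel_fraction: float, n_workers: int) -> float:
    """Upper bound on speedup when only part of a workload parallelizes."""
    serial_fraction = 1.0 - parallel_fraction
    return 1.0 / (serial_fraction + parallel_fraction / n_workers)

# How much of the kernel is parallel dominates the result on 240 cores:
for p in (0.90, 0.99, 0.999):
    print(f"{p:.1%} parallel -> {amdahl_speedup(p, 240):.0f}x")  # ~10x, ~71x, ~194x
```

Even a kernel that is 99% parallel tops out around 70x on 240 cores, so a 14x result and a 100x result can both be honest measurements of different codes.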

Naturally, Intel retaliated, saying that Nvidia had taken one small part of the paper out of context, and even added that GPU kernel performance is often exaggerated.

"General purpose processors such as the Intel Core i7 or the Intel Xeon are the best choice for the vast majority of applications, be they for the client, general or HPC market segments," said an Intel spokesperson. "This is because of the well-known Intel Architecture programming model, mature tools for software development and more robust performance across a wide range of workloads--not just certain application kernels."

To read the full Intel vs. Nvidia dispute, head here.

Top Comments
  • 25
    ivan_chess , June 24, 2010 11:31 PM
    "-the average CPU only has six cores"

    Average? I'm pretty sure most people don't have six cores.
  • 21
    cookoy , June 24, 2010 10:41 PM
    Intel to nVidia: "Yup you're ahead only 14x. That's why we're not licensing x86 codes to you. Or else you'll clobber us really bad!"
  • 16
    joytech22 , June 24, 2010 10:26 PM
    Screw intel, come back when you have a GPU which actually CAN compete in the high-performance graphics market!
    Better yet, come back with a CPU that performs as well as a GPU.
Other Comments
  • -4
    rhino13 , June 24, 2010 10:46 PM
    Just hold still a second nVidia. In a couple generations we'll have caught up with you and then we can have a real throwdown.
  • 6
    aletoil , June 24, 2010 11:14 PM
    Man, why is Intel always player hatin'? You can't turn a hooker (in this case, CUDA) into a housewife (in this case, the possibly borked "benchmarks" that will not be specified) and then tell her she is doing a bad job.
  • 7
    ddkshah , June 24, 2010 11:26 PM
    yup in a couple of generations you will go from 6 cores to 16 cores max, while nvda will go from 512 cores to more than 2048 cores. LOL good luck Intel cuz the gpu's are the future of the computer.
  • 2
    ravewulf , June 24, 2010 11:28 PM
    A CUDA based AES program listed at Nvidia's site is roughly 14x faster than a CPU, but scientific stuff does range up in the hundreds. It really depends on what the algorithm is and how well it was implemented in parallel.

    You can check out Nvidia's listed speed ups at this link and sort by speed up
    http://www.nvidia.com/object/cuda_apps_flash_new.html
  • 11
    matt87_50 , June 24, 2010 11:37 PM
    that's like comparing apples to oranges, there are sooo many different types of apps, some will work fantastic on the GPU, some may work BETTER on the CPU. and another thing... ONLY 14x? are gpus 14x more expensive than the 960? whats that? they are CHEAPER?? I think even at 14x they still represent good value....

    this just seems like intel bitching after they failed to defeat GPUs with larrabee... we already know all this stuff intel. we know coding for CPU is easier, and we know GPU is only suited to some tasks. stop bitching.
  • 2
    applegetsmelaid , June 24, 2010 11:47 PM
    A measly 14x faster - There's no significance in that Nvidia.
  • 5
    sykozis , June 24, 2010 11:48 PM
    ddkshah: yup in a couple of generations you will go from 6 cores to 16 cores max, while nvda will go from 512 cores to more than 2048 cores. LOL good luck Intel cuz the gpu's are the future of the computer.


    nVidia hasn't even managed to get 512 cores working within the PCIe power spec yet... The rate nVidia is going....we'll all need a dedicated 1kW PSU just to run their graphics cards. Intel's most power hungry consumer processor still has a TDP of 130watts....nVidia's most power hungry consumer graphics card has a TDP of what? 300watts???

    Honestly, there's no accurate way to perform benchmark comparisons between GPU and CPU. During any attempt, the CPU is running OS, chipset drivers, graphics driver, etc....the GPU is running what? The benchmark app... The only way to even get close to an accurate comparison would be to set processor affinity on the CPU to give the benchmark a single dedicated core and then do the same on the GPU....but that would skew nVidia's highly inflated results.

    Also...where are these "hundreds" of CUDA based consumer apps that nVidia keeps talking about? I haven't seen any yet....and only a handful of consumer apps contain any features that make even the smallest use of CUDA...
  • 4
    climber , June 24, 2010 11:48 PM
    stingstang: GPUs don't do the same things as CPUs. That's why we have the 2 different hardwares. GPUs handle only graphics and some with physics. CPUs do EVERYTHING else, including sort out the information thrown at them by the GPU. It isn't fair to compare these two. That's like LAN cards and sound cards.

    stingstang,
    GPUs, especially Nvidia's, do more than graphics and some physics applications at this point. There are hundreds of scientific applications that see massive speedups vs. x86 CPUs. Manifold GIS has speedups of nearly 300x processing digital elevation models (think x,y,z raster data, i.e. grids). It's always been the case that not all code can easily be parallelized, nor, when it is, is it necessarily faster. But each passing year, parallel GPU-based computing sees massive improvements in performance, just like GPUs for gaming vs. Intel integrated graphics.
  • 4
    Anonymous , June 24, 2010 11:55 PM
    Intel says it's 14x, NVIDIA 100x.

    I'm guessing the real numbers are somewhere in between the two, considering the natural bias of both companies.
  • -3
    gpace , June 25, 2010 12:02 AM
    Things that require lots of relatively simple processes run well on a GPU, while things that have complex processes run better on a CPU. At least that seems to make sense to me.
  • 0
    eugenester , June 25, 2010 12:07 AM
    Meh, CUDA is a parallel system and our current processors are serial-based.
  • 6
    Anonymous , June 25, 2010 12:15 AM
    If CUDA was so wonderful, we'd be inundated with apps that make computing so much faster, where are they all? It's been around for 3 years and we have what? A broken video encoding app and a tiny bit of support from Adobe and that is about it.
    Where are the CUDA versions of Winrar, of Bluray encoding, of, well, anything useful?
  • 1
    xyzionz , June 25, 2010 12:26 AM
    LOL only 14x..."ONLY" 14X
  • 14
    ta152h , June 25, 2010 12:45 AM
    I read this stuff and wonder if people have any real understanding about what they're posting.

    You can't compare a general purpose CPU with a GPU and say on optimized workloads, the GPU is a lot faster. Duh!

    A GPU is child's play to make compared to a CPU like the Nehalem, or even the AMD stuff. They are much less complex, and just a lot of the same thing over and over again. If you've got a workload that can use that, then it's going to be fast, but most workloads aren't so easily parallelized.

    CPUs are much smarter, and much more useful for most people. They schedule much better, they predict branches much better, they run single threads a Hell of a lot better because of these things.

    If Intel didn't need to worry about thread level parallelism, they could save a ton of transistors and use them for more parallel loads. And you all that discredit them would be whining because the computer was a lot slower because of it. Single threaded performance is still the most important thing for most apps, and this becomes more so as they add cores and give it more multi-threaded power. Some apps can use a really simple setup like a GPU well, but, if they can't, you'd suffer badly with it. Not only because the single threaded performance is so low on a per cycle basis, but because the clock speeds are horrible too.

    They are both better at what they are made for. But, that should be obvious without even having to say it.
  • -5
    captainnemojr , June 25, 2010 1:29 AM
    14X? If every new generation was 2X as fast as the previous one, and took 2 years to come out..that's like 8 years just to catch up!

    2 years = 2X, 4 years = 4X, 6 years = 8X, 8 years = 16X
  • 12
    JonnyDough , June 25, 2010 2:14 AM
    GPUs are only parallel processing. CPUs have far more complex instruction sets, capable of handling numerous types of codes. The reason CPUs can't run as quickly as GPUs is because they're much more complex.

    Its like expecting a multi-purpose Swiss Army Knife with a saw blade to cut wood as quickly as a chainsaw. It might be 14x slower, but it can also do a lot of things a chainsaw can't do. I find the toothpick tool especially handy.
  • 2
    marraco , June 25, 2010 3:03 AM
    Nvidia: We charge 500$ for a GPU capable of outperform CPU 100X in scientific applications, and the user can connect 3 or 4 GPU on a desktop computer.
    Intel: We charge 1000$ for a CPU that is ONLY 14X slower than a GPU, and you pay extra for a 2 socket motherboard, and a windows license for each CPU.
    apple: We charge 2000$ for the same CPU. You can connect only one GPU to it, without CUDA, and it works slower than under windows. ¡But it's a Ferrari!!!! ¡For scientific applications! Proof, look at our magazines photos!!!