
Nvidia Announces CUDA 4.1 with LLVM Compiler

Source: Nvidia | 16 comments

Nvidia has just released the CUDA 4.1 Toolkit, which for the first time integrates a compiler built on LLVM, the open-source Low Level Virtual Machine compiler infrastructure.
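
The switch is meant to be transparent to developers: existing kernels simply recompile under the new LLVM-based nvcc front end. As a minimal sketch (standard CUDA C, not code from Nvidia's release notes), a kernel like this builds unchanged with the 4.1 toolchain:

```cuda
// saxpy.cu -- minimal CUDA C example; builds with e.g. `nvcc -arch=sm_20 saxpy.cu`.
#include <cstdio>

// y = a*x + y, one element per thread
__global__ void saxpy(int n, float a, const float *x, float *y)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = a * x[i] + y[i];
}

int main()
{
    const int n = 1 << 20;
    float *x, *y;
    cudaMalloc(&x, n * sizeof(float));
    cudaMalloc(&y, n * sizeof(float));
    cudaMemset(x, 0, n * sizeof(float));
    cudaMemset(y, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover n elements.
    saxpy<<<(n + 255) / 256, 256>>>(n, 2.0f, x, y);
    cudaDeviceSynchronize();

    cudaFree(x);
    cudaFree(y);
    return 0;
}
```

No source changes are needed to pick up the new compiler's optimizations.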

According to Nvidia, CUDA-based applications will gain roughly 10 percent in performance as a result.

CUDA 4.1 also adds more than 1,000 new imaging and signal processing functions to the NVIDIA Performance Primitives (NPP) library, bringing it to more than 3,200 functions in total. Nvidia claims that NPP delivers 40 percent greater performance than Intel's IPP.
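
For a sense of what NPP looks like in use, here is a small sketch (the function names are from the NPP image-processing API; exact signatures and link flags for the 4.1 release should be verified against Nvidia's NPP documentation):

```cuda
// addc.cu -- tiny NPP example: fill an 8-bit image and add a constant
// to every pixel on the GPU. Link against NPP, e.g. `nvcc addc.cu -lnpp`.
#include <npp.h>

int main()
{
    // Allocate a 640x480 single-channel 8-bit image in device memory;
    // nppiMalloc_8u_C1 pads each row and reports the pitch in `step`.
    NppiSize roi = { 640, 480 };
    int step = 0;
    Npp8u *img = nppiMalloc_8u_C1(roi.width, roi.height, &step);

    // Set every pixel to 100, then add 50 in place (saturated 8-bit
    // arithmetic, scale factor 0).
    nppiSet_8u_C1R(100, img, step, roi);
    nppiAddC_8u_C1IRSfs(50, img, step, roi, 0);

    nppiFree(img);
    return 0;
}
```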

The Visual Profiler has been redesigned and now offers an automated expert system that provides step-by-step instructions for fine-tuning CUDA code. Additionally, the new toolkit integrates version 2.1 of Parallel Nsight, a collection of GPU developer tools for Visual Studio.

CUDA 4.1 can be downloaded from Nvidia's website.

Top Comments
  • 17 Hide
    nikorr , January 27, 2012 11:32 PM
    Good for folding : )
  • 10 Hide
    lordstormdragon , January 28, 2012 4:23 AM
    There are many functions and levels where OpenCL may not be the proper method to get the best results. OpenCL still lacks a vast array of functionality for GPU-based 3D rendering, for example. Until it catches up, which may or may not even happen at all, CUDA is the only viable cost-effective solution. Not that there's a lot of GPU rendering going on in the industry yet, but it's expanding for certain as Nvidia's solutions take hold in various pipelines.

    Thus, AMD is sometimes not even an option. Nvidia's own mental ray "iRay" (yes, I also hate the name) is CUDA-based, and there aren't many alternatives in the industry. And in some cases, there really doesn't need to be. AMD makes great cards too, but it would be impossible to recommend one for 3D creative content (Maya, 3DS Max, every CAD application) with any degree of honesty.

    Don't get me wrong, I'm not a brand fanatic. I do tend towards AMD CPUs, but Nvidia GPUs are the cornerstone of any nutritious CG artist.
Other Comments
  • 0 Hide
    bak0n , January 28, 2012 2:23 AM
    Is it just my speakers, or was he in need of some sort of balancer for his voice? It was all over the place.
  • 5 Hide
    blazorthon , January 28, 2012 2:26 AM
    10% performance increase? Nothing to complain about there.
  • 3 Hide
    DjEaZy , January 28, 2012 3:09 AM
    ... GPU acceleration FTW!!!
  • 2 Hide
    kronos_cornelius , January 28, 2012 3:14 AM
    Nvidia make an Eclipse Plugin pleeeaseee!
  • 0 Hide
    enewmen , January 28, 2012 3:23 AM
    I thought the APU/GPU world was going towards OpenCL? Sure, nVidia can use CUDA experience as a competitive advantage. But I don't see any long term gains by heavy investing in CUDA specifically. What am I missing?
  • 1 Hide
    julianbautista87 , January 28, 2012 12:09 PM
    Hi. I'm a bit lost on this topic: aren't these cards mainly supposed to play video games rather than run these GPGPU applications? I bought the HD6850 instead of the GTX 460 768MB because of this (the two cards cost the same where I live); AMD's card doesn't have this feature, but I have never needed it...
  • 1 Hide
    deanjo , January 28, 2012 12:38 PM
    julianbautista87: Hi. I'm a bit lost on this topic: aren't these cards mainly supposed to play video games rather than run these GPGPU applications? I bought the HD6850 instead of the GTX 460 768MB because of this (the two cards cost the same where I live); AMD's card doesn't have this feature, but I have never needed it...


    You are under the misguided assumption that consumer video cards are just for gamers. If that were the case, there would be no need for the wide variety of discrete cards out there (or one could even argue there would be no need for a gaming PC at all, since gaming could all be done on a console).
  • 0 Hide
    madooo12 , January 28, 2012 3:22 PM
    blazorthon: 10% performance increase? Nothing to complain about there.

    and nVidia is complaining about the 30% increases of the GCN architecture
  • 4 Hide
    __-_-_-__ , January 28, 2012 3:55 PM
    CUDA is the only reason I bought an Nvidia card. I use it a lot. Unfortunately, AMD doesn't implement CUDA.
  • -5 Hide
    Onus , January 28, 2012 4:57 PM
    Now they need to make the drivers play nice so CUDA will work if there is also an AMD card in the system.
  • 2 Hide
    wiyosaya , January 28, 2012 5:45 PM
    lordstormdragon: ...CUDA is the only viable cost-effective solution.
    Yes, free is certainly cost effective. :D 

    In my opinion, AMD / ATI shot themselves in the foot with this one when they initially put a price on their GPGPU development package. IMHO, NVidia was the smarter one in making CUDA free when the GPGPU packages were first introduced several years back, and I suspect that this is why so many adopted CUDA.

    NVidia realized that they would sell more GPUs if CUDA was free, and selling hardware is what the "game" is all about. M$ has been doing this sort of thing for years by giving away educational and other versions of various stuff like Visual Studio and Word; it has been very successful for them, and seems like it is one of the few things M$ has done that was smart - from a business standpoint.

    The last I heard, though, makes me think that AMD is also now giving away their equivalent, as I believe that their package morphed into OpenCL and is free.
  • 1 Hide
    deanjo , January 28, 2012 7:57 PM
    jtt283: Now they need to make the drivers play nice so CUDA will work if there is also an AMD card in the system.


    The drivers already allow other graphics devices to be present in the system and work fine. You are confusing CUDA with PhysX.
  • 1 Hide
    Ghost26 , January 28, 2012 9:44 PM
    This is the proof that nVIDIA can effectively do something other than GPUs and boards, unlike AMD. This is the proof that nVIDIA is really ahead of its competitor.

    What makes nVIDIA the greatest graphics company is surely the overall quality of its architecture (Fermi 2.0 is not bad at all, and Kepler will be awesome), but also what they build AROUND the GPU, like CUDA, PhysX, and everything that makes their cards GPGPUs. AMD doesn't do this, and this leads to less efficient boards, almost useless for computing. OpenCL can't rival CUDA, since CUDA is so optimized and so efficient. Open source can't compete with this level of engineering at nVIDIA.

    Today, a GPU isn't a simple GPU like yesterday. Today, when you have an nVIDIA board, you have nothing less than a GPGPU and the ability to help science. Just look at all the BOINC projects that run under the CUDA environment compared to AMD with OpenCL!

    AMD should really make something for their boards other than GPU drivers to get the power out of them. Unfortunately, time goes by and they don't realize they have to make the move ... or simply don't have the money for it, which would not surprise me at all.
  • 0 Hide
    lukeiamyourfather , January 30, 2012 8:08 AM
    Wow, lots of uninformed comments and incorrect information on this page. For example "which integrates, for the first time, the company's LLVM (Low Level Virtual Machine) compiler" which is all kinds of wrong.

    http://en.wikipedia.org/wiki/LLVM