Nvidia, Udacity Offer Free Parallel Programming Course

The company said it is partnering with Udacity to offer an introduction to parallel programming, a free class designed to cover roughly one week's worth of material. The course is described as an introduction to the CUDA development model and teaches GPU coding through "a series of image processing algorithms, such as you might find in Photoshop or Instagram."
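For a sense of what such an exercise involves, here is a minimal sketch of a CUDA kernel that converts an RGBA image to grayscale, one thread per pixel. The kernel name, buffer layout, and launch parameters are illustrative assumptions, not taken from the course.

```cuda
// Sketch of a course-style exercise: convert an RGBA image to grayscale,
// one CUDA thread per pixel. Names and layout here are illustrative.
#include <cuda_runtime.h>

__global__ void rgbaToGrayscale(const uchar4* rgba, unsigned char* gray,
                                int width, int height)
{
    int x = blockIdx.x * blockDim.x + threadIdx.x;
    int y = blockIdx.y * blockDim.y + threadIdx.y;
    if (x >= width || y >= height) return;  // guard threads outside the image

    uchar4 p = rgba[y * width + x];
    // Standard luminance weights for an RGB-to-gray conversion.
    gray[y * width + x] =
        static_cast<unsigned char>(0.299f * p.x + 0.587f * p.y + 0.114f * p.z);
}

// Typical launch: a 16x16 block of threads per image tile, e.g.:
//   dim3 block(16, 16);
//   dim3 grid((width + 15) / 16, (height + 15) / 16);
//   rgbaToGrayscale<<<grid, block>>>(d_rgba, d_gray, width, height);
```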

Students can test their code on the company's recently announced cloud GPUs. The class is taught by John Owens, associate professor of electrical and computer engineering at the University of California, Davis, and David Luebke, senior director of graphics research at Nvidia.

Anyone can sign up for the course, so check it out if you're interested.

  • warezme
    What? My cock thinks of GPU pron.
    Reply
  • Estix
    It seems like Nvidia is realizing what Microsoft did a long time ago; you want people to code for your platform, even if it means spending a bit to attract them.
    Reply
  • Old_Fogie_Late_Bloomer
    Coursera is offering a six-week program on this subject that's supposed to start this month...I'm already signed up and I can't wait for it to start. :)

    https://www.coursera.org/course/hetero
    Reply
  • A Bad Day
    Your standard software company's disconnected management:

    "Hm, do we either send some of the programmers away for at least week while still paying them, or do we launch the software by Thanksgiving?

    You know what, F*** parallel computing or whatever it's called, I need something on the deadline!"
    Reply
  • idono
    Signed up just now.
    Reply
  • morstern
    "Hm, do we either send some of the programmers away for at least week while still paying them, or do we launch the software by Thanksgiving?..."

    I think yes, we invest in current and future training for our programmers so that we stay relevant.
    Reply
  • falchard
    Problem is, it's based on CUDA and not OpenCL or DirectCompute. Who cares about CUDA?
    Reply
  • devBunny
    falchard: "Problem is, it's based on CUDA and not OpenCL or DirectCompute. Who cares about CUDA?"

    To a newbie, any experience in parallel programming is invaluable. If the course has enough theory in it, then much of the acquired skill will transfer to OpenCL and DirectCompute. Similarly, the course is based on image processing, so it might be tempting to say "Who cares?" if you want to learn to use GPGPU for, say, simulations, but, again, valuable lessons can be learned.
    Reply
  • CUDA is easier to learn than OpenCL; in fact, it's often recommended to learn CUDA first to get familiar with parallel programming concepts, even if you later want to move on to OpenCL, etc.
    Reply
  • jowens
    As you might imagine, we had a lot of conversations about the right way to teach parallel computing using GPUs. Three things to note, my opinions here only.

    First, I think anyone who's programmed in both would agree that CUDA is simpler, particularly at the outset; setting up a "hello, world" kind of program in CUDA is just a lot easier than with OpenCL (see the sketch after this thread).

    Second, I think once you understand either CUDA or OpenCL, it's a piece of cake to pick up the other one; learning the first one is a lot more challenging than learning the second, so I imagine anyone who takes the course and then wants to pick up OpenCL will have little difficulty.

    Finally, the ecosystem around CUDA is just a lot more developed than around OpenCL, so if students want or need to go grab a parallel sort routine or scan or other primitives, they'll have a much easier time in CUDA, which is important as they start writing more complex programs.

    In my research group we program in both CUDA and OpenCL, and we get significant funding from both Intel and AMD, who are OpenCL-focused; but I think I would be accurate in saying that, strictly from a technical perspective, all my grad students would prefer to code in CUDA if they had to choose.
    Reply
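To make the first point above concrete, here is a minimal but complete CUDA program, a sketch assuming nothing beyond the CUDA runtime: the entire setup is one kernel, two memory copies, and a launch, with none of the platform/device/context/queue boilerplate an equivalent OpenCL program needs.

```cuda
// A complete CUDA "hello, world"-class program: scale an array on the GPU.
// Everything below is illustrative; error checking is omitted for brevity.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void scaleArray(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;  // one thread per element
}

int main()
{
    const int n = 1024;
    float host[n];
    for (int i = 0; i < n; ++i) host[i] = static_cast<float>(i);

    // Allocate device memory and copy the input over.
    float* dev;
    cudaMalloc(&dev, n * sizeof(float));
    cudaMemcpy(dev, host, n * sizeof(float), cudaMemcpyHostToDevice);

    // Launch enough 256-thread blocks to cover all n elements.
    scaleArray<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // Copy the result back and clean up.
    cudaMemcpy(host, dev, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(dev);

    printf("host[10] = %f\n", host[10]);  // expect 20.0
    return 0;
}
```

On the ecosystem point, CUDA's Thrust library is a good example: after including `thrust/device_vector.h` and `thrust/sort.h`, a parallel GPU sort is just `thrust::device_vector<int> v(h.begin(), h.end()); thrust::sort(v.begin(), v.end());`.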