HPC Council Offers Free Time on Nvidia Tesla GPUs

The GPUs were donated by Nvidia, a member of the HPC Advisory Council, and the program is designed to help researchers and developers benchmark their software and optimize their code to run on Tesla GPUs. The goal appears to be helping developers finalize their software and confirm that it performs exactly as advertised. The HPC Council told us that the cluster will have 16 GPUs, four of which are available at this time. How much GPU time a user is allocated depends on that user's specific needs.

"Researchers need an easy way to benchmark their models on the growing number of GPU-accelerated applications before making a buying decision," said Sumit Gupta, director of Tesla business at Nvidia. "The new Center provides a valuable resource to help developers optimize their codes for GPUs, and ensure that applications will perform precisely as advertised."

Gilad Shainer, chairman of the HPC Advisory Council, noted that HPC systems have been donated by other member companies in the past, including AMD and Intel.

  • d_kuhn
    This is a great idea... I purchased a Supermicro GPU server with four 2090s last year to do precisely this evaluation. Having the ability to run evaluation code online (assuming their implementation doesn't add overhead) would give you a great way to spin up one of these GPU supercomputers without needing to invest the several tens of thousands of dollars required to buy a box.

    The folks at Supermicro and other OEMs who've struggled through Nvidia validation for the 2090 (which has no onboard cooling, so it requires a very dedicated system design) may not be particularly happy about it, though.
  • wiyosaya
    Buzz at BOINC stats is that some consumer cards, e.g. those in the 400 and 500 series, outperform some of the Teslas. Teslas sound like Nvidia's marketing baby that may or may not perform any better than "consumer" GPUs.

    My bet is that this is another Nvidia marketing thrust. IMHO, testing code like this might just as easily be accomplished on a consumer GPU - or, with this service, one could conceivably test on a consumer GPU in house and on a Tesla-based HPC system at the same time to determine whether there is an advantage to a Tesla setup.
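
As a rough illustration of the in-house comparison wiyosaya describes, the sketch below simply enumerates the CUDA devices in a machine and prints the properties that typically separate a Tesla part from a consumer GeForce (compute capability, memory size, ECC). It is a hypothetical example, not something provided by Nvidia or the HPC Advisory Council; compile it with nvcc.

    // Minimal sketch (hypothetical): list CUDA devices and print the properties
    // that typically distinguish a Tesla from a consumer card.
    #include <cstdio>
    #include <cuda_runtime.h>

    int main() {
        int count = 0;
        cudaGetDeviceCount(&count);
        for (int i = 0; i < count; ++i) {
            cudaDeviceProp p;
            cudaGetDeviceProperties(&p, i);
            printf("Device %d: %s\n", i, p.name);
            printf("  Compute capability : %d.%d\n", p.major, p.minor);
            printf("  Multiprocessors    : %d\n", p.multiProcessorCount);
            printf("  Global memory      : %zu MiB\n", (size_t)(p.totalGlobalMem >> 20));
            printf("  ECC enabled        : %s\n", p.ECCEnabled ? "yes" : "no");
        }
        return 0;
    }

ECC is one of the clearest visible differences: it is reported (and can be toggled) on Tesla-class boards but is absent on the consumer parts discussed above.
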
  • trandoanhung1991
    Teslas have a much higher double-precision (DP) rate compared to regular cards. In addition, they're much more stable, heavily tested, and certified for continuous (24/7/365) operation.

    Otherwise, they're just consumer cards, really. They both use the same chips, after all.
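
The double-precision gap trandoanhung1991 mentions is the easiest claim to check empirically. As a hedged sketch (not from the article, with kernel, grid size, and iteration count chosen purely for illustration), the toy benchmark below times the same dependent FMA loop in float and in double on whatever GPU is installed:

    // Hypothetical micro-benchmark: time a dependent FMA loop in float and in
    // double to get a rough feel for a GPU's SP vs. DP arithmetic rate.
    #include <cstdio>
    #include <cuda_runtime.h>

    template <typename T>
    __global__ void fma_loop(T *out, int iters) {
        T a = (T)1.0001, b = (T)0.9999, c = (T)0.5;
        for (int i = 0; i < iters; ++i)
            c = a * c + b;                                  // dependent multiply-add chain
        out[blockIdx.x * blockDim.x + threadIdx.x] = c;     // keep the result live
    }

    template <typename T>
    float time_ms(int iters) {
        const int blocks = 256, threads = 256;
        T *d_out;
        cudaMalloc(&d_out, blocks * threads * sizeof(T));
        cudaEvent_t start, stop;
        cudaEventCreate(&start);
        cudaEventCreate(&stop);
        cudaEventRecord(start);
        fma_loop<T><<<blocks, threads>>>(d_out, iters);
        cudaEventRecord(stop);
        cudaEventSynchronize(stop);
        float ms = 0.0f;
        cudaEventElapsedTime(&ms, start, stop);
        cudaFree(d_out);
        return ms;
    }

    int main() {
        const int iters = 1 << 20;
        printf("float  loop: %.1f ms\n", time_ms<float>(iters));
        printf("double loop: %.1f ms\n", time_ms<double>(iters));
        return 0;
    }

On a Fermi-generation Tesla such as the M2090, where double precision runs at roughly half the single-precision rate, the double timing should land near twice the float timing; on a DP-capped consumer card of the same era the ratio will be considerably larger, which is the difference the comment is pointing at.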