Just prior to Nvidia's online GTC keynote, a trademark filing appeared to reference the company's next-generation Ampere-based DGX system, the DGX A100. The keynote confirmed the system's arrival, but full specifications weren't available yet -- we knew about the Ampere A100 GPUs, but CPU details were still missing. Now, Nvidia has given AMD the honor of filling in the last bits of the spec sheet in a formal announcement.
Before you ask: why AMD? Because each system is powered by two 64-core AMD Epyc 7742 processors. Tally that up, and you'll soon realize that a DGX A100 packs a total of 128 cores and a whopping 256 threads, boosting up to 3.4 GHz.
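The tally is simple enough to sketch in a few lines. This is just illustrative arithmetic for a dual-socket Epyc 7742 configuration (64 cores per socket, two threads per core via simultaneous multithreading); the variable names are our own, not anything from Nvidia's spec sheet.

```python
# Dual-socket AMD Epyc 7742 core/thread tally (illustrative).
sockets = 2
cores_per_socket = 64   # Epyc 7742 core count
threads_per_core = 2    # SMT: two hardware threads per core

total_cores = sockets * cores_per_socket        # 2 * 64 = 128
total_threads = total_cores * threads_per_core  # 128 * 2 = 256

print(total_cores, total_threads)  # prints: 128 256
```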
“The NVIDIA DGX A100 delivers a tremendous leap in performance and capabilities,” said Charlie Boyle, VP and GM for DGX systems at NVIDIA. “The 2nd Gen AMD EPYC processors used in DGX A100 provide high performance and support for PCIe Gen4. NVIDIA has put those features to work to create the world’s most powerful AI system while maintaining compatibility with the GPU-optimized software stack used across the entire DGX family.”
The Nvidia DGX A100 packs a total of eight Nvidia A100 GPUs (no longer branded Tesla, to avoid confusion with the automaker). Each GPU measures 826 mm² and packs 54 billion transistors, and all eight are linked through 600 GB/s NVSwitch interconnects. In total, the eight GPUs deliver a jolly 5 petaflops of AI performance.
Nvidia's DGX systems are aimed at scientific and data center use, intended for machine learning and artificial intelligence workloads. Pricing for a DGX A100 system starts at just $199,000.