Nvidia already touted its Tesla V100 as the world's most advanced data center graphics card. But now it's kicking things up a notch with the brand new, speedier Tesla V100s detailed today by TechPowerUp.
The Tesla V100s is physically identical to the V100. It adheres to the same dual-slot, PCIe add-in card (AIC) design and employs Nvidia's reference cooler. It's unknown if Nvidia will offer the V100s in the SXM2 board form factor, however.
Like its predecessor, the Tesla V100s is based on Nvidia's current Volta microarchitecture and utilizes the gigantic GV100 silicon. The GV100 die, which comes out of TSMC's 12nm oven, measures 815 mm² and bears 21.1 billion transistors. The V100s continues to sport 5,120 CUDA cores and 640 Tensor cores. Nevertheless, the new V100s packs more performance than both variants of the V100, which is most likely attributable to higher core clock speeds.
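Nvidia hasn't published a boost clock for the V100s, but a rough figure can be back-solved from the rated double-precision throughput. This is a hedged sketch, assuming GV100's 2,560 FP64 units (half of the 5,120 CUDA cores) each retire one fused multiply-add, i.e. two FLOPs, per cycle:

```python
# Back-of-the-envelope estimate only: assumes 2,560 FP64 units on GV100,
# each doing one FMA (2 FLOPs) per clock cycle.
FP64_UNITS = 2_560
FLOPS_PER_UNIT_CYCLE = 2

def implied_boost_mhz(fp64_tflops: float) -> float:
    """Clock speed (MHz) implied by a rated FP64 throughput figure."""
    return fp64_tflops * 1e12 / (FP64_UNITS * FLOPS_PER_UNIT_CYCLE) / 1e6

print(round(implied_boost_mhz(8.2)))  # V100s PCIe: ~1602 MHz
print(round(implied_boost_mhz(7.8)))  # V100 SXM2:  ~1523 MHz
```

The V100 SXM2 result lands close to that card's published 1,530 MHz boost clock, which suggests the V100s runs its GPU roughly 70-80 MHz faster under the same assumptions.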
Nvidia Tesla V100s vs. Tesla V100 Specs
| Spec | Tesla V100s PCIe | Tesla V100 SXM2 | Tesla V100 PCIe |
|---|---|---|---|
| Architecture (GPU) | Volta (GV100) | Volta (GV100) | Volta (GV100) |
| CUDA Cores | 5,120 | 5,120 | 5,120 |
| Tensor Cores | 640 | 640 | 640 |
| Double-Precision Performance | 8.2 TFLOPS | 7.8 TFLOPS | 7 TFLOPS |
| Single-Precision Performance | 16.4 TFLOPS | 15.7 TFLOPS | 14 TFLOPS |
| Tensor Performance | 130 TFLOPS | 125 TFLOPS | 112 TFLOPS |
| Texture Units | 320 | 320 | 320 |
| Interconnect Bandwidth | 32 GBps | 300 GBps | 32 GBps |
| Memory Capacity | 32GB HBM2 | 16GB / 32GB HBM2 | 16GB / 32GB HBM2 |
| Memory Bus | 4,096-bit | 4,096-bit | 4,096-bit |
| Memory Bandwidth | 1,134 GBps | 900 GBps | 900 GBps |
| ROPs | 128 | 128 | 128 |
| L2 Cache | 6MB | 6MB | 6MB |
| TDP | 250W | 300W | 250W |
| Transistor Count | 21.1 billion | 21.1 billion | 21.1 billion |
| Die Size | 815 mm² | 815 mm² | 815 mm² |
The V100s delivers up to 17.1% higher single- and double-precision performance than the V100 in the same PCIe form factor. It also has 16.1% better Tensor performance. Nvidia has clocked the memory on the V100s a bit faster as well.
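The percentage uplifts quoted above can be checked directly against the table's rated throughput figures (comparing the PCIe cards in each case):

```python
# Verify the quoted uplift percentages from the table's rated figures.
def uplift(new: float, old: float) -> float:
    """Percentage improvement of new over old, to one decimal place."""
    return round((new / old - 1) * 100, 1)

print(uplift(16.4, 14))    # single precision: 17.1%
print(uplift(8.2, 7))      # double precision: 17.1%
print(uplift(130, 112))    # Tensor: 16.1%
print(uplift(1134, 900))   # memory bandwidth: 26.0%
```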
In terms of memory, it appears that Nvidia might only offer the V100s with 32GB of HBM2. It's unknown if the chipmaker will sell 16GB variants like it did with the V100. The V100s maintains the 4,096-bit memory interface but provides 26% more memory bandwidth than the V100.
Fortunately, the upgrades on the V100s have no effect on the graphics card's TDP (thermal design power). The V100s is still rated for 250W. As a result, the power requirements remain the same as well: a pair of 8-pin PCIe power connectors.
Nvidia didn't reveal the pricing for the V100s. The current 16GB and 32GB models of the V100 are selling for $5,855 and $7,200, respectively, on Amazon.
Zhiye Liu is a news editor and memory reviewer at Tom’s Hardware. Although he loves everything that’s hardware, he has a soft spot for CPUs, GPUs, and RAM.