Nvidia reportedly boosts Vera Rubin performance to ward hyperscalers off AMD Instinct AI accelerators — increased boost clocks and memory bandwidth push power demand up by 500 watts to 2,300 watts

(Image credit: Nvidia/YouTube)

Recently, Nvidia announced that it had initiated 'full production' of its Vera Rubin platform for AI datacenters, reassuring partners that it is on track to launch later this year, ahead of rivals such as AMD. However, in addition to possibly bringing the release forward, Nvidia is also reportedly revamping the Rubin GPU's specifications to offer higher performance: reports suggest a TDP increase to 2.3 kW per GPU and memory bandwidth of 22.2 TB/s.

The Rubin GPU's power rating has now been locked in at 2.3 kW, up from the 1.8 kW originally announced by Nvidia, but down from the 2.5 kW expected by some market observers, according to KeyBanc (via @Jukan05). The increase from 1.8 kW reportedly stems from the desire to ensure that this year's Rubin-based platforms are markedly faster than AMD's Instinct MI455X, which is projected to operate at around 1.7 kW. The information about the power budget increase for Rubin comes from an unofficial source, but it is indirectly corroborated by SemiAnalysis, which claims that Nvidia has increased the data transfer rates of the HBM4 stacks, so that each Rubin GPU now boasts a memory bandwidth of 22.2 TB/s, up from 13 TB/s. We have reached out to Nvidia to verify these claims.
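As a rough back-of-the-envelope illustration of what these reported figures imply, the short Python sketch below computes the per-GPU power and bandwidth deltas cited above and scales the power delta to a hypothetical 72-GPU rack; the rack size and the GPU-only scope of the calculation are assumptions for illustration, not figures from the report.

```python
# Back-of-the-envelope sketch of the reported Rubin spec changes.
# Per-GPU figures are taken from the report above; the 72-GPU rack
# size is a hypothetical assumption used purely for illustration.

ORIGINAL_TDP_KW = 1.8    # per-GPU power as originally announced
REVISED_TDP_KW = 2.3     # per-GPU power as reportedly revised
ORIGINAL_BW_TBS = 13.0   # per-GPU HBM4 bandwidth, original figure
REVISED_BW_TBS = 22.2    # per-GPU HBM4 bandwidth, reported revision

GPUS_PER_RACK = 72       # assumption, not from the report

power_delta_w = (REVISED_TDP_KW - ORIGINAL_TDP_KW) * 1000
bw_gain_pct = (REVISED_BW_TBS / ORIGINAL_BW_TBS - 1) * 100
rack_delta_kw = (REVISED_TDP_KW - ORIGINAL_TDP_KW) * GPUS_PER_RACK

print(f"Per-GPU power increase: {power_delta_w:.0f} W")    # 500 W
print(f"Memory bandwidth gain:  {bw_gain_pct:.0f}%")       # ~71%
print(f"Extra GPU power per {GPUS_PER_RACK}-GPU rack: {rack_delta_kw:.0f} kW")  # 36 kW
```

Scaled this way, the reported 500 W per-GPU bump adds tens of kilowatts of GPU power per rack before accounting for cooling overhead, which is the trade-off the reader comments below are reacting to.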


Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • DS426
    A 500W difference per GPU is a difference in cooling, especially at the macro level (datacenter cooling capacity). While value appears to go up (potentially buying fewer GPUs or getting higher peak performance), the operational cost and lower efficiency aren't a good trade-off for some clients.

    Hopefully AMD maintains the line as there's no point in getting into a power pi**ing match with nVidia.
  • blitzkrieg316
    I don't want to hear another word from anyone involved about insufficient power grids and skyrocketing prices
  • Cooe
    ... Nobody is going to want this with the GARGANTUAN increase in power and cooling demands. Power and cooling are WAAAAAY more expensive for data centers/hyperscalers than the actual servers & GPUs, as they are continual expenses whereas the GPUs themselves are a one-time expense. The cost of operation with these will be absolute freaking garbage compared to both Nvidia's prior offerings and AMD Instinct. 🤷