Nvidia to boost AI server racks to megawatt scale, increasing power delivery by five times or more

Nvidia DGX (Image credit: Nvidia)

Nvidia is developing a new power infrastructure, called the 800V HVDC architecture, to meet the power requirements of server racks of 1 MW and beyond, with plans to deploy it by 2027. According to Nvidia, the current 54V DC power distribution system is already reaching its limit as racks begin to exceed 200 kilowatts. As AI chips grow more powerful and draw more electricity, the existing approach can no longer practically keep up, forcing data centers to build new solutions to keep their electrical circuits from being overwhelmed.
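
The pressure on 54V distribution follows directly from Ohm's law: current scales as I = P/V, so megawatt racks at 54V imply enormous currents. Here is a minimal back-of-the-envelope sketch of that arithmetic; the rack power figures come from the article, and everything else is illustrative:

```python
# Back-of-the-envelope sketch (illustrative, not Nvidia's figures):
# the current a distribution bus must carry at a given rack power and voltage.
def rack_current_amps(power_watts: float, voltage_volts: float) -> float:
    """I = P / V for a DC distribution bus."""
    return power_watts / voltage_volts

for power in (200e3, 1e6):  # 200 kW (today's practical limit) and 1 MW racks
    for voltage in (54, 800):
        amps = rack_current_amps(power, voltage)
        print(f"{power/1e3:>6.0f} kW @ {voltage:>3} V -> {amps:>8,.0f} A")

# A 1 MW rack at 54 V needs ~18,500 A; at 800 V the same rack draws ~1,250 A.
```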

For example, Nvidia says that its GB200 NVL72 and GB300 NVL72 racks each need around eight power shelves. Scaling a rack to 1 MW on 54V DC distribution would require enough power shelves to consume 64U of rack space, more than the average server rack can accommodate. Nvidia also says that delivering 1 MW at 54V DC requires a 200 kg copper busbar, meaning a gigawatt AI data center, which many companies are now racing to build, would need 500,000 tons of copper. That is nearly half of the U.S.'s total copper output in 2024, and it's just for one site.
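
To see where a busbar figure of that order comes from, one can estimate the copper cross-section needed to carry the 54V current at a typical busbar current density, then multiply by run length and copper's density. The current-density and run-length values below are illustrative assumptions, not Nvidia's numbers:

```python
# Rough estimate of busbar copper mass for a 1 MW rack on a 54 V bus.
# Assumed values (illustrative): 1 A/mm^2 busbar current density, 2 m run.
COPPER_DENSITY_KG_PER_M3 = 8960

def busbar_mass_kg(power_w, voltage_v, current_density_a_mm2=1.0, length_m=2.0):
    current_a = power_w / voltage_v                 # I = P / V
    area_mm2 = current_a / current_density_a_mm2    # cross-section to carry I
    volume_m3 = (area_mm2 * 1e-6) * length_m        # mm^2 -> m^2
    return volume_m3 * COPPER_DENSITY_KG_PER_M3

print(f"54 V bus:  {busbar_mass_kg(1e6, 54):6.0f} kg of copper")
print(f"800 V bus: {busbar_mass_kg(1e6, 800):6.0f} kg of copper")
# ~330 kg vs ~22 kg under these assumptions -- the same order of magnitude
# as the 200 kg figure Nvidia cites for 54 V.
```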

So, instead of the 54V DC system, which converts power directly at the server cabinet, Nvidia proposes 800V HVDC distribution that connects near the site's 13.8kV AC power source. Aside from freeing up space in the server racks, this streamlines the approach and makes power transmission within the data center more efficient. It also removes the multiple AC-to-DC and DC-to-DC conversions the current system relies on, which add complexity.
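
One way to see why removing conversion stages matters: end-to-end efficiency is the product of each stage's efficiency, so every extra AC-DC or DC-DC hop compounds the loss. The stage lists and efficiency numbers below are illustrative assumptions for comparison, not Nvidia's published conversion chain:

```python
from math import prod

# Hypothetical conversion chains (illustrative stage efficiencies, not Nvidia data).
legacy_54v_chain = {            # several hops from grid AC down to the chip
    "13.8kV AC -> 480V AC": 0.985,
    "480V AC -> 54V DC (rack PSU)": 0.96,
    "54V DC -> 12V DC": 0.975,
    "12V DC -> ~1V (chip VRM)": 0.92,
}
hvdc_800v_chain = {             # one rectification near the source, then DC throughout
    "13.8kV AC -> 800V DC": 0.98,
    "800V DC -> 12V DC": 0.975,
    "12V DC -> ~1V (chip VRM)": 0.92,
}

for name, chain in (("54V legacy", legacy_54v_chain), ("800V HVDC", hvdc_800v_chain)):
    eff = prod(chain.values())
    print(f"{name}: {eff:.1%} end-to-end ({len(chain)} stages)")
# Fewer stages -> each percentage point of per-stage loss is paid fewer times.
```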

The 800V HVDC will also reduce the system current for the same power load, potentially increasing the total power delivered by up to 85% without upgrading the conductors. “With lower current, thinner conductors can handle the same load, reducing copper requirements by 45%,” said the company. “Additionally, DC systems eliminate AC-specific inefficiencies, such as skin effect and reactive power losses, further improving efficiency.”
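
The current reduction follows from the same I = P/V relationship: moving from 54V to 800V cuts current by roughly 15x for the same load, and since resistive loss goes as I²R, losses in a fixed conductor fall dramatically. A short sketch; the conductor resistance is an arbitrary illustrative value, not a real spec:

```python
# Compare current and resistive (I^2 * R) loss at 54 V vs 800 V for the same load.
POWER_W = 1e6          # 1 MW rack (from the article)
RESISTANCE_OHM = 1e-4  # illustrative fixed conductor resistance, not a real spec

for voltage in (54, 800):
    current = POWER_W / voltage
    loss_w = current**2 * RESISTANCE_OHM
    print(f"{voltage:>3} V: {current:>8,.0f} A, {loss_w/1e3:>6.1f} kW lost in conductor")

# Current ratio 800/54 ~= 14.8x, so I^2*R loss in the same conductor drops
# by ~220x; equivalently, the same current limit carries ~14.8x the power.
```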

According to Digitimes [machine translated], the AI giant is working with Infineon, Texas Instruments, and Navitas to develop the system. The partners are expected to deploy wide-bandgap semiconductors like gallium nitride (GaN) and silicon carbide (SiC) to achieve the high power densities these AI systems need. Moving to 800V HVDC is a technical challenge that data centers must solve for power efficiency, especially as they start to breach 1 GW of capacity and more. Solving it should help them reduce wasted power, which, in turn, would cut operating costs.


Jowi Morales
Contributing Writer

Jowi Morales is a tech enthusiast with years of experience in the industry. He has written for several tech publications since 2021, focusing on tech hardware and consumer electronics.

  • chaz_music
    This has been done before, including a patent from about 30 years ago for UPS system purposes. I believe IBM also looked into this for data centers. One anecdote I remember is that they found pluggable connections to be quite dangerous at these DC voltages, because of the risk of unplugging the DC while in operation. One study looked at allowing the ubiquitous IEC 60320 cables (standard 3-prong PC power cables) for the DC inlets. That DC bus design was only at ~250VDC if I remember correctly (300V limit for IEC 60320?).

    DC arcing does not quench like an AC arc does, because there is no inherent zero crossing to naturally commutate the current to zero. So even DC fusing is a different animal - larger fuses are needed for the same power level. The telecom industry used to use -48VDC for central office powering, and even that was found to be very dangerous. Telecom DC plant workers could be fired for using non-insulated tools because of the chance of causing fires and bodily injury. Nothing like having all of your facial hair singed off.

    I see a learning curve coming again, with industries having to relearn past knowledge. It is kind of fascinating to watch that happen over and over as power technology gets adopted in newer industries - wind power, solar, EV systems and chargers, and now, circling back, server farm applications.
  • DS426
    We should definitely let nVidia be the authority and driver of this massive power distribution system change, given what they did with the 4090's power design and then repeated on the 5090. No-faith vote from me, but I guess their saving grace this time is the other respectable engineering and design entities working on this project.

    I agree with @chaz_music that electrical receptacles will (should) be eliminated to reduce risks, hardwiring everything and requiring those big industrial-grade disconnect switches or levers.

    It would definitely be a learning curve, plus require some realignment of supply chains. I can only hope that serviceability wouldn't be reduced.