Future AI processors said to consume up to 15,360 watts of power — massive power draw will demand exotic immersion and embedded cooling tech

(Image credit: Submer)

The power consumption of AI GPUs has risen steadily in recent years and is expected to keep climbing as AI processors incorporate more compute and HBM chiplets. Some of our industry sources have indicated that Nvidia is targeting a thermal design power (TDP) of 6,000W to 9,000W for its next-generation GPUs, but experts from KAIST, a leading Korean research institute, believe that the TDP of AI GPUs will climb all the way to 15,360W over the next 10 years. As a result, these processors will require rather extreme cooling methods, including immersion cooling and even embedded cooling.

(Image credit: KAIST)
| Generation | Year | Total Power of GPU package | Cooling Method |
| --- | --- | --- | --- |
| Blackwell Ultra | 2025 | 1,400W | D2C |
| Rubin | 2026 | 1,800W | D2C |
| Rubin Ultra | 2027 | 3,600W | D2C |
| Feynman | 2028 | 4,400W | Immersion Cooling |
| Feynman Ultra | 2029 | 6,000W* | Immersion Cooling |
| Post-Feynman | 2030 | 5,920W | Immersion Cooling |
| Post-Feynman Ultra | 2031 | 9,000W* | Immersion Cooling |
| ? | 2032 | 15,360W | Embedded Cooling |
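To put the projection in perspective, the endpoints of the table imply a striking growth rate. The sketch below works out the compound annual growth in package power from Blackwell Ultra (1,400W in 2025) to the unnamed 2032-era GPU (15,360W); the derived growth rate and doubling time are illustrative arithmetic based on the table, not figures from KAIST.

```python
import math

# Endpoints taken from the KAIST projection table above
tdp_2025 = 1_400   # W, Blackwell Ultra (2025)
tdp_2032 = 15_360  # W, unnamed 2032-era GPU
years = 2032 - 2025

# Compound annual growth rate of package power
cagr = (tdp_2032 / tdp_2025) ** (1 / years) - 1
print(f"Implied TDP growth: {cagr:.1%} per year")

# At that rate, package power roughly doubles every two years
doubling_time = math.log(2) / math.log(1 + cagr)
print(f"Doubling time: {doubling_time:.1f} years")
```

That works out to roughly 41% annual growth, a doubling of package power about every two years, which explains why conventional air and even direct-to-chip liquid cooling are expected to run out of headroom.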

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Jame5
    So they aren't actually increasing performance anymore, just moving up the voltage curve in an attempt to juice the numbers?

    Maybe they should think about actually designing chips that do more work in the same power envelope rather than just throwing more and more power at it instead?

    Then again, I don't run a company worth several trillion dollars, so I must be wrong.