Does Undervolting Improve Radeon RX Vega 64's Efficiency?

Temperatures & A Surprise

Temperature Sensor Issues?

At idle, the Radeon RX Vega 64 reports a chilly 16°C, even though the water cooling it sits at a more temperate 20°C. Ryzen already demonstrated that temperature sensors aren't AMD's strong suit. Even so, it's strange that the readings are this far off, especially since the sensors in previous cards reported fairly accurate data.

The 24°C result gives us pause, given that we obtained it using the maximum power limit in combination with overclocking, dissipating more than 400W in the process. Our cooling block and thermal paste alone should account for a ~4°C rise above the coolant. Either way, we tabulated these “GPU temperatures” exactly the way they were reported by WattMan, GPU-Z, and other tools.
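As a quick sanity check, a ~4°C rise across block and paste at 400W implies a combined thermal resistance of roughly 0.01 K/W, which means the die should always read at least a few degrees above the coolant, never below it. A minimal sketch of that arithmetic, with the resistance value derived from the figures above rather than any measured spec:

```python
def expected_die_temp(coolant_c: float, power_w: float, r_th_k_per_w: float) -> float:
    """Estimate die temperature as coolant temperature plus the
    temperature rise across block and paste (delta-T = P * R_th)."""
    return coolant_c + power_w * r_th_k_per_w

# Combined block + paste resistance implied by a ~4°C rise at 400W:
r_th = 4.0 / 400.0  # ~0.01 K/W, our assumption from the numbers above
print(expected_die_temp(20.0, 400.0, r_th))  # ~24°C, matching the reported value
```

The 24°C reading sits exactly at this physical minimum, which is what makes it look suspicious; the 16°C idle reading, four degrees below the coolant, is simply impossible.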

In case you're looking for results at 1V with AMD's default power limit, they're identical to what we found under Balanced mode, so we omitted them to make our graphs easier to read.

The Search for True Temperatures

Understandably, we didn't believe our results. After talking to the author of GPU-Z, we put more stock in what his tool reports as the processor hot-spot than in AMD's sensor readings. Unfortunately, AMD doesn't provide any documentation for this value, and an analysis of the accessible sensor-loop results didn't clear things up either. However, the numbers certainly look more plausible than what we were seeing before:

We do have data that we gathered from different GPUs using the compressor cooler and a good water block, which we can use for comparison. This leads us to believe that GPU-Z’s hot-spot results are a lot more trustworthy than the numbers provided by WattMan and older tools. A direct comparison between all of the results under Balanced mode yields the following picture:

There’s another interesting side to this. If the package’s temperatures are really higher than what's being reported, then there might be problems for air-cooled cards already showing GPU temperatures well above 80°C.

Up close, AMD uses X6S MLCC capacitors. These have an absolute temperature limit of 105°C, which even an air-cooled card shouldn't quite reach. However, their capacitance decreases significantly, by up to 22 percent, at temperatures above 90°C. This could lead to instability. Using X7R capacitors instead would have made for a better long-term solution, especially since high ripple values push the optimal operating temperature down even further. Call this one more reason to use water cooling if possible.
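To put that derating figure in perspective, here is what a 22-percent thermal derating does to a hypothetical 470µF capacitor bank (the nominal value is our illustrative assumption, not a measured spec from the card):

```python
def derated_capacitance(nominal_uf: float, loss_fraction: float) -> float:
    """Effective capacitance after thermal derating:
    nominal value reduced by the given fractional loss."""
    return nominal_uf * (1.0 - loss_fraction)

# A hypothetical 470 µF bank losing 22% above 90°C:
print(derated_capacitance(470.0, 0.22))  # roughly 366.6 µF left for filtering
```

A fifth of the filtering capacitance quietly disappearing at exactly the temperatures where ripple is worst is why this matters for stability.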

That the temperatures might be higher than reported could also explain another finding: even though no errors show up on the screen, the HBM2 loses performance if it’s overclocked. This might be due to it running significantly hotter than current tools are reporting. Some of AMD’s board partners have asked the company about this, but haven't received a response. We do have one trustworthy source that puts these temperatures well above 90°C, though.

Thermal measurements from the card's back side provide additional evidence that the mysterious GPU hot-spot is really just the true GPU temperature. Although the two readings diverge a little at very high power consumption, they are virtually identical all the way up to 310W. The lower back-side results under extreme conditions could be due to indirect cooling of the board through the large amount of copper in its circuitry, an effect that grows along with power consumption.

Addendum: Infrared Measurements

[Gallery: five infrared measurement images]

It goes without saying that we meticulously documented our installation just like we always do. EKWB’s block does a great job; this should be just about as good as it gets at this point.
