Overclocking, Underclocking, Efficiency & Temperature
Frequencies & Corresponding Game Performance
Again, we're using The Witcher 3 at 4K for our worst-case scenario. This yields an average of 36.6 FPS at 1305 MHz using the Balanced mode's stock settings, giving us a good 100% baseline.
Switching to the secondary BIOS and dropping the power limit by 25% results in a 1101 MHz clock rate and surprisingly high 32.7 FPS average frame rate. For 85% of the default GPU frequency we measure 89% of the default configuration's performance. Not bad.
Conversely, pushing the card to its limit via overclocking gets us almost 1500 MHz and 39.6 FPS. So, a frequency increase of almost 15% yields just 8% more gaming performance.
From the lowest to the highest average clock rate, we observe an increase of 36.4%. However, gaming performance goes up by just 21.1%. That's not great scaling, but it's not terrible either. What's more important is the power consumption accompanying those numbers. Unfortunately, this is where the data gets ugly.
Using the secondary BIOS with a power limit reduced by 25% gets us 159.4W and 32.7 FPS. Compared to the stock settings, just 71.6% of the power consumption serves up 89% of the gaming performance.
Going back to a worst-case scenario, the overclocked card averages 39.6 FPS, but consumes 310.6W doing so. Trading 39.5% more power consumption for an 8%-higher frame rate isn’t acceptable. The efficiency curve drops off rapidly with increasing clock rates and the additional power those frequencies necessitate.
The spread between the lowest and the highest power consumption is a massive 94%, while gaming performance increases by only 21.1%. Almost double the power consumption for one-fifth more gaming performance is what we'd call catastrophic.
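The scaling arithmetic above can be sketched in a few lines. Note that the stock power draw is inferred from the 71.6% figure (159.4 W / 0.716 ≈ 222.6 W) rather than stated directly, and the clock rates are rounded, so the computed percentages can differ slightly from the ones in the text.

```python
# Sketch of the clock/FPS/power scaling math from the measurements above.
# Stock power is inferred from the 71.6% figure; clocks are rounded.
settings = {            # name: (avg clock MHz, avg FPS, power W)
    "underclocked": (1101, 32.7, 159.4),
    "stock":        (1305, 36.6, 222.6),   # power inferred, not measured
    "overclocked":  (1500, 39.6, 310.6),
}

def gain(lo, hi):
    """Percentage increase from the low setting to the high setting."""
    return (hi / lo - 1) * 100

clk_lo, fps_lo, w_lo = settings["underclocked"]
clk_hi, fps_hi, w_hi = settings["overclocked"]

print(f"clock: +{gain(clk_lo, clk_hi):.1f} %")   # roughly +36 %
print(f"FPS:   +{gain(fps_lo, fps_hi):.1f} %")   # +21.1 %
print(f"power: +{gain(w_lo, w_hi):.1f} %")       # roughly +95 %
```

The mismatch between the last two lines is the whole story: frame rate grows far more slowly than the power required to achieve it.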
Efficiency & Sweet Spot
If all of the results we generated are combined, and the curves' start and end points are given a common basis, then we get an FPS/watt ratio illustrating the relationship between gaming performance and power consumption. The point at which the distance between both curves is at its greatest represents the so-called sweet spot. This is where the card operates at peak efficiency, just before additional power starts yielding diminishing returns.
Radeon RX Vega 56's sweet spot seems to be right around 188W to 190W, according to our measurements. Strangely, AMD placed three settings close to it, but didn't put a single one right on it.
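The normalization described above can be illustrated with a minimal sketch. It uses our three measured points plus one hypothetical sample near 190 W; the 35.2 FPS value for that sample is an assumed figure for illustration only, not a measurement.

```python
# Sketch of the sweet-spot method: normalize the FPS and power curves
# to a common 0..1 basis, then find the point where the gap between
# them is greatest.
samples = [  # (power in W, avg FPS)
    (159.4, 32.7),   # -25% power limit
    (190.0, 35.2),   # hypothetical sample near the sweet spot (assumed FPS)
    (222.6, 36.6),   # stock (power inferred from the 71.6% figure)
    (310.6, 39.6),   # overclocked
]

p_lo, p_hi = samples[0][0], samples[-1][0]
f_lo, f_hi = samples[0][1], samples[-1][1]

def gap(point):
    """Normalized FPS curve minus normalized power curve at this point."""
    p, f = point
    return (f - f_lo) / (f_hi - f_lo) - (p - p_lo) / (p_hi - p_lo)

sweet_spot = max(samples, key=gap)
print(f"sweet spot near {sweet_spot[0]:.0f} W")
```

With denser sampling between the endpoints, this gap peaks wherever each additional watt buys the most additional performance, which is what makes it a useful definition of the sweet spot.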
GPU & Memory Temperatures
The temperatures are very similar across all six power profiles. This is due to the fan's aggressive controller, which responds quickly when power consumption increases. The combination of overclocking and a power limit adjustment of +50% proves to be too much, though. AMD's thermal solution can't keep up, and our measurements show an additional 10°C compared to the predefined settings.
At maximum load, the HBM2 averages up to an additional 6°C on top of the GPU temperature in question. Our overclocking efforts pushed it beyond the 90°C mark, leaving us a little nervous.
Primary BIOS Thermal Images
For each of the two BIOSes, we provide thermal images for all three driver-based profile settings. These show how the card's power consumption and fan speed affect the temperatures of various on-board components.
As mentioned, though, all of the temperatures are fairly similar until we get to the overclocked configuration. This is due to the fan controller’s ability to maintain a target temperature between 74 and 75°C, regardless of what that means to our ears.
Secondary BIOS Thermal Images
We start with the underclocked settings, resulting in 160W of power consumption. At that level, we notice a deviation from the six driver profile-based average temperatures as well. The difference isn’t as pronounced as it was for the overclocked setting, though.
Similar to what we observed from the primary BIOS, the other temperatures increase only slightly as power consumption rises; they're kept in check by an especially responsive fan.