AMD Radeon R9 380X Nitro Launch Review

Temperature Results

GPU Temperatures

Since there’s a never-ending debate about whether or not backplates help thermal performance, we decided to test with and without one in place. As mentioned, Sapphire puts pads between the back of its PCB and the backplate, right behind the voltage regulation circuitry. It also indents the plate above that area.

We’ll soon see that this can help cool the voltage converters and the components around them. The GPU doesn’t see a direct benefit, but overall temperatures rise more slowly since less heat is transferred into the board through the VRM.

Let’s take another look at the back of AMD's Radeon R9 380X and its backplate to give the infrared images some context. The pictures illustrate what we just mentioned: the VRM is connected to the backplate via thermal pads, but the GPU does not enjoy any real advantage. This could have been done more efficiently by cutting a hole for the thermal pads into the isolation foil, which spans the entire surface of the backplate.
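To put rough numbers on the idea, here's a back-of-the-envelope estimate of how much heat a pad like this can move into the backplate, using Fourier's law for conduction through a flat slab. Every value below (pad conductivity, contact area, thickness, and temperature difference) is an illustrative assumption, not a measurement of Sapphire's actual pads:

```python
# Back-of-the-envelope conduction estimate for a VRM thermal pad.
# Fourier's law for a flat slab: Q = k * A * dT / d

k = 3.0            # W/(m*K) -- assumed pad conductivity, typical mid-grade pad
area = 4e-4        # m^2     -- assumed ~4 cm^2 of pad contact behind the VRM
thickness = 1e-3   # m       -- assumed 1 mm pad thickness
delta_t = 20.0     # K       -- assumed VRM-to-backplate temperature difference

q_watts = k * area * delta_t / thickness
print(f"Heat conducted into the backplate: ~{q_watts:.0f} W")  # ~24 W
```

Even with conservative numbers, a few square centimeters of pad can move a meaningful share of the VRM's losses into the backplate, which is consistent with the measurable drop we'll see under load.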

Idle Temperatures

First, we'll look at the Radeon R9 380X’s idle temperatures without its backplate. Since the card employs a semi-passive mode that keeps the fans off at idle, the GPU settles at approximately 43 degrees Celsius. The VRM's temperature is fine as well.

Gaming Temperatures

During our gaming loop, the GPU's diode reads lower than the temperature at the package interface because of the diode's proximity to the heat sink; the PCB is actually warming the processor from behind. Adding thermal pads here would have been the logical choice. Since the card pushes fan speed higher until its temperature target is reached, any improvement would have resulted in a markedly cooler and quieter graphics card.
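To illustrate that feedback, here is a toy model of the fan-speed loop: the card reads its diode, a fan curve maps temperature to duty cycle, and the system settles where heating and cooling balance. All of the constants (fan curve, heating and cooling coefficients) are assumptions made up for this sketch, not Sapphire's firmware values; the point is only that extra heat soaking in from the board settles the loop at a higher fan speed:

```python
# Toy model of the GPU-temperature / fan-speed feedback loop.
# All constants are illustrative assumptions, not Sapphire's firmware values.

def fan_duty(temp_c: float) -> float:
    """Assumed fan curve: duty ramps up as the diode temperature climbs."""
    return min(100.0, max(20.0, 2.0 * (temp_c - 40.0)))

def steady_state(board_heat: float, ticks: int = 500) -> tuple[float, float]:
    """Iterate a crude heat balance until the temperature settles.
    board_heat models warmth soaking into the GPU from the PCB and VRM."""
    temp = 40.0
    for _ in range(ticks):
        heating = 0.8 + board_heat        # GPU load plus heat from the board
        cooling = 0.02 * fan_duty(temp)   # more duty removes more heat
        temp += heating - cooling
    return temp, fan_duty(temp)

# More heat arriving from the back of the PCB means the loop settles at a
# higher temperature and a higher fan speed -- hence a hotter, louder card.
for label, heat in (("cooler board (pads)", 0.1), ("hotter board (no pads)", 0.3)):
    temp, duty = steady_state(heat)
    print(f"{label:22s} -> {temp:.1f} °C at {duty:.0f}% fan duty")
```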

With the backplate in place, the temperature difference between the hottest and coldest parts of the PCB is 12 degrees Celsius. Without the plate, we measured the VRM at 93 degrees Celsius; with it, a temperature probe inserted under the backplate reads 90 degrees Celsius in the same spot. That slightly lower reading matters because it offsets the usual drawback of a backplate: by covering the board, the plate normally makes it harder to dissipate the GPU's waste heat and drives the GPU's temperature up. That doesn't happen here.

Full Load Temperatures

Now we see what happens when the VRM delivers more than 250W to the GPU, a portion of which ends up as waste heat in the converters themselves. Without the backplate, we're looking at VRM temperatures in excess of 113 degrees Celsius. That's well beyond acceptable. That thermal energy spreads across the PCB and heats the GPU from the back: almost 84 degrees Celsius where Tonga interfaces with the PCB is massive. At that point, the card is both hot and loud.
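For a sense of where that heat originates: a voltage converter isn't 100 percent efficient, so a fraction of everything it passes to the GPU is burned off in its MOSFETs and inductors. Assuming roughly 90 percent conversion efficiency, a plausible figure for illustration rather than a published spec:

```python
# Rough VRM self-heating estimate, assuming ~90% conversion efficiency.
power_delivered = 250.0  # W, the figure from the torture scenario above
efficiency = 0.90        # assumed; real efficiency varies with load and temp

power_in = power_delivered / efficiency
losses = power_in - power_delivered
print(f"VRM losses: ~{losses:.0f} W dissipated in the converters")  # ~28 W
```

Nearly 30W concentrated in a handful of small components explains why those spots run so much hotter than the rest of the board.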

With the backplate attached, the VRM's temperature is significantly lower at 107 degrees Celsius. The difference between the hottest and coolest parts of the board is now 18 degrees. This proves that a backplate can do more than look pretty and keep the graphics card stable. It can also make a positive impact on thermals if only a few square inches of padding are thrown in.

The VRM’s backplate-based cooling solution has ramifications for the fans as well. This makes sense, since the graphics card will try to achieve its target GPU temperature. Do higher temperatures from the back mean higher rotational speeds?

Temperatures Overview

A comparison between the results with and without the backplate shows that Sapphire does a great job with its cooling solution.

Ambient Temperature: 22 °C

                                              Open Bench Table,   Open Bench Table,   Closed Case,   Closed Case,   VRM Maximum
                                              Gaming Loop         Torture             Gaming Loop    Torture        (Torture)
Sapphire R9 380X Nitro (With Backplate)       69 °C               74 °C               72-73 °C       79 °C          105 °C
Sapphire R9 380X Nitro (Without Backplate)    69 °C               73 °C               73-74 °C       80-81 °C       113 °C


Comments

  • ingtar33
    so full tonga, release date 2015; matches full tahiti, release date 2011.

    so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.
  • wh3resmycar
    this has all the features, freesync, trueaudio, etc.
  • logainofhades
Features, and the power rating is lower: Tonga is 190W, vs 250W for Tahiti.
  • Eggz
    Seems underwhelming until you read the price. Pretty good for only $230! It's not that much slower than the 970, but it's still about $60 cheaper. Well placed.
  • chaosmassive
been waiting for this card review. I saw the photographer's fingers reflected in the silicon, btw!
  • Onus
    Once again, it appears that the relevance of a card is determined by its price (i.e. price/performance, not just performance). There are no bad cards, only bad prices. That it needs two 6-pin PCIe power connections rather than the 8-pin plus 6-pin needed by the HD7970 is, however, a step in the right direction.
  • FormatC
    Quote:
    I saw photographer fingers on silicon


I know, these are my fingers and my wedding ring. :P
    Call it a unique watermark. ;)
  • psycher1
Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p.

With my 2560*1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off maxed-out graphics settings in modern games, with The Witcher 3 only averaging about 35 fps at medium-high according to GeForce Experience.

If this is already the case, give it a year or two. Future proofing does not mean you should need to consider SLI after only 6 months and a minor display upgrade.
  • Eggz
    1272112 said:
    Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p. With my 2560*1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off ultimate graphic settings out of modern games, with the Witcher 3 only averaging about 35fps while at medium-high according to GeForce Experience. If this is already the case, give it a year or two. Future proofing does not mean you should need to consider sli after only 6 months and a minor display upgrade.


    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.
  • ByteManiak
    everyone is playing GTA V and Witcher 3 in 4K at 30 fps and i'm just sitting here struggling to get a TNT2 to run Descent 3 at 60 fps in 800x600 on a Pentium 3 machine
  • blazorthon
    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.
  • ErikVinoya
    With those temps and the backplate, I think with a bit of oil, I can make breakfast while running furmark on that
  • blazorthon
    1027081 said:
    so full tonga, release date 2015; matches full tahiti, release date 2011. so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.


    I fail to see how lower power consumption, new features, higher memory capacity, and lower prices (both to manufacture and for us to buy) are sad. It isn't as if the 380X is now the high end. It is just a mid-ranged card that is replacing a formerly high-end card. This happens every graphics card generation. GTX 960 and GTX 580, Radeon 7850 and Radeon 5870, the list goes on and on. Furthermore, the 380X has a considerable performance advantage over the 7970 despite having the same core count because core count is far from everything and many other things changed from Tahiti to Tonga/Antigua. The 380X is easily about 20% faster than the 7970.
  • eklipz330
    your perception of what is enough for 1080p may differ from the author's. think about it. my r9 290 could handle most games at 1080p above 60fps, and many of them above 100fps. that is a VERY capable card at 1080p, and probably more than enough for the average gamer. not everyone has the itch to max every setting out; i personally like lowering the resolution and increasing AA.
  • Eggz
    412399 said:
    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.


That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max settings at low resolutions on current- and past-gen cards. If you take the chicken and egg idea to its logical end, then there would eventually be a game that not even the most powerful card could run at even the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game.

    There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied.

That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that, on a flagship graphics card, a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game that exists in the foreseeable future (setting aside incompatibilities and non-graphics bottlenecks).

    I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.
  • red77star
My HD7970 CrossFire setup is still going strong. Every single game maxed at 1080p.
  • spentshells
I am disappoint. I was expecting the full 384-bit memory bus and substantially higher performance. I had been hyping this for a while as the reason for NV to release a 960 Ti, and it most certainly won't be.
  • blazorthon
    1406980 said:
    That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max setting at low resolutions on current- and past-gen cards. If you take the chicken and egg idea to its logical end, then there would eventually be a game that not even the most powerful card could not run at even at the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game. There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied. That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game to exist in the foreseeable future using a flagship graphics card (setting aside incompatibilities and non-graphics bottlenecks). I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.


    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


Go back a few years, say to when the 7970 GHz Edition launched, so we have pretty much all of that generation's cards out and most driver kinks worked out on both sides. The 7970 cards and the 680 were capable of running pretty much all games at 1440p on ultra, and the 7850/650 Ti Boost could run pretty much all games at 1080p on ultra, all assuming there wasn't another bottleneck (such as a weak CPU).

So, the current high-end cards were not the first cards to be adequate for 1440p, let alone 1080p. Today's top-end cards are merely the first to be able to run today's most intensive games at their heavier settings at 1080p and 1440p. This isn't just true to some extent; the older reviews prove it (have a look at some Radeon 7970 GHz Edition reviews for a good lineup). This is what I meant by chicken and the egg. Current top-end cards don't let you "forget about graphics bottlenecks at a higher standard resolution [1440p]" any more than top-end cards from previous generations did with the corresponding games.

    I don't see how low resolutions, settings, and any performance limits related to them and CPUs are relevant to this.

    Anything with a resolution of one pixel might be useless for this topic. I don't think things like tessellation, AA, AF, and so on can be performed on a resolution of one pixel. In that case, yes, there might be a limit to how much performance can be necessary before it becomes arbitrary, but I say that out of not knowing what modern features can really do with only one pixel. With a real resolution, let's use a low one like 640x480, you can always apply more and more settings, filters, etc. until you bog down even Titan X. Whether or not a game happens to support applying enough of these to such a resolution is another matter, but it can be done if some dev wanted to make such a program. The real limit is at what point you're throwing more and more resources at the problem without a discernible improvement in visual quality. For example, we could make a game that runs at 640x480 with 1024x MSAA or something like that, but why bother when you won't see any benefit from a minuscule fraction of that?
  • psycher1
So we're at least semi-agreeing that, limiting the discussion to modern titles and looking forward, cards like this article's 380X are most definitely NOT 1440p cards, and at best are 1080p cards that are already staring that limit in the face as well.

    With the 680 being talked about in the past tense so soon, I wonder if I didn't even underestimate how long my 970 will last.

    Anybody here think that dx12 and other such software improvements will help keep these cards relevant? Or are we still looking at a ~3 year upgrade cycle to just run games at decent settings, even with mostly top-of-the-line hardware today?
  • turkey3_scratch
I don't think this card cuts it with the $230-240 price tag, when a 380 can be found for $170. That is 35% more money for about 8% more performance. However, I would not be surprised if the 380X slowly drops below $200 and knocks the 380 out of sale. If the 380X were priced at $200, it would be a good deal.

Sure, you can point out that the 380 cost about $230 or so when it was released, but the 380 has been around a good deal of time and its price has dropped significantly.

    Also, why on earth are they using a Powercolor 390 and an MSI 970 rather than using the MSI 390?
  • TbsToy
Oh boy, another graphics card that comes to save the day for a non-question and a nonexistent need. Uhhhh, how much does it cost again?
    Walt Prill
  • turkey3_scratch
Graphics cards are usually a bad value when they're released. It's just like any product in this world. Give it 2 months and this card will be competing.
  • monsta
another over-hyped release from AMD with lacklustre performance upgrades, this is embarrassing
  • eodeo
You would do well to look at how sleek TechPowerUp's charts are and how neatly they're organized, per game and overall.

More to the point of this post: you say the 380X uses virtually the same idle power with two monitors, while TPU says it's the typical AMD crapscore it used to be - 3-4x more than it should be, and more than you say it is. Did they make a mistake or did you?

    http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/21.html

Also, while we're here, why not mention how poor AMD's video-playback power draw is? Along with the multi-monitor power draw, video playback has been a huge negative for AMD for the past 3 years - ever since Nvidia decided to fix it in 2012 on all their DX11 GPUs via drivers...