AMD Radeon R9 380X Nitro Launch Review

How We Test

Test System

Our reference system hasn’t changed from a performance standpoint, though we did alter its cooling solution. Instead of the closed-loop liquid cooler, we're now using an open-loop solution by Alphacool with two e-Loop fans from Noiseblocker.

We changed our test system for two reasons. First, the near-silent setup enables us to take more accurate noise measurements. Second, it provides our new and even higher-resolution infrared camera with an unobstructed 360-degree view of the graphics card. There are no more tubes or other obstacles.

We follow the general trend and use Windows 10, which keeps us current and allows us to use DirectX 12.


Test Method
Contact-free DC measurement at PCIe slot (using a riser card)
Contact-free DC measurement at external auxiliary power supply cable
Direct voltage measurement at the power supply
Real-time infrared monitoring and recording

Test Equipment
2 x HAMEG HMO 3054, 500MHz digital multi-channel oscilloscope with storage function
4 x HAMEG HZO50 current probe (1mA - 30A, 100kHz, DC)
4 x HAMEG HZ355 (10:1 probes, 500MHz)
1 x HAMEG HMC 8012 digital multimeter with storage function
1 x Optris PI640 32Hz infrared camera + PI Connect

Test System
Intel Core i7-5930K @ 4.2GHz
Alphacool water cooler (NexXxos CPU cooler, VPP655 pump, Phobya balancer, 240mm radiator)
Crucial Ballistix Sport, 4 x 4GB DDR4-2400
MSI X99S XPower AC
1x Crucial MX200, 500GB SSD (system)
1x Corsair Force LS 960GB SSD (applications, data)
be quiet! Dark Power Pro 850W PSU
Windows 10 Pro (all updates)
Driver
AMD: 15.11.1 Beta (press driver)
Nvidia: ForceWare 358.91 Game Ready
Gaming Benchmarks
The Witcher 3: Wild Hunt
Grand Theft Auto V (GTA V)
Metro: Last Light
BioShock Infinite
Tomb Raider
Battlefield 4
Middle-earth: Shadow of Mordor
Thief
Ashes of the Singularity

Graphics Card Comparison

We’re using AMD’s Radeon R9 Nano as the top of our range, allowing us to draw direct comparisons between it, the Sapphire R9 380X Nitro, a PowerColor Radeon R9 390, and a “smaller” MSI Radeon R9 380. These graphics cards should cover the entire range of AMD’s offerings in this segment. We were planning to add the Radeon R9 390X as well, but its performance is just too close to the 390’s; in the end, we simply wanted at least one faster card for the higher-resolution tests, and the Nano fills that role.

MSI's GTX 970 4G and Gigabyte's GTX 960 Windforce represent Nvidia's portfolio. Interestingly, there's a large gap in the line-up between those two offerings. Or, seen from AMD’s point of view, there’s an opening to exploit that Nvidia created with its somewhat weaker GeForce GTX 960.

Benchmark Settings and Resolutions

The benchmarks are set to taxing detail presets, since that's what we expect someone buying a graphics card in this price range to run. In order to demonstrate differences between the cards at progressively higher resolutions, we're testing at Full HD (1920x1080) and QHD (2560x1440). According to AMD, its new graphics card is specifically targeted at the latter.

Frame Rate and Frame Time

We completely updated how we represent frame time variance. Percentages just don’t tell the whole story for longer benchmarks, which can have very different sections when it comes to rendering speed. We’ve settled on two ways of conveying the results. First, we show how long it takes to render each individual frame, which tells you a lot more than bar graphs or an FPS graph based on averages. Second, we plot two different evaluations of each frame’s render time.

We start by normalizing each frame time: we subtract the overall benchmark’s average frame time, which puts the curves for all graphics cards on a common baseline along the x-axis and makes outliers easier to spot. After that, we assess each curve’s smoothness, which is to say the relative difference in render time between consecutive frames. This helps us find subjectively annoying stutter or jumps more easily, without the absolute frame time influencing the curve.
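To make those two evaluations concrete, here is a minimal Python sketch of the idea; the frame times and variable names are purely illustrative, and this is not our actual analysis tooling:

```python
import numpy as np

# Illustrative frame times in milliseconds (e.g. exported from a frame-time logger)
frame_times_ms = np.array([16.4, 16.9, 17.1, 33.0, 16.6, 16.8, 17.3, 16.5])

# Evaluation 1: subtract the run's own average so every card's curve
# sits on a common zero line, which makes outliers easy to spot.
centered = frame_times_ms - frame_times_ms.mean()

# Evaluation 2: smoothness, i.e. the difference between consecutive frames.
# Large jumps here correspond to subjectively annoying stutter,
# independent of how long the frames themselves take to render.
frame_to_frame_delta = np.diff(frame_times_ms)

print("centered frame times:", np.round(centered, 2))
print("frame-to-frame deltas:", np.round(frame_to_frame_delta, 2))
```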

Power Consumption Measurement Methodology

Our power consumption testing methodology is described in The Math Behind GPU Power Consumption And PSUs. It's the only way we can achieve readings that facilitate sound conclusions about efficiency. We need two oscilloscopes in a master-slave setup to be able to record all eight channels at the same time (4 x voltage, 4 x current). Each PCIe power connector is measured separately.

A riser card is used on the PCIe slot (PEG) to measure power consumption directly on the motherboard for the 3.3 and 12V rails. The riser card was built specifically for this purpose.

We are using time intervals of 1ms for our analyses. The equipment aggregates the natively even higher-resolution data for us, so that we don't completely drown in the sheer amount of data this system generates.
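As a rough illustration of the arithmetic behind these numbers, the following Python sketch multiplies each rail's voltage and current samples and averages the result into 1ms bins; the sample rate, channel names and data are hypothetical, not taken from our capture software:

```python
import numpy as np

SAMPLE_RATE_HZ = 500_000                      # assumed raw sample rate (illustrative)
SAMPLES_PER_MS = SAMPLE_RATE_HZ // 1000       # samples per 1ms analysis interval

def rail_power_1ms(voltage_v: np.ndarray, current_a: np.ndarray) -> np.ndarray:
    """Instantaneous power (P = U * I) per raw sample, averaged into 1ms bins."""
    p = voltage_v * current_a
    usable = (len(p) // SAMPLES_PER_MS) * SAMPLES_PER_MS
    return p[:usable].reshape(-1, SAMPLES_PER_MS).mean(axis=1)

# Hypothetical rails: PEG 12V and 3.3V (via the riser card) plus two 6-pin connectors.
rng = np.random.default_rng(0)
n = SAMPLES_PER_MS * 1000                     # one second of raw data
rails = {
    "PEG 12V":  (np.full(n, 12.1), rng.normal(2.0, 0.2, n)),
    "PEG 3.3V": (np.full(n, 3.3),  rng.normal(0.3, 0.05, n)),
    "Aux #1":   (np.full(n, 12.0), rng.normal(6.0, 0.5, n)),
    "Aux #2":   (np.full(n, 12.0), rng.normal(5.5, 0.5, n)),
}

per_rail_w = {name: rail_power_1ms(v, i) for name, (v, i) in rails.items()}
total_w = sum(per_rail_w.values())            # board power = sum of all measured rails
print("average board power: %.1f W" % total_w.mean())
```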

Infrared Measurement with the Optris PI640

In Optris' PI640, we’ve found a way to confirm what our sensors tell us and to spice up our usual temperature graphs a bit. This piece of equipment is an infrared camera developed specifically for process monitoring. It allows us to shoot both video and still images at a good resolution, providing not just peak temperatures, but also a good view of any weak points in the graphics card's design.

Optris' PI640 supplies real-time thermal images at a rate of 32Hz. The pictures are sent via USB to a separate system, where they can be recorded as video. The PI640’s thermal sensitivity is 75mK, making it ideal for assessing small gradients.

Noise

As always, we use a high-quality microphone placed perpendicular to the center of the graphics card at a distance of 50cm. The results are analyzed with Smaart 7.

Our readings were recorded at night, when ambient noise never rose above 26 dB(A); this was noted and accounted for separately in each measurement. The setup was also recalibrated regularly.
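For reference, correcting a reading for the noise floor is a standard energetic subtraction of sound pressure levels; the short sketch below shows the formula with made-up dB(A) figures (it is not a description of Smaart's internals):

```python
import math

def subtract_background(measured_db: float, ambient_db: float) -> float:
    """Energetic subtraction of the ambient level from a measured level.

    Both arguments are sound pressure levels in dB(A); the return value is
    the level attributable to the graphics card alone.
    """
    return 10.0 * math.log10(10.0 ** (measured_db / 10.0) - 10.0 ** (ambient_db / 10.0))

# Example: a 38.0 dB(A) reading against the 26 dB(A) night-time noise floor.
print(round(subtract_background(38.0, 26.0), 1))   # ~37.7 dB(A)
```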

95 comments
  • ingtar33
    so full tonga, release date 2015; matches full tahiti, release date 2011.

    so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.
  • wh3resmycar
    this has all the features, freesync, trueaudio, etc.
  • logainofhades
    Features, and the power rating is lower. Tonga is 190W vs. 250W for Tahiti.
  • Eggz
    Seems underwhelming until you read the price. Pretty good for only $230! It's not that much slower than the 970, and it's still about $60 cheaper. Well placed.
  • chaosmassive
    been waiting for this card's review. I saw the photographer's fingers reflected in the silicon, btw!
  • Onus
    Once again, it appears that the relevance of a card is determined by its price (i.e. price/performance, not just performance). There are no bad cards, only bad prices. That it needs two 6-pin PCIe power connections rather than the 8-pin plus 6-pin needed by the HD7970 is, however, a step in the right direction.
  • FormatC
    Quote:
    I saw photographer fingers on silicon


    I know, these are my fingers and my wedding ring. :P
    Call it a unique watermark. ;)
  • psycher1
    Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p.

    With my 2560x1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off ultimate graphics settings in modern games; The Witcher 3 only averages about 35fps at medium-high, according to GeForce Experience.

    If this is already the case, give it a year or two. Future-proofing shouldn't mean you need to consider SLI after only six months and a minor display upgrade.
  • Eggz
    1272112 said:
    Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p. With my 2560*1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off ultimate graphic settings out of modern games, with the Witcher 3 only averaging about 35fps while at medium-high according to GeForce Experience. If this is already the case, give it a year or two. Future proofing does not mean you should need to consider sli after only 6 months and a minor display upgrade.


    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.
  • ByteManiak
    everyone is playing GTA V and Witcher 3 in 4K at 30 fps and i'm just sitting here struggling to get a TNT2 to run Descent 3 at 60 fps in 800x600 on a Pentium 3 machine
  • blazorthon
    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.
  • ErikVinoya
    With those temps and the backplate, I think with a bit of oil, I can make breakfast while running furmark on that
  • blazorthon
    1027081 said:
    so full tonga, release date 2015; matches full tahiti, release date 2011. so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.


    I fail to see how lower power consumption, new features, higher memory capacity, and lower prices (both to manufacture and for us to buy) are sad. It isn't as if the 380X is now the high end. It is just a mid-range card that is replacing a formerly high-end card. This happens every graphics card generation. GTX 960 and GTX 580, Radeon 7850 and Radeon 5870, the list goes on and on. Furthermore, the 380X has a considerable performance advantage over the 7970 despite having the same core count because core count is far from everything and many other things changed from Tahiti to Tonga/Antigua. The 380X is easily about 20% faster than the 7970.
  • eklipz330
    your perception of what is enough for 1080p may differ from the author's. think about it. my r9 290 could handle most games at 1080p above 60fps, and many of them above 100fps. that is a VERY capable card at 1080p, and probably more than enough for the average gamer. not everyone has the itch to max every setting out; i personally like lowering the resolution and increasing AA.
  • Eggz
    412399 said:
    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.


    That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max settings at low resolutions on current- and past-gen cards. If you take the chicken-and-egg idea to its logical end, then there would eventually be a game that not even the most powerful card could run, even at the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game.

    There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied.

    That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game to exist in the foreseeable future using a flagship graphics card (setting aside incompatibilities and non-graphics bottlenecks).

    I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.
  • red77star
    My HD7970 Crossfire setup still going strong. Every single game maxed on 1080p
  • spentshells
    I am disappoint. I was expecting the full 384-bit memory bus and substantially higher performance. I had been hyping this for a while as the reason for NV to release a 960 Ti, and it most certainly won't be.
  • blazorthon
    1406980 said:
    That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max setting at low resolutions on current- and past-gen cards. If you take the chicken and egg idea to its logical end, then there would eventually be a game that not even the most powerful card could not run at even at the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game. There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied. That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game to exist in the foreseeable future using a flagship graphics card (setting aside incompatibilities and non-graphics bottlenecks). I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.


    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


    Go back a few years, say to when the 7970 GHz edition launched so we have pretty much all of that generation's cards out and most driver kinks were worked out on both sides. The 7970 cards and the 680 were capable of running pretty much all games in 1440p in ultra and the 7850/650 Ti Boost could run pretty much all games in 1080p in ultra, all assuming there wasn't another bottleneck (such as a weak CPU).

    So, the current high-end cards were not the first cards to be adequate for 1440p, let alone 1080p. Today's top-end cards are merely the first to be able to run today's most intensive games at their heavier settings at 1080p and 1440p. This isn't just true to some extent; the older reviews prove it (have a look at some Radeon 7970 GHz Edition reviews for a good lineup). This is what I meant by chicken and the egg. Current top-end cards don't let you "forget about graphics bottlenecks at a higher standard resolution [1440p]" any more than top-end cards from previous generations did with corresponding games.

    I don't see how low resolutions, settings, and any performance limits related to them and CPUs are relevant to this.

    Anything with a resolution of one pixel might be useless for this topic. I don't think things like tessellation, AA, AF, and so on can be performed on a resolution of one pixel. In that case, yes, there might be a limit to how much performance can be necessary before it becomes arbitrary, but I say that out of not knowing what modern features can really do with only one pixel. With a real resolution, let's use a low one like 640x480, you can always apply more and more settings, filters, etc. until you bog down even Titan X. Whether or not a game happens to support applying enough of these to such a resolution is another matter, but it can be done if some dev wanted to make such a program. The real limit is at what point you're throwing more and more resources at the problem without a discernible improvement in visual quality. For example, we could make a game that runs at 640x480 with 1024x MSAA or something like that, but why bother when you won't see any benefit from a minuscule fraction of that?
  • psycher1
    So we're at least semi-agreeing that, limiting the discussion to modern titles and looking forward, cards like this article's 380X are most definitely NOT 1440p cards, and at best are 1080p cards that are already staring that limit in the face as well.

    With the 680 being talked about in the past tense so soon, I wonder if I didn't even underestimate how long my 970 will last.

    Anybody here think that dx12 and other such software improvements will help keep these cards relevant? Or are we still looking at a ~3 year upgrade cycle to just run games at decent settings, even with mostly top-of-the-line hardware today?
  • turkey3_scratch
    I don't think this card cuts it with the $230-240 price tag when a 380 can be found for $170. That is 35% more money for about 8% more performance. However, I would not be surprised if the 380X slowly drops below $200 and knocks the 380 out of sale. If the 380X were priced at $200, it would be a good deal.

    Sure, you can compare it to the fact that the 380 cost about $230 or so when it was released, but the 380 has been around a good deal of time and prices have dropped significantly.

    Also, why on earth are they using a PowerColor 390 and an MSI 970 rather than the MSI 390?
  • TbsToy
    Oh boy, another graphics card that comes to save the day for a non-question and a nonexistent need. Uhhhh, how much does it cost again?
    Walt Prill
  • turkey3_scratch
    All graphics cards are usually bad value when they're released. It's just like any product in this world. Give it 2 months and this card will be competing.
  • monsta
    another over-hyped release from AMD with lacklustre performance upgrades, this is embarrassing
  • eodeo
    You would do well to look at how sleek TechPowerUp's charts are and how neatly they're organized, per game and overall.

    More to the point of this post: you say that the 380X uses virtually the same idle power with two monitors, while TPU says it's the typical AMD crapscore it used to be - 3-4x more than it should be, and more than you say it is. Did they make a mistake, or did you?

    http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/21.html

    Also, while we're at it, why don't you mention how poor AMD's video-playback power draw is? Along with multi-monitor power draw, video playback has been a huge negative for AMD for the past 3 years - ever since Nvidia decided to fix it in 2012 on all their DX11 GPUs via drivers...