AMD Radeon R9 380X Nitro Launch Review

QHD (2560x1440) Gaming Results

AMD is making a lot of noise about the Radeon R9 380X being designed for 2560x1440. Consequently, the company can’t really blame us for running our benchmarks at QHD using the settings that PC gamers want to see. Since AMD also says its Radeon R9 380 is a true FHD-oriented graphics card, we need to examine how these claims hold up in light of the six to ten percent delta between the two cards.

The Witcher 3: Wild Hunt

The difference between AMD’s Radeon R9 380X and 380 is nine percent in this benchmark at QHD. That's up from six percent at FHD. The game certainly isn’t playable using either graphics card without significantly lowering some of the settings, though.

Grand Theft Auto V

Again, the Radeon R9 380X’s advantage over the 380 increases at the higher resolution. This time it rises from five to 11 percent. Forty FPS isn’t a great result, but it’s still considered playable. Sixty FPS could be achieved with a few lowered graphics settings.

Metro: Last Light

AMD’s new graphics card manages to hold onto the nine percent lead over the 380 that we saw at 1920x1080. We’re puzzled again by the fact that the theoretically slower Nvidia GeForce GTX 960 is able to keep up and finish right between AMD’s two similar boards. Without tessellation, AMD’s offerings would likely come out ahead, though.

Then again, the exact order doesn’t really matter since none of these cards provide a smooth gaming experience. At this resolution, you'd want to use the Medium graphics preset and a dialed-back tessellation setting to make AMD's Radeon R9 380X more playable.

BioShock Infinite

The Radeon R9 380X increases its lead over the 380 again, this time from six to nine percent. That still isn't noticeable when you actually play the game, though. Both graphics cards produce playable results, and the 380X’s frame times aren't appreciably better.
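
Since we keep coming back to frame times, it's worth remembering that frame time and frame rate are reciprocals: a 60 FPS target leaves a budget of roughly 16.7 ms per frame, while 40 FPS corresponds to 25 ms. The snippet below is purely illustrative, and the helper names are ours rather than part of any benchmark tool:

```python
# Illustrative only: frame rate and frame time are reciprocals.
# These helper names are ours, not part of any benchmarking tool.
def fps_to_frame_time_ms(fps: float) -> float:
    """Per-frame render budget in milliseconds at a given frame rate."""
    return 1000.0 / fps

def frame_time_ms_to_fps(frame_time_ms: float) -> float:
    """Frame rate implied by a single frame's render time."""
    return 1000.0 / frame_time_ms

print(f"{fps_to_frame_time_ms(60):.1f} ms per frame at 60 FPS")  # 16.7 ms
print(f"{fps_to_frame_time_ms(40):.1f} ms per frame at 40 FPS")  # 25.0 ms
```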

Tomb Raider

AMD's Radeon R9 380X is 13 percent faster than the X-less 380 in this benchmark at QHD, up from almost 10 percent in FHD. Tomb Raider also marks the first example of a test where the two cards yield a noticeably different subjective experience.

Battlefield 4 (Campaign)

Even though Nvidia’s GeForce GTX 960 beats the AMD Radeon R9 380 once again in Battlefield 4, the 380X beats it in turn to the tune of 10 percent. Unfortunately, none of the cards delivers playable performance. For that, you'd need to drop the quality preset a couple of notches.

Middle-earth: Shadow of Mordor

This is the first time the Radeon R9 380X’s advantage over the 380 actually shrinks. It was 16 percent at Full HD, and now it's 13 percent at QHD. Again, you'd need to compromise graphics quality to make Middle-earth playable at 2560x1440.

Thief

The Radeon R9 380X almost doubles its lead from eight to 15 percent. The difference still isn’t really all that noticeable during actual gameplay though, which is generally very choppy. Lowered graphics settings would give every contender a much-needed boost, of course.

Nvidia’s GeForce GTX 960 doesn’t stand a chance, likely due to its 2GB of GDDR5. Unfortunately, we didn’t have a model on-hand with more graphics memory.

Ashes of the Singularity

We’re again looking at the render times of individual frames across the different views. The total render time reflects how demanding each benchmark scene is.

We also report the number of CPU calls, along with the ratio of rendered frames.
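
To make those metrics concrete, here's a minimal sketch of how a list of per-frame render times reduces to the totals and averages we chart. The input format, one frame time in milliseconds per line, is an assumption for illustration; it is not the benchmark's actual log format.

```python
# Minimal sketch: reduce per-frame render times to summary metrics.
# The input format (one frame time in milliseconds per line) is an
# assumption for illustration, not Ashes of the Singularity's real log.
def summarize(path: str) -> None:
    with open(path) as f:
        times_ms = [float(line) for line in f if line.strip()]
    total_s = sum(times_ms) / 1000.0   # total render time for the scene
    avg_fps = len(times_ms) / total_s  # average frame rate over the run
    worst_ms = max(times_ms)           # slowest single frame
    print(f"{total_s:.1f} s total, {avg_fps:.1f} FPS average, "
          f"{worst_ms:.1f} ms worst frame")

summarize("frame_times.txt")  # hypothetical log file
```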

Bottom Line

We wouldn’t go so far as to call AMD's Radeon R9 380X unsuitable for 2560x1440, but there are certainly settings you won't get to enable if you try to make QHD happen. Maxed-out presets are simply too demanding. Drop the slider a few levels, though, and the new card should generally provide a playable experience.

The one problem we have with this situation is that you could say the same thing about AMD's cheaper Radeon R9 380. The gap between them is never large enough to matter in practice. Depending on the title and the graphics settings, the 380X might be a bit faster or a bit less slow, but that’s about it.

Comments (95)
  • ingtar33
    so full tonga, release date 2015; matches full tahiti, release date 2011.

    so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.
  • wh3resmycar
    this has all the features, freesync, trueaudio, etc.
  • logainofhades
    Features, and the power rating is lower: Tonga is 190W vs. 250W for Tahiti.
  • Eggz
    Seems underwhelming until you read the price. Pretty good for only $230! It's not that much slower than the 970, but it's still about $60 cheaper. Well placed.
  • chaosmassive
    been waiting for this card review, I saw photographer fingers on silicon reflection btw !
  • Onus
    Once again, it appears that the relevance of a card is determined by its price (i.e. price/performance, not just performance). There are no bad cards, only bad prices. That it needs two 6-pin PCIe power connections rather than the 8-pin plus 6-pin needed by the HD7970 is, however, a step in the right direction.
  • FormatC
    Quote:
    I saw photographer fingers on silicon


    I know, these are my fingers and my wedding ring. :P
    Call it a unique watermark. ;)
  • psycher1
    Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p.

    With my 2560*1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off maxed-out graphics settings in modern games, with The Witcher 3 only averaging about 35 FPS at medium-high according to GeForce Experience.

    If this is already the case, give it a year or two. Future-proofing does not mean you should need to consider SLI after only six months and a minor display upgrade.
  • Eggz
    psycher1 said:
    Honestly I'm getting a bit tired of people getting so over-enthusiastic about which resolutions their cards can handle. I barely think the 970 is good enough for 1080p. With my 2560*1080 (to be fair, 33% more pixels) panel and a 970, I can almost never pull off maxed-out graphics settings in modern games, with The Witcher 3 only averaging about 35 FPS at medium-high according to GeForce Experience. If this is already the case, give it a year or two. Future-proofing does not mean you should need to consider SLI after only six months and a minor display upgrade.


    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.
  • ByteManiak
    everyone is playing GTA V and Witcher 3 in 4K at 30 fps and i'm just sitting here struggling to get a TNT2 to run Descent 3 at 60 fps in 800x600 on a Pentium 3 machine
  • blazorthon
    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.
  • ErikVinoya
    With those temps and the backplate, I think with a bit of oil, I can make breakfast while running furmark on that
  • blazorthon
    ingtar33 said:
    so full tonga, release date 2015; matches full tahiti, release date 2011. so why did they retire tahiti 7970/280x for this? 3 generations of gpus with the same rough number scheme and same performance is sorta sad.


    I fail to see how lower power consumption, new features, higher memory capacity, and lower prices (both to manufacture and for us to buy) are sad. It isn't as if the 380X is now the high end. It is just a mid-range card that is replacing a formerly high-end card. This happens every graphics card generation. GTX 960 and GTX 580, Radeon 7850 and Radeon 5870, the list goes on and on. Furthermore, the 380X has a considerable performance advantage over the 7970 despite having the same core count, because core count is far from everything and many other things changed from Tahiti to Tonga/Antigua. The 380X is easily about 20% faster than the 7970.
  • eklipz330
    your perception of what is enough for 1080p may differ from the author's. think about it. my r9 290 could handle most games at 1080p above 60fps, and many of them above 100fps. that is a VERY capable card at 1080p, and probably more than enough for the average gamer. not everyone has the itch to max every setting out; i personally like lowering the resolution and increasing AA.
  • Eggz
    blazorthon said:
    The games we have today also happen to be the first games that pushed cards like the 7970 and the 680 to no longer be adequate for 1440p, let alone 1080p with settings maxed out. It's a pointless chicken and the egg argument.


    That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max settings at low resolutions on current- and past-gen cards. If you take the chicken and egg idea to its logical end, then there would eventually be a game that not even the most powerful card could run, even at the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game.

    There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied.

    That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game that exists in the foreseeable future, using a flagship graphics card (setting aside incompatibilities and non-graphics bottlenecks).

    I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.
  • red77star
    My HD7970 Crossfire setup still going strong. Every single game maxed on 1080p
  • spentshells
    I am dissapoint. I was expecting the full 384-bit memory bus and substantially higher performance. I had been hyping this for a while as the reason for NV to release a 960 Ti, and it most certainly won't be.
  • blazorthon
    Eggz said:
    That's probably true to a certain extent, but I'm not sure I entirely agree. You'll be able to run today's most demanding games, and future games, with max settings at low resolutions on current- and past-gen cards. If you take the chicken and egg idea to its logical end, then there would eventually be a game that not even the most powerful card could run, even at the lowest resolution possible. But that's not what happens. With a good CPU, there are cards that will be able to run anything at (for example) 640x480, no matter what settings, and no matter which game. There's just an upper limit to processing requirements based on resolution and a target frame rate. Games don't usually reach that upper limit, but there is one. Ray tracing might be the best real-world approximation of demanding the most out of a certain resolution and frame rate. Whatever that limit is, though, there will be a card that can handle it for lower resolutions. If that weren't true, then someone would be able to build a color switching app for one (1) pixel, a resolution of 1x1, that could bring down the Titan X to less than 60 fps at all times. I don't think that would be possible with proper coding applied. That plays out in the real world by certain cards being able to handle any game, at any setting, at a certain resolution while maintaining a targeted frame rate. I'd bet $1,000 that a resolution of 16x9 would be incapable of bringing the frame rate below 60 fps for any game that exists in the foreseeable future, using a flagship graphics card (setting aside incompatibilities and non-graphics bottlenecks). I could be totally wrong about there being an upper limit on processing requirements within a bound resolution and minimum frame rate, and I'm open to being shown why that upper limit theory isn't true. But I'm inclined to think it's true until I see exactly where I'm off.


    Quote:
    Yeah, I definitely think that the 980 ti, Titan X, FuryX, and Fury Nano are the first cards that adequately exceed 1080p. No cards before those really allow the user to forget about graphics bottlenecks at a higher standard resolution. But even with those, 1440p is about the most you can do when your standards are that high. I consider my 780 ti almost perfect for 1080p, though it does bottleneck here and there in 1080p games. Using 4K without graphics bottlenecks is a lot further out than people realize.


    Go back a few years, say to when the 7970 GHz Edition launched, so we have pretty much all of that generation's cards out and most driver kinks worked out on both sides. The 7970 cards and the 680 were capable of running pretty much all games at 1440p on ultra, and the 7850/650 Ti Boost could run pretty much all games at 1080p on ultra, all assuming there wasn't another bottleneck (such as a weak CPU).

    So, the current high-end cards were not the first cards to be adequate for 1440p, let alone 1080p. Today's top-end cards are merely the first able to run today's most intensive games at their heavier settings at 1080p and 1440p. This isn't just "true to some extent"; the older reviews prove it (have a look at some Radeon 7970 GHz Edition reviews for a good lineup). This is what I meant by chicken and the egg: current top-end cards don't let you "forget about graphics bottlenecks at a higher standard resolution [1440p]" any more than top-end cards from previous generations did with the games of their day.

    I don't see how low resolutions, settings, and any performance limits related to them and CPUs are relevant to this.

    Anything with a resolution of one pixel might be useless for this topic. I don't think things like tessellation, AA, AF, and so on can be performed on a resolution of one pixel. In that case, yes, there might be a limit to how much performance can be necessary before it becomes arbitrary, but I say that out of not knowing what modern features can really do with only one pixel. With a real resolution, let's use a low one like 640x480, you can always apply more and more settings, filters, etc. until you bog down even Titan X. Whether or not a game happens to support applying enough of these to such a resolution is another matter, but it can be done if some dev wanted to make such a program. The real limit is at what point you're throwing more and more resources at the problem without a discernible improvement in visual quality. For example, we could make a game that runs at 640x480 with 1024x MSAA or something like that, but why bother when you won't see any benefit from a minuscule fraction of that?
  • psycher1
    So we're at least semi-agreeing that, limiting the discussion to modern titles and looking forward, cards like this article's 380X are most definitely NOT 1440p cards, and at best are 1080p cards that are already staring that limit in the face as well.

    With the 680 being talked about in the past tense so soon, I wonder if I didn't overestimate how long my 970 will last.

    Anybody here think that DX12 and other such software improvements will help keep these cards relevant? Or are we still looking at a ~3-year upgrade cycle just to run games at decent settings, even with mostly top-of-the-line hardware today?
  • turkey3_scratch
    I don't think this card cuts it with the $230-240 price tag, when a 380 can be found for $170. That is 35% more money for about 8% more performance (see the quick check at the end of this comment). However, I would not be surprised if the 380X slowly drops below $200 and knocks the 380 out of sale. If the 380X were priced at $200, it would be a good deal.

    Sure, you can point out that the 380 cost about $230 when it was released, but the 380 has been around for a good deal of time and prices have dropped significantly.

    Also, why on earth are they using a Powercolor 390 and an MSI 970 rather than using the MSI 390?
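
    A quick sanity check of that math, using the prices quoted above (the variable names are just for illustration):

    ```python
    # Quick check of the price/performance math in this comment; the
    # prices and the ~8% performance delta are as quoted above.
    price_380x, price_380 = 230.0, 170.0
    premium = price_380x / price_380 - 1.0   # fractional price premium
    print(f"{premium:.0%} more money for ~8% more performance")
    # -> 35% more money for ~8% more performance
    ```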
  • TbsToy
    Oh boy, another graphics card that comes to save the day for a non-question and a nonexistent need. Uhhhh, how much does it cost again?
    Walt Prill
  • turkey3_scratch
    Graphics cards are usually bad value when they're released. It's just like any product in this world. Give it two months and this card will be competitive.
  • monsta
    Another overhyped release from AMD with lacklustre performance upgrades. This is embarrassing.
  • eodeo
    You would do well to look at how sleek TechPowerUp's charts are and how neatly they're organized, per game and overall.

    More to the point of this post: you say the 380X uses virtually the same idle power with two monitors, while TPU says it's the typical AMD crap score it used to be, 3-4x more than it should be and more than you say it is. Did they make a mistake, or did you?

    http://www.techpowerup.com/reviews/ASUS/R9_380X_Strix/21.html

    Also, while we're here, why not mention how poor AMD's video-playback power draw is? Along with multi-monitor power draw, video playback has been a huge negative for AMD for the past three years, ever since Nvidia decided to fix it in 2012 on all their DX11 GPUs via drivers...