Battlefield 1 Performance In DirectX 12: 29 Cards Tested

Mainstream PC, 1920x1080

We begin our adventure with a fairly mainstream system consisting of an AMD FX-8320 processor, MSI’s 990FXA-GD80 motherboard, 16GB of G.Skill DDR3 memory operating at 1866 MT/s, and a 500GB Crucial MX200 SSD.

Low Quality Preset

Across three generations of Nvidia architectures, dropping to Battlefield 1's Low quality preset ensures playable frame rates, provided you can tolerate the loss of fidelity.

One card seems to suffer more than the rest: Nvidia’s GeForce GTX 760. It’s not the only board with 2GB of GDDR5 memory, but it clearly hits a wall toward the end of our benchmark run. This is a pattern we’ll see more of as the graphics details get turned up, allowing us to draw our first conclusion of this little exercise: 2GB of memory isn’t enough for 1920x1080, even under Battlefield’s most relaxed preset.
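
For readers curious whether their own card is bumping into a similar memory ceiling, the sketch below polls VRAM usage once a second. It's a minimal illustration assuming an Nvidia GPU and the nvidia-ml-py (pynvml) package; the device index and sampling interval are arbitrary choices, and AMD cards would need a different tool.

    # Minimal VRAM usage logger (assumes an Nvidia GPU and the
    # nvidia-ml-py package: pip install nvidia-ml-py).
    import time
    import pynvml

    pynvml.nvmlInit()
    handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # assumes the first GPU

    try:
        while True:
            info = pynvml.nvmlDeviceGetMemoryInfo(handle)
            # Report used vs. total video memory in MB
            print(f"VRAM used: {info.used / 2**20:.0f} / {info.total / 2**20:.0f} MB")
            time.sleep(1.0)  # arbitrary 1-second sampling interval
    except KeyboardInterrupt:
        pynvml.nvmlShutdown()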

Right out of the gate, we see AMD’s Radeon RX 480 post higher average and minimum frame rate figures than Nvidia’s GeForce GTX 1060 6GB. In fact, even the Radeon RX 470 is faster in both metrics.

At the other end of the spectrum, our frame rate over time charts show 2GB Radeon cards running into performance trouble, characterized by low minimums at the beginning of our run. This typically happens when we skip part of the cut scene, causing the level to start as assets are still being loaded. However, you'll notice it's most prevalent on our mainstream test PC. The issue is less pronounced on the Core i7-based box.

Nevertheless, our unevenness metric tells us all of these cards facilitate a fairly smooth experience.
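
As an aside, every figure in these charts (average, minimum, frame rate over time, unevenness) can be derived from a per-frame time log. The sketch below shows one way to compute such metrics from a PresentMon-style CSV; the MsBetweenPresents column name follows PresentMon's convention, and the percentile-based unevenness formula is an illustrative assumption rather than the exact metric used here.

    # Sketch: deriving frame rate metrics from a per-frame log, e.g. a
    # PresentMon CSV whose MsBetweenPresents column holds each frame's
    # duration in milliseconds. The unevenness formula is illustrative.
    import csv

    def load_frame_times(path):
        with open(path, newline="") as f:
            return [float(row["MsBetweenPresents"]) for row in csv.DictReader(f)]

    def fps_metrics(frame_times_ms):
        seconds = [ms / 1000.0 for ms in frame_times_ms]
        avg_fps = len(seconds) / sum(seconds)   # frames divided by elapsed time
        min_fps = 1.0 / max(seconds)            # slowest single frame
        ordered = sorted(frame_times_ms)
        median = ordered[len(ordered) // 2]
        p99 = ordered[min(len(ordered) - 1, int(len(ordered) * 0.99))]
        unevenness = p99 / median               # slow outliers vs. typical frame
        return avg_fps, min_fps, unevenness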

Medium Quality Preset

Although the GeForce GTX 970 and 1060 6GB handle a shift to medium quality gracefully enough, the rest of the field gets hammered into a fairly narrow performance range. Further, we now have three 2GB cards suffering severe performance loss at certain points during the benchmark sequence. For all intents and purposes, the GeForce GTX 760, 770, and 1050 are already unplayable (despite the 1050’s reasonable average frame rate).

AMD’s line-up doesn’t escape unscathed, either. The Radeon RX 460, R9 270, and HD 7790 are all quite a bit slower than the rest of the field. However, they’re much more consistent, so our unevenness measurement shows them to be playable.

High Quality Preset

Pushing graphics quality up another level further separates the 2GB and 4/6GB GeForce cards. Everything from a four-year-old GeForce GTX 760 to a modern 1050 is utterly unplayable at 1920x1080. The higher-end 1050 Ti 4GB might be considered marginal, while the 1060 6GB and 970 cruise along at admirable frame rates.

In comparison, the Radeon line-up scales down in a much more linear way, with none of the cards (even the 2GB ones) falling off an obvious performance cliff. A fairly new Radeon RX 460 is smooth enough, as is the 2013-era Radeon R9 280X.

Ultra Quality Preset

Although we gathered performance data using the Low, Medium, and High quality presets to establish trends and patterns, Ultra is the gold standard we shoot for when we recommend graphics cards to our readers. And at 1920x1080, only two of the GeForce cards we're testing are really ready for Battlefield 1's Ultra preset. Obviously, there are higher-end boards capable of going even faster. But if you own an FX-8320 or some other mainstream gaming platform, this is the scope of GPUs that makes sense. As for the rest of the field: those single-digit frame rates, those frame time results, that unevenness chart. Wow.

The Radeon RX 480 and R9 390 both outperform Nvidia’s GeForce GTX 1060 6GB, and the Radeon RX 470 slides past the GTX 970. But even AMD’s older Radeon R9 380 and 280X are relatively playable.


47 comments
  • envy14tpe
    Wow. The amount of work that went into putting this together! Thanks, from all the BF1 gamers out there. You knocked my socks off, and you're pushing me to upgrade my GPU.
  • computerguy72
    Nice article. Would have been interesting to see the 1080 Ti and the Ryzen 1800X mixed in there somewhere. I have a 7700K and a 980 Ti; it would be good info to get some direction on where to take my hardware next. I'm sure other people might find that interesting too.
  • Achaios
    Good job, just remember that these "GPU showdowns" don't tell the whole story, because cards are running at stock, and there are GPUs that can get huge overclocks, thus performing significantly better.

    Case in point: GTX 780TI

    The 780 Ti featured here runs at stock, which was an 875 MHz base clock and 928 MHz boost clock, whereas third-party GPUs ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you aren't seeing here at all.
  • xizel
    Great write-up, just a shame you didn't use any i5 CPUs. I would have really liked to see how an i5-6600K competes with its 4 cores against the HT i7s.
  • Verrin
    Wow, impressive results from AMD here. You can really see that Radeon FineWine™ tech in action.
  • And then you run in DX11 mode and it runs faster than DX12 across the board. Thanks for the effort you put into this, but it's rather pointless, since DX12 has been nothing but a pile of crap.
  • pokeman
    Why does my 680 OC 2GB SLI setup run this at 100Hz 3440x1440? 2133 G.Skill, 4770K @ 4.2GHz.
  • NewbieGeek
    @XIZEL My i5-6600K @ 4.6GHz and RX 480 get 80-90 fps at max settings on all 32v32 multiplayer maps, with very few spikes either up or down.
  • ohim
    780 Ti below an R9 290, 3 years down the road...
  • Jupiter-18
    Fascinating stuff! Love that you're still including the older models in your benchmarks; it makes for great info for a budget gamer like myself! In fact, this may help me determine what goes in the budget build I'm working on right now, which was going to have dual 290Xs (preferably 8GB if I can find them), but now might have something else.
  • dfg555
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds, yet the game utilizes way more than 4GB of VRAM. HBM doing its wonders.

    Also, what happened to Kepler? RIP 780 Ti vs. 290X.
  • Achaios
    Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores 13540 in 3DMark Fire Strike Graphics, just 20 points or so less than the R9 390X, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB.
    http://imgur.com/KIP0MRt 3DMark Fire Strike results here: https://www.futuremark.com/hardware/gpu
  • sunny420
    Awesome work folks! The data!!
    The only thing I felt was missing was what Xizel mentioned: it would have been great to see an i5 included in the Scaling: CPU Core Count chart. All of the i7s perform similarly, with only one i3 outlier among the data points. It would have been nice to see a middle-of-the-road offering in one of the i5s.
  • damric
    Should have kept Hyper-Threading enabled on the i3, since the whole point of the i3's existence is Hyper-Threading.

    1st gen GCN really pulled ahead of Kepler over the years.
  • atomicWAR
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many forum posts I've responded to from i5 owners whose troubleshooting ended up pointing to a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) are seeing 100% CPU usage or close to it (again, the worst of it is in multiplayer). Ultimately, I feel like this article indirectly says 4C/4T is enough, when everyday posts in the forums say the opposite. While I know you could never get a fully repeatable benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to be more about the reviewer's impression of how smooth gameplay is, but doing some benchmarks that capture CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out if their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.
  • Kawi6rr
    Wow a lot of work went into this very well done! Go AMD, I like how they get better with age lol. Looks like my next card will be the new 580's coming out.
  • thefiend1
    Does increasing the quality settings help the color banding in the gradients of the smoke, fog, etc? On XB1 the banding is horrible in some cases and im curious if that issue is the same on PC.
  • David_693
    Would have also liked to have seen the AMD 295x2 in the mix as well...
  • MaCk0y
    146991 said:
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many forum posts I've responded to from i5 owners whose troubleshooting ended up pointing to a CPU bottleneck in this game, especially in multiplayer. [...]

    I agree. My Core i5-4690K at 4.8GHz sits between 95-100% usage in multiplayer.
  • DerekA_C
    I get about 72% usage on my 4790K at 4.4GHz on a 64-player multiplayer server.
  • IceMyth
    Just a question: why didn't you test the RX 480 in CrossFire, since the price of two RX 480s is the same as one 1080 Ti? That would be interesting, and I think it's the scenario AMD had in mind when they launched their RX family.
  • cryoburner
    1328515 said:
    Case in point: GTX 780 Ti. The 780 Ti featured here runs at stock, which was an 875 MHz base clock and 928 MHz boost clock, whereas third-party GPUs ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you aren't seeing here at all.
    1328515 said:
    Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores 13540 in 3DMark Fire Strike Graphics, just 20 points or so less than the R9 390X, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB. [...]


    The performance difference isn't nearly that great. I had a look at GTX 780 Ti "GHz Edition" reviews, and benchmarks showed it performing around 15% faster than a stock 780 Ti when it wasn't CPU-limited. 30% higher clocks do not necessarily equal 30% more performance. Assuming the cards used in these benchmarks were at stock clocks, the best you could expect from the GHz Edition would be right around the GTX 970's performance level at anything above Low settings.

    Also, it should be pointed out that most 780 Tis didn't run anywhere near those clocks. You can't take the highest-overclocked version of a graphics card and imply that was the norm for third-party cards. And if we take into account overclocked versions of the other cards, the overall standings probably wouldn't change much. The 780 Ti likely just isn't able to handle DX12 as well as these newer cards, particularly AMD's.

    It might have been nice if this performance comparison also tested DX11 mode, though, since I know Nvidia's cards took a performance hit in DX12 back at the game's launch. I was also a bit surprised to see how poorly Nvidia's 2GB cards fared here, while AMD's seemed to handle the lack of VRAM more gracefully. The 2GB GTX 1050 dropped below 30 fps for an extended length of time even at Medium settings, and all of Nvidia's 2GB cards plummeted to single digits at anything higher than that. Meanwhile, the 2GB Radeons stayed above 30 fps even at High settings. It kind of makes me wonder how these 3GB GTX 1060s will fare a year or so from now, especially when you consider that the RX 480s and even 470s all come equipped with at least 4GB.
  • falchard
    DX12 games tend to scale across a wider range of GPUs better than previous generations did. However, this may come down to early-adopter developers unlocking everything.
  • renz496
    1048889 said:
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds, yet the game utilizes way more than 4GB of VRAM. HBM doing its wonders. Also, what happened to Kepler? RIP 780 Ti vs. 290X.


    With Fury, AMD needed to make special optimizations so VRAM usage didn't exceed the 4GB capacity. Something similar can also be done with GDDR5 memory if you want to. I have a GTX 660 and a GTX 960 (both 2GB models). At the same graphics settings, the 660 will use much less VRAM than the GTX 960. That's because the GTX 660 has a weird memory configuration, and Nvidia, for their part, tries to fit data into the first 1.5GB portion first. That's why AMD created HBCC for Vega, so they no longer need to tweak VRAM usage on a per-game basis.

    As for what happened with the 780 Ti vs. the 290X: that's what happens when you win the majority of console hardware contracts. But Nvidia's Kepler is still often very competitive with its respective Radeon counterparts in titles that are PC-exclusive or come to PC first and consoles second. Take these for example:

    http://gamegpu.com/rts-/-%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B5%D0%B3%D0%B8%D0%B8/sid-meier-s-civilization-vi-test-gpu

    http://gamegpu.com/action-/-fps-/-tps/shadow-warrior-2-test-gpu

    Nvidia tried, to a certain extent, to fix Kepler's issues with Maxwell, but they knew that wasn't going to be enough while AMD kept steering more and more game development toward their hardware's core strengths through new console design wins. That's why Nvidia is back in the console business with the Nintendo Switch.