Battlefield 1 Performance In DirectX 12: 29 Cards Tested

High-End PC, 2560x1440

Low Quality Preset

Our 2560x1440 results compare nine cards from AMD and nine from Nvidia. Even then, we had to cut some of the cards we benchmarked; there are simply too many viable options across the range of quality settings that might make sense at this resolution on a Core i7-6700K-based platform.

Dropping all the way to Low quality, a Titan X continues to peg Battlefield 1’s 200 FPS ceiling, with GeForce GTX 1080 not far behind.

Notice that the GeForce cards’ minimum frame rates are, at most, 20 FPS or so behind the averages. Flip over to the frame rate over time chart, and you’ll see relatively consistent trend lines. We’ll compare those results to AMD’s Radeons in the next set of charts because they differ from each other quite a bit.
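Both figures fall straight out of the raw frame-time log a capture tool records. As a rough illustration of how an average and a minimum frame rate are derived from such a log (the file name and one-value-per-line format here are hypothetical stand-ins, not our actual tooling):

```python
# Sketch: derive average and minimum FPS from a frame-time capture.
# Assumes a plain-text log with one frame time per line, in milliseconds;
# the file name and format are hypothetical, not real capture output.
def fps_stats(path):
    with open(path) as f:
        frame_times_ms = [float(line) for line in f if line.strip()]
    total_seconds = sum(frame_times_ms) / 1000.0
    avg_fps = len(frame_times_ms) / total_seconds  # frames rendered per second
    min_fps = 1000.0 / max(frame_times_ms)         # the single slowest frame
    return avg_fps, min_fps

avg, low = fps_stats("gtx1080_low_2560x1440.txt")
print(f"Average: {avg:.1f} FPS, minimum: {low:.1f} FPS")
```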

In any case, even a GeForce GTX 780 Ti averages over 100 FPS at QHD. Playability isn’t an issue on a fast card using this game’s Low preset.

AMD’s last few generations also kick back average frame rates in excess of 100 FPS. But notice their minimums are all over the place. The average frame rate over time chart shows the HBM-equipped Fury X and Fury launching right into our benchmark sequence without missing a beat. But the boards with GDDR5 take a few seconds to get up to speed. The only exception is Radeon RX 470, which starts higher than several faster cards, but ultimately remains the chart’s slowest contender.

Certain events trigger frame time spikes across the Radeon line-up. We’re thinking these correspond to explosions during our run that “shake” the camera. The GeForce cards experience a similar phenomenon, though the effects don’t appear as pronounced or as frequent.
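Picking those spikes out of a capture yourself is straightforward: flag any frame that takes dramatically longer than its neighbors. A minimal sketch, using an arbitrary twice-the-median threshold of our own choosing rather than any standard cutoff:

```python
import statistics

def find_spikes(frame_times_ms, factor=2.0):
    """Return (index, frame time) pairs exceeding `factor` times the median.
    The 2x threshold is an arbitrary illustration, not an industry standard."""
    median = statistics.median(frame_times_ms)
    return [(i, t) for i, t in enumerate(frame_times_ms) if t > factor * median]

# A steady ~8 ms cadence with one explosion-like hitch at frame 3:
print(find_spikes([8.1, 8.3, 8.2, 25.7, 8.4, 8.2]))  # -> [(3, 25.7)]
```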

Medium Quality Preset

A step up to the Medium preset has a big impact on how Battlefield 1 looks, and it affects performance just as significantly. Even still, our slowest card, the GeForce GTX 780 Ti, keeps its nose above 60 FPS through the whole run. We’re waiting to see if the 780 Ti’s 3GB of memory becomes a more prominent bottleneck at higher detail settings or resolutions. But for now, a $500 GeForce GTX 1080 is almost exactly twice as fast as the 780 Ti, which sold for $700 three years ago.

More demanding detail settings smooth out the performance inconsistency most Radeon cards experienced during the first few seconds of our benchmark sequence.

Both Fiji-based boards enjoy a significant advantage over the rest of AMD’s portfolio. The Radeon R9 Fury X does battle with Nvidia’s GeForce GTX 1070, while the vanilla Fury is a bit faster than GeForce GTX 980 Ti.

Back in 2015, GeForce GTX 980 Ti was generally quicker than Radeon R9 Fury X. So it’s a pretty big deal that two years later, AMD beats that same card with its Radeon R9 Fury.

Meanwhile, the Radeon RX 480 8GB is quicker than the GeForce GTX 980 and GTX 1060 6GB.

High Quality Preset

Gamers sinking big bucks into high-end hardware want to see games the way their developers intended, with graphics quality options as high as possible. By simply dialing up to the High preset, rather than Medium, a mid-range card like GeForce GTX 1060 6GB sheds ~31% of its average frame rate, dropping from 84.5 to 58.1 FPS. With that said, even a GeForce GTX 780 Ti maintains >40 FPS through our run.
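That ~31% figure is simply the relative change between the two preset averages; to reproduce the arithmetic:

```python
# Relative frame-rate drop when moving from the Medium to the High preset.
medium_avg, high_avg = 84.5, 58.1  # GTX 1060 6GB averages from our charts
drop_pct = (medium_avg - high_avg) / medium_avg * 100
print(f"{drop_pct:.1f}% slower")   # -> 31.2% slower
```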

AMD’s fastest cards can’t compete with Nvidia’s, but the Radeon R9 Fury X does fare well against the GeForce GTX 1070. Unfortunately, Fiji-based boards are no longer readily available.

The Ellesmere-based Radeon RX 480 and 470 are, though. Both serve up playable performance at 2560x1440 using Battlefield 1’s High preset. Radeon RX 480 posts frame rates similar to a GeForce GTX 980's, narrowly beating the GTX 1060 6GB.

Our frame time over time charts show both HBM-equipped Fiji boards enduring small frame time spikes at regular intervals, similar to what we just saw from Nvidia’s GeForce GTX 780 Ti. These artifacts are interesting because they persist as we shift to Ultra quality, but are deemphasized as other influences cause much more significant frame time variance. Those aren’t the only 4GB cards we’re testing, so it’s not clear what imposes the evenly-spaced spikes.

Ultra Quality Preset

Our unevenness index makes the case that all of these GeForce cards serve up consistent-enough performance to be considered playable. The GeForce GTX 1060 6GB, 980 Ti, 970, and 780 Ti incur some fairly significant frame time spikes at similar points in our benchmark run. Incidentally, those are the cards at the bottom of the aforementioned index, which reflects smoothness.
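We won't restate the index's exact formula here, but the idea is to score frame-to-frame consistency rather than raw speed. One plausible stand-in (our assumption for illustration, not the published definition) is the mean absolute change between consecutive frame times, normalized by the average frame time:

```python
def unevenness(frame_times_ms):
    """A stand-in smoothness score (an illustrative assumption, not the
    article's actual formula): mean absolute frame-to-frame delta divided
    by the mean frame time. 0 means perfectly even pacing; higher means
    more perceptible stutter."""
    deltas = [abs(b - a) for a, b in zip(frame_times_ms, frame_times_ms[1:])]
    mean_frame_time = sum(frame_times_ms) / len(frame_times_ms)
    return (sum(deltas) / len(deltas)) / mean_frame_time

print(unevenness([8.0, 8.0, 8.0, 8.0]))    # 0.0 -> perfectly smooth
print(unevenness([8.0, 16.0, 8.0, 16.0]))  # ~0.67 -> visible stutter
```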

The Fiji-based cards continue beating GeForce GTX 980 Ti, but the R9 Fury X now shows up behind GeForce GTX 1070.

Big frame time variance at the start of our run takes a toll on minimum frame rates and affects the smoothness metric, where five Radeon cards demonstrate noticeable stutter. Even still, most cards are playable.

MORE: Best Graphics Cards

MORE: Desktop GPU Performance Hierarchy Table

MORE: All Graphics Content

47 comments
  • envy14tpe
    Wow. The amount of work in putting this together. Thanks, from all the BF1 gamers out there. You knocked my socks off, and are pushing me to upgrade my GPU.
  • computerguy72
    Nice article. It would have been interesting to see the 1080 Ti and the Ryzen 1800X mixed in there somewhere. I have a 7700K and a 980 Ti; it would be good info to get some direction on where to take my hardware next. I'm sure other people might find that interesting too.
  • Achaios
    Good job, just remember that these "GPU showdowns" don't tell the whole story b/c cards are running at stock, and there are GPUs that can take huge overclocks, thus performing significantly better.

    Case in point: GTX 780 Ti

    The 780 Ti featured here runs at stock, which was an 875 MHz base clock and a 928 MHz boost clock, whereas the 3rd-party GPUs produced ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you ain't seeing here at all.
  • xizel
    Great write-up, just a shame you didn't use any i5 CPUs. I would have really liked to see how an i5-6600K competes with its four cores against the HT i7s.
  • Verrin
    Wow, impressive results from AMD here. You can really see that Radeon FineWine™ tech in action.
  • And then you run in DX11 mode and it runs faster than DX12 across the board. Thanks for the effort you put into this, but it's rather pointless since DX12 has been nothing but a pile of crap.
  • pokeman
    Why does my 680 OC 2GB SLI run this at 100Hz 3440x1440? 2133 G.Skill, 4770K @ 4.2GHz
  • NewbieGeek
    @xizel My i5-6600K @ 4.6GHz and RX 480 get 80-90 FPS at max settings on all 32v32 multiplayer maps, with very few spikes either up or down.
  • ohim
    780 Ti below an R9 290, three years down the road...
  • Jupiter-18
    Fascinating stuff! Love that you are still including the older models in your benchmarks; it makes for great info for a budget gamer like myself. In fact, this may help me determine what goes in the budget build I'm working on right now. I was going to use dual 290Xs (preferably 8GB if I can find them), but now I might go with something else.
  • dfg555
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds yet the game is utilizing way more than 4GB of VRAM. HBM doing its wonders.

    Also, what happened to Kepler? RIP 780 Ti vs 290X.
  • Achaios
    Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores a 13540 3DMark Fire Strike graphics score, just 20 marks or so less than the R9 390X, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB.
    http://imgur.com/KIP0MRt 3DMark Fire Strike results here: https://www.futuremark.com/hardware/gpu
  • sunny420
    Awesome work folks! The data!!
    The only thing I felt was missing was what Xizel mentioned: it would have been great to see an i5 included in the Scaling: CPU Core Count chart. All of the i7s perform similarly, with only one i3 outlier among the data points. It would have been nice to see a middle-of-the-road offering in one of the i5s.
  • damric
    Should have kept Hyper-Threading enabled on the i3, since the whole point of the i3's existence is Hyper-Threading.

    1st gen GCN really pulled ahead of Kepler over the years.
  • atomicWAR
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many posts in the forums I have responded to with i5s claiming/troubleshooting issues that ended up being a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) are getting 100% CPU usage or close to it (again, the worst of it was in multiplayer). Ultimately I feel like this article indirectly says 4C/4T is enough when everyday posts in the forums say the opposite. While I know you could never get a fully accurate benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to be more about the reviewer's impression of how smooth gameplay is, but doing some benchmarks that include CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out whether their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.
  • Kawi6rr
    Wow, a lot of work went into this, very well done! Go AMD, I like how they get better with age, lol. Looks like my next card will be one of the new 580s coming out.
  • thefiend1
    Does increasing the quality settings help the color banding in the gradients of the smoke, fog, etc.? On XB1 the banding is horrible in some cases, and I'm curious if that issue is the same on PC.
  • David_693
    Would also have liked to see the AMD R9 295X2 in the mix as well...
  • MaCk0y
    atomicWAR said:
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many posts in the forums I have responded to with i5s claiming/troubleshooting issues that ended up being a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) are getting 100% CPU usage or close to it (again, the worst of it was in multiplayer). Ultimately I feel like this article indirectly says 4C/4T is enough when everyday posts in the forums say the opposite. While I know you could never get a fully accurate benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to be more about the reviewer's impression of how smooth gameplay is, but doing some benchmarks that include CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out whether their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.


    I agree. My Core i5-4690K at 4.8GHz sits between 95-100% usage in multiplayer.
  • DerekA_C
    I get about 72% usage on my 4790K at 4.4GHz on a 64-player multiplayer server.
  • IceMyth
    Just a question: why didn't you try the RX 480 in CrossFire, since the price of two RX 480s is the same as one 1080 Ti? That would be interesting, and I think this is the scenario AMD had in mind when they launched their RX family.
  • cryoburner
    Achaios said:
    Case in point: GTX 780 Ti. The 780 Ti featured here runs at stock, which was an 875 MHz base clock and a 928 MHz boost clock, whereas the 3rd-party GPUs produced ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you ain't seeing here at all.
    Achaios said:
    Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores a 13540 3DMark Fire Strike graphics score, just 20 marks or so less than the R9 390X, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB. http://imgur.com/KIP0MRt 3DMark Fire Strike results here: https://www.futuremark.com/hardware/gpu


    The performance difference isn't nearly that great. I had a look at the GTX 780 Ti "GHz Edition" reviews, and benchmarks showed it performing around 15% faster than a stock 780 Ti when it wasn't CPU limited. 30% higher clocks don't necessarily equal 30% more performance. Assuming the cards used in these benchmarks were at stock clocks, the best you could expect from the GHz Edition would be right around the GTX 970's performance level at anything above "low" settings.

    Also, it should be pointed out that most 780 Tis didn't run anywhere near those clocks. You can't take the highest-OCed version of a graphics card and imply that was the norm for third-party cards. And if we take into account overclocked versions of the other cards, then the overall standings probably wouldn't change much. The 780 Ti likely just isn't able to handle DX12 as well as these newer cards, particularly AMD's.

    It might have been nice if this performance comparison also tested DX11 mode, since I know Nvidia's cards took a performance hit in DX12 back at the game's launch. I was also a bit surprised to see how poorly Nvidia's 2GB cards fared here, while AMD's seemed to handle the lack of VRAM more gracefully. The 2GB GTX 1050 dropped below 30 FPS for an extended length of time even at Medium settings, and all of Nvidia's 2GB cards plummeted to single digits at anything higher than that. Meanwhile, the 2GB Radeons stayed above 30 FPS even at High settings. It kind of makes me wonder how these 3GB GTX 1060s will fare a year or so from now, especially when you consider that the RX 480s and even 470s all come equipped with at least 4GB.
  • falchard
    DX12 games tend to scale to multiple GPUs better than previous generations did. However, this may come down to early-adopter developers unlocking everything.
  • renz496
    dfg555 said:
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds yet the game is utilizing way more than 4GB of VRAM. HBM doing its wonders. Also, what happened to Kepler? RIP 780 Ti vs 290X.


    With Fury, AMD needed to make special VRAM optimizations so usage did not exceed the 4GB capacity. Something similar can also be done with GDDR5 memory if you want to. I have a GTX 660 and a GTX 960 (both 2GB models). At the same graphics settings, the 660 will use much less VRAM than the GTX 960. That's because the GTX 660 has a weird memory configuration, and Nvidia for their part tries to fit data into the first 1.5GB portion first. That's why AMD created HBCC with Vega, so they no longer need to tweak VRAM usage on a per-game basis.

    As for what happened with 780 Ti vs. 290X, that's what happens when you win the majority of console hardware contracts. But Nvidia's Kepler is still often very competitive with its respective Radeon counterparts in titles that are PC exclusives or that come out on PC first and consoles second. Take these for example:

    http://gamegpu.com/rts-/-%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B5%D0%B3%D0%B8%D0%B8/sid-meier-s-civilization-vi-test-gpu

    http://gamegpu.com/action-/-fps-/-tps/shadow-warrior-2-test-gpu

    Nvidia, to a certain extent, tried to fix Kepler's issues with Maxwell, but they knew that wasn't going to be enough with AMD directing more and more game development toward its hardware's core strengths via new console hardware wins. That's why Nvidia is back in the console business with the Nintendo Switch.