Battlefield 1 Performance In DirectX 12: 29 Cards Tested

High-End PC, 3840x2160

Low Quality Preset

There aren’t as many cards capable of enjoyable performance at 3840x2160, so we combine the GeForce and Radeon products into one set of graphs.

Battlefield 1’s Low quality preset gives up a lot of the game’s great visuals, so we can’t imagine anyone would willingly combine high-end graphics and an expensive monitor, only to compromise detail for performance. It’s more likely that you’d dial down to 2560x1440 and maintain Medium or High quality. Nevertheless, our purpose here is establishing a baseline whereby all eight contenders serve up playable performance, and that’s exactly what we see before intensifying the workload.

Medium Quality Preset

The Radeon R9 Fury X and Fury both stumble as the benchmark begins, causing their low minimum frame rates. After recovering, though, the Fury X beats GeForce GTX 1070, while AMD’s vanilla Fury slots in above GeForce GTX Titan X (Maxwell) and 980 Ti.

Though all of the results might technically be considered usable, we wouldn’t be thrilled about playing at Medium details on the GTX 980 at frame rates in the mid-40s. QHD at the Ultra preset would look and feel much better.

High Quality Preset

Severe slow-downs at the beginning of our benchmark aren’t exclusive to Radeon cards. Here, the GeForce GTX 980 must sputter to life, causing a 13 FPS minimum frame rate. Regardless, once we select Battlefield 1’s High preset, most of these cards toe the line just above or below 40 FPS. Only the GeForce GTX 1080 and Titan X (and the 1080 Ti, we assume) run smoothly enough to keep most gamers happy.

Ultra Quality Preset

The most demanding combination we test, 3840x2160 at the Ultra preset, is too severe for most single-GPU solutions. A Titan X or GeForce GTX 1080 Ti keeps you above 50 FPS through our benchmark run, and the GeForce GTX 1080 maintains at least 40 FPS. Everything else falls into the 30s and below.


47 comments
  • envy14tpe
    Wow. The amount of work in putting this together. Thanks, from all the BF1 gamers out there. You knocked my socks off, and are pushing me to upgrade my GPU.
  • computerguy72
    Nice article. It would have been interesting to see the 1080 Ti and the Ryzen 1800X mixed in there somewhere. I have a 7700K and a 980 Ti, and it would be good to get some direction on where to take my hardware next. I'm sure other people might find that interesting too.
  • Achaios
    Good job, just remember that these "GPU showdowns" don't tell the whole story because the cards are running at stock, and there are GPUs that can get huge overclocks and thus perform significantly better.

    Case in point: GTX 780TI

    The 780 Ti featured here runs at stock, which was an 875 MHz base clock and a 928 MHz boost clock, whereas the third-party GPUs produced ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you aren't seeing here at all.
  • xizel
    Great write-up, just a shame you didn't use any i5 CPUs. I would have really liked to see how an i5-6600K competes with its 4 cores against the HT i7s.
  • Verrin
    Wow, impressive results from AMD here. You can really see that Radeon FineWine™ tech in action.
  • And then you run in DX11 mode and it runs faster than DX12 across the board. Thanks for the effort you put into this, but it's rather pointless since DX12 has been nothing but a pile of crap.
  • pokeman
    Why does my 680 OC 2GB SLI setup run this at 100 Hz at 3440x1440? 2133 G.Skill, 4770K at 4.2 GHz.
  • NewbieGeek
    @XIZEL My i5-6600K @ 4.6 GHz and RX 480 get 80-90 FPS at max settings on all 32v32 multiplayer maps, with very few spikes either up or down.
  • ohim
    780 Ti below an R9 290 three years down the road...
  • Jupiter-18
    Fascinating stuff! Love that you are still including the older models in your benchmarks; it makes for great info for a budget gamer like myself. In fact, this may help me determine what goes in the budget build I'm working on right now. I was going to use dual 290Xs (preferably 8GB if I can find them), but now I might go with something else.
  • dfg555
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds, yet the game is utilizing way more than 4GB of VRAM. HBM doing its wonders.

    Also, what happened to Kepler? RIP 780 Ti vs. 290X.
  • Achaios
    Like I wrote above, the GTX 780 Ti they have here is running at stock, which was 875/928 MHz. A third-party GTX 780 Ti such as the Gigabyte GTX 780 Ti GHz Edition, which boosts to 1240 MHz, scores 13540 in 3DMark Fire Strike graphics, which is just 20 points or so less than the R9 390X's Fire Strike result, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB.
    http://imgur.com/KIP0MRt (3DMark Fire Strike results here: https://www.futuremark.com/hardware/gpu)
  • sunny420
    Awesome work folks! The data!!
    The only thing I felt was missing was, as Xizel mentioned, an i5 in the Scaling: CPU Core Count chart. All of the i7s perform similarly, with only one i3 outlier among the data points. It would have been nice to see a middle-of-the-road offering in one of the i5s.
  • damric
    Should have kept Hyper-Threading enabled on the i3, since the whole point of the i3's existence is Hyper-Threading.

    1st gen GCN really pulled ahead of Kepler over the years.
  • atomicWAR
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many forum posts I have responded to from i5 owners whose troubleshooting ended up pointing to a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) were seeing 100% CPU usage or close to it (again, the worst of it was in multiplayer).

    Ultimately, I feel like this article indirectly says 4C/4T is enough, when everyday posts in the forums say the opposite. While I know you could never get a fully repeatable benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to lean on the reviewer's impression of how smooth gameplay is, but benchmarks showing CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out whether their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.
  • Kawi6rr
    Wow, a lot of work went into this, very well done! Go AMD, I like how they get better with age, lol. Looks like my next card will be one of the new 580s coming out.
  • thefiend1
    Does increasing the quality settings help the color banding in the gradients of the smoke, fog, etc.? On XB1 the banding is horrible in some cases, and I'm curious if that issue is the same on PC.
  • David_693
    Would have also liked to see the AMD 295X2 in the mix...
  • MaCk0y
    atomicWAR said:
    I would have liked to see some actually i5s in the mix. While I get disabling hyper-threading emulates them to a degree, I can't tell you how many posts in the forums I have responded to with i5s claiming/or troubleshooting that ended up with a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) getting 100% CPU usage or close (again the worst of it was in multiplayer). Ultimately I feel like this article says in-directly 4C/4T is enough when every day posts in the forums say the opposite. While I know you could never get a fully accurate benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to be more about the reviewers impression of how smooth game play is but doing some benchmarks that have CPU utilization and frame variance/ frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out if their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info but I feel it only scratched the surface of what gamers are dealing with in real life.


    I agree. My Core i5-4690K at 4.8 GHz sits at 95-100% usage in multiplayer.
  • DerekA_C
    I get about 72% usage on my 4790K at 4.4 GHz on a 64-player multiplayer server.
  • IceMyth
    Just a question: why didn't you try the RX 480 in CrossFire, since the price of two RX 480s is the same as one 1080 Ti? That would be interesting, and I think this is the scenario AMD had in mind when they launched their RX family.
  • cryoburner
    Achaios said:
    Case in point: GTX 780TI The 780TI featured here runs at stock which was 875 MHz Base Clock and 928 MHz Boost Clock, whereas the 3rd party GPU's produced ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance here for this card which you ain't seeing here at all.
    Achaios said:
    Like I wrote above, the GTX 780TI they have here is running a stock which was 875/928 Mhz. A third party GTX 780TI such as the Gigabyte GTX 780TI GHz Edition that boosts to 1240 MHz, scores 13540 3D Mark Firestrike Graphics score, which is just 20 marks less or so than the R9 390X at 3D mark Firestrike performance results, and significantly faster than the R9 290X, R9 470, R9 480 and GTX 1060 6GB. http://imgur.com/KIP0MRt 3D mark Firestrike results here: https://www.futuremark.com/hardware/gpu


    The performance difference isn't nearly that great. I had a look at GTX 780 Ti "GHz Edition" reviews, and benchmarks showed it performing around 15% faster than a stock 780 Ti when it wasn't CPU-limited. 30% higher clocks do not necessarily equal 30% more performance. Assuming the cards used in these benchmarks were at stock clocks, the best you could expect from the GHz Edition would be right around the GTX 970's performance level at anything above the Low preset.
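    As a rough sanity check, here is a minimal Python sketch of that point. The 928/1240 MHz clocks and the ~15% review figure come from the comments above; treating clock uplift as a hard performance ceiling is only an assumption for illustration.

```python
# Back-of-the-envelope: clock uplift vs. the gain reviews actually measured.
stock_boost_mhz = 928          # reference GTX 780 Ti boost clock
ghz_edition_boost_mhz = 1240   # Gigabyte GHz Edition rated boost clock

clock_uplift = ghz_edition_boost_mhz / stock_boost_mhz - 1
print(f"Clock uplift: {clock_uplift:.0%}")        # ~34%, an upper bound at best

# Real games rarely track core clock 1:1; memory bandwidth, boost behavior,
# and CPU limits all cap the gain well below the clock difference.
observed_uplift = 0.15  # roughly what GHz Edition reviews showed vs. a stock card
print(f"Review-measured uplift: {observed_uplift:.0%}")
```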

    Also, it should be pointed out that most 780 Tis didn't run anywhere near those clocks. You can't take the highest-OCed version of a graphics card and imply that was the norm for third-party cards. And if we take into account overclocked versions of the other cards, then the overall standings probably wouldn't change much. The 780Ti likely just isn't able to handle DX12 as well as these newer cards, particularly AMD's.

    It might have been nice if this performance comparison also tested DX11 mode, since I know Nvidia's cards took a performance hit in DX12 back at the game's launch. I was also a bit surprised to see how poorly Nvidia's 2GB cards fared here, while AMD's seemed to handle the lack of VRAM more gracefully. The 2GB GTX 1050 dropped below 30 FPS for an extended length of time even at medium settings, and all of Nvidia's 2GB cards plummeted to single digits at anything higher than that. Meanwhile, the 2GB Radeons stayed above 30 FPS even at high settings. It kind of makes me wonder how these 3GB GTX 1060s will fare a year or so from now, especially when you consider that the RX 480s and even 470s all come equipped with at least 4GB.
  • falchard
    DX12 games tend to scale to more GPUs better than previous generations did. However, this may just be a matter of early-adopter developers unlocking everything.
  • renz496
    1048889 said:
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds yet the game is utilizing way more than 4GB of VRAM. HBM doing its' wonders. Also what happened to Kepler, RIP 780 Ti vs 290X.


    With Fury, AMD needs to make special VRAM optimizations so that usage does not exceed the 4GB capacity. Something similar can also be done with GDDR5 memory if you want to. I have a GTX 660 and a GTX 960 (both 2GB models); at the same graphical settings the 660 will use much less VRAM than the GTX 960. That's because the GTX 660 has a weird memory configuration, and Nvidia, for their part, tries to fit data into the first 1.5GB portion first. That's why AMD created HBCC with Vega, so they no longer need to tweak VRAM usage on a per-game basis.
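    To illustrate why the driver fills that first 1.5GB before touching the rest, here is a minimal sketch. The 1.5GB/0.5GB split reflects the GTX 660's 192-bit bus layout; the placement function is a hypothetical simplification, not the actual driver logic.

```python
# Toy model of the GTX 660's asymmetric 2GB layout: 1.5GB interleaved across the
# full 192-bit bus, plus 0.5GB hanging off a single 64-bit controller.
FAST_SEGMENT_MB = 1536   # full-bandwidth region the driver tries to fill first
SLOW_SEGMENT_MB = 512    # leftover capacity with roughly one-third the bandwidth

def placement(requested_mb):
    """Split an allocation between the fast and slow segments, fast-first."""
    fast = min(requested_mb, FAST_SEGMENT_MB)
    slow = min(max(requested_mb - FAST_SEGMENT_MB, 0), SLOW_SEGMENT_MB)
    return fast, slow

print(placement(1200))  # (1200, 0)   -> everything stays in the fast segment
print(placement(1900))  # (1536, 364) -> the overflow lands in the slow segment
```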

    As for what happened to the 780 Ti vs. the 290X, that's what happens when you win the majority of console hardware contracts. But quite often Nvidia's Kepler is still very competitive with its respective Radeon counterpart in titles that are PC-exclusive or come to PC first and consoles second. Take these for example:

    http://gamegpu.com/rts-/-%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B5%D0%B3%D0%B8%D0%B8/sid-meier-s-civilization-vi-test-gpu

    http://gamegpu.com/action-/-fps-/-tps/shadow-warrior-2-test-gpu

    Nvidia, to a certain extent, tried to fix Kepler's issues with Maxwell, but they knew that would not be enough while AMD keeps steering more and more game development toward their hardware's core strengths through new console wins. That's why Nvidia is back in the console business with the Nintendo Switch.