Battlefield 1 Performance In DirectX 12: 29 Cards Tested

High-End PC, 1920x1080

Again, we have too many graphics cards to fit them all into one chart, so GeForces and Radeons are separated. We also know that nobody buys a GeForce GTX 1080 to game at 1920x1080, so we try to keep each resolution’s field relevant.

Low Quality Preset

A GeForce GTX 1070 hits the 200 FPS limiter set by default in Battlefield 1, and the rest of the field falls in behind. It looks like this causes some issues with the 1070's frame times, too. But even the slowest card we test, Nvidia’s old GeForce GTX 780, averages nearly 125 FPS using the Low preset.
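
If you're curious where the fastest cards would land without that ceiling, the cap is exposed through a Frostbite console variable that can be persisted in a user.cfg file in the game's install folder. A minimal sketch, assuming the variable name carried over from earlier Frostbite titles; the 240 value is just an example, and behavior should be verified against your own install:

    GameTime.MaxVariableFps 240

Setting the value to 0 is commonly reported to remove the ceiling entirely, though running uncapped can reintroduce frame time swings of its own.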

Gone is much of the inconsistency we saw at the beginning of our runs on the FX-based PC. All of these cards start and finish strong. Perhaps our mainstream GPUs and their modest memory subsystems weren’t to blame, but rather the platform’s limited bandwidth from two channels of DDR3.

We only carry two cards over from the previous pages (GeForce GTX 970 and 1060 6GB), and indeed both were hugely constrained by AMD’s FX CPU. The 970 bags a 50% gain, while the 1060 6GB picks up almost 60% more performance by simply dropping into a Z170A-based motherboard with Intel’s Core i7-6700K installed.
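
Those percentages are just the ratio of each card's average frame rate on the two platforms. A quick sketch of the arithmetic in Python, using placeholder numbers rather than our measured data:

    # Percentage speed-up from moving a card between platforms.
    # The FPS values are placeholders, not measured results.
    fx_avg_fps = 80.0       # hypothetical average on the FX-based PC
    intel_avg_fps = 128.0   # hypothetical average on the Core i7-6700K PC

    speedup_pct = (intel_avg_fps / fx_avg_fps - 1.0) * 100.0
    print(f"speed-up: {speedup_pct:.0f}%")  # prints "speed-up: 60%"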

The Radeon R9 390X also bangs against Battlefield’s 200 FPS ceiling, so there’s no point in adding Fiji-based cards to these charts.

Our selection from AMD’s portfolio remains more tightly grouped than the GeForces, bottoming out with the RX 470’s 157 FPS average. All of these cards have plenty of room for more demanding detail settings at 1920x1080.

The Radeon RX 470, R9 390, and RX 480 make a second appearance after first dropping into our FX-based PC. They too were clearly hamstrung by AMD’s previous-gen architecture. The RX 470 enjoys a 41% speed-up, RX 480 rises almost 49%, and the R9 390 jumps by 56%.

Between the Radeon and GeForce cards, it’s crazy to see just how much upgrading an aging platform can affect performance at 1920x1080, a resolution that the Steam Hardware Survey tells us is used by more than 45% of respondents. It’ll be interesting to see whether this holds up during the shift to 2560x1440 on the next page.

Medium Quality Preset

Stepping quality up one notch brings Nvidia’s GeForce GTX 1070 down from Battlefield’s artificial cap. Whereas the 1070 was ~20% faster than the 970 using Low details, it’s now 34% quicker.

Strangely enough, though, the 1070 is the only card in this field that demonstrates frame time spikes through our benchmark sequence. They register as small stutters, which show up on our unevenness metric.
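
For readers unfamiliar with the idea, an unevenness measure boils down to asking how much consecutive frame times diverge from one another. A minimal Python sketch of that concept follows; the frame-to-frame delta definition and the 20ms threshold are illustrative assumptions, not the exact formula behind our metric:

    # Sketch of a frame time "unevenness" measure (illustrative only).
    # Input: per-frame render times in milliseconds, e.g. from a PresentMon log.
    def unevenness(frame_times_ms, spike_threshold_ms=20.0):
        """Share of frame-to-frame transitions that jump by more than the
        threshold, a crude indicator of visible stutter."""
        if len(frame_times_ms) < 2:
            return 0.0
        spikes = sum(
            1
            for prev, cur in zip(frame_times_ms, frame_times_ms[1:])
            if abs(cur - prev) > spike_threshold_ms
        )
        return spikes / (len(frame_times_ms) - 1)

    # A steady 10ms cadence with one 45ms hitch registers two uneven
    # transitions: into the spike and back out of it.
    trace = [10.0] * 50 + [45.0] + [10.0] * 50
    print(f"unevenness: {unevenness(trace):.3f}")  # 2/100 transitions = 0.020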

The Radeon cards take a significant performance hit compared to our Low preset results. However, they deliver a smooth, consistent experience. And good news if you’re still gaming on a 2013-era R9 290/290X: the Hawaii GPU is good enough for 100 FPS and up at 1920x1080 on a fast platform.

High Quality Preset

The frame time spikes that plagued GeForce GTX 1070 using the Low and Medium presets disappear when we select High quality. In fact, applying a more demanding workload has a distinctly positive effect on the Pascal-based card’s consistency.

For what it’s worth, the 1070’s lead over its generational predecessor grows to ~37% as we continue moving the detail slider up.

Right down to the Radeon RX 470, AMD’s line-up performs smoothly through our test. All of the cards encounter a frame time spike around the run’s mid-point, but it remains below 30ms in all but one case.

Notably, the Radeon RX 480 8GB is slightly faster than GeForce GTX 1060 6GB, though at this point they’re only separated by about $10.

Ultra Quality Preset

The 1070’s frame time spikes are back, inexplicably, when we specify Ultra quality. They’re significant enough to appear as brief pauses in our unevenness index. And yet, the GTX 1070 posts chart-topping average and minimum frame rates.

A GeForce GTX 780 is fast enough for fairly smooth performance at 1920x1080…but only just, we’d say. Average and, more important, minimum frame rates go up from there, proving that high-end cards from three generations back are still usable at taxing detail settings, so long as you’re willing to compromise on resolution.

The same goes for AMD’s older Radeons, based on the Graphics Core Next architecture. Radeon R9 290X, Radeon R9 390, and Radeon RX 480 all outperform GeForce GTX 1060 6GB at Battlefield 1’s top quality preset. They generally deliver smooth frame times, too.

Comments

  • envy14tpe
    Wow. The amount of work in putting this together. Thanks, from all the BF1 gamers out there. You knocked my socks off, and are pushing me to upgrade my GPU.
  • computerguy72
    Nice article. Would have been interesting to see the 1080 Ti and the Ryzen 1800X mixed in there somewhere. I have a 7700K and a 980 Ti; it would be good info to get some direction on where to take my hardware next. I'm sure other people might find that interesting too.
  • Achaios
    Good job, just remember that these "GPU showdowns" don't tell the whole story b/c cards are running at stock, and there are GPUs that can get huge overclocks, thus performing significantly better.

    Case in point: GTX 780TI

    The 780TI featured here runs at stock, which was an 875 MHz base clock and 928 MHz boost clock, whereas third-party cards ran at 1150 MHz and boosted up to 1250-1300 MHz. We are talking about 30-35% more performance for this card, which you ain't seeing here at all.
  • xizel
    Great write-up, just a shame you didn't use any i5 CPUs. I would have really liked to see how an i5-6600K competes with its 4 cores against the HT i7s.
  • Verrin
    Wow, impressive results from AMD here. You can really see that Radeon FineWine­™ tech in action.
  • And then you run in DX11 mode and it runs faster than DX12 across the board. Thanks for the effort you put into this, but it's rather pointless since DX12 has been nothing but a pile of crap.
  • pokeman
    Why do my 680 OC 2GB SLI run this at 100Hz 3440x1440? 2133 G.Skill, 4770K @ 4.2GHz.
  • NewbieGeek
    @xizel My i5-6600K @ 4.6GHz and RX 480 get 80-90 FPS at max settings on all 32v32 multiplayer maps, with very few spikes either up or down.
  • ohim
    780 Ti below an R9 290, 3 years down the road...
  • Jupiter-18
    Fascinating stuff! Love that you're still including the older models in your benchmarks; it makes for great info for a budget gamer like myself! In fact, this may help me determine what goes in the budget build I'm working on right now. I was going to go with dual 290Xs (preferably 8GB if I can find them), but now I might have something else.
  • dfg555
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds yet the game is utilizing way more than 4GB of VRAM. HBM doing its wonders.

    Also, what happened to Kepler? RIP 780 Ti vs 290X.
  • Achaios
    Like I wrote above, the GTX 780TI they have here is running at stock, which was 875/928 MHz. A third-party GTX 780TI such as the Gigabyte GTX 780TI GHz Edition, which boosts to 1240 MHz, scores 13540 in 3DMark Fire Strike Graphics, which is just 20 marks or so less than the R9 390X's result, and significantly faster than the R9 290X, RX 470, RX 480, and GTX 1060 6GB.
    http://imgur.com/KIP0MRt 3D mark Firestrike results here: https://www.futuremark.com/hardware/gpu
  • sunny420
    Awesome work folks! The data!!
    The only thing I felt was missing was, as Xizel mentioned, an i5. It would have been great to see one included in the Scaling: CPU Core Count chart. All of the i7s perform similarly, with only one i3 outlier among the data points; a middle-of-the-road i5 would have rounded out the picture.
  • damric
    Should have kept Hyper-Threading enabled on the i3, since the whole point of the i3's existence is Hyper-Threading.

    1st gen GCN really pulled ahead of Kepler over the years.
  • atomicWAR
    I would have liked to see some actual i5s in the mix. While I get that disabling Hyper-Threading emulates them to a degree, I can't tell you how many posts in the forums I have responded to from i5 owners claiming, or troubleshooting toward, a CPU bottleneck in this game, especially in multiplayer. Folks running good GPUs (everything from RX 480s to GTX 1080s) are seeing 100% CPU usage or close to it (again, the worst of it is in multiplayer). Ultimately, I feel like this article indirectly says 4C/4T is enough, when everyday posts in the forums say the opposite.

    While I know you could never get a fully repeatable benchmark in multiplayer, I would like to see an article on core scaling in multiplayer all the same. It would have to lean on the reviewer's impression of how smooth gameplay is, but benchmarks covering CPU utilization and frame variance/frame rate would be useful in helping those with i5s (or any 4C/4T CPU) figure out whether their experience is being hindered or is typical compared to other 4C/4T users. This article had a ton of info, but I feel it only scratched the surface of what gamers are dealing with in real life.
  • Kawi6rr
    Wow, a lot of work went into this, very well done! Go AMD, I like how they get better with age lol. Looks like my next card will be one of the new 580s coming out.
  • thefiend1
    Does increasing the quality settings help the color banding in the gradients of the smoke, fog, etc.? On XB1 the banding is horrible in some cases, and I'm curious if that issue is the same on PC.
  • David_693
    Would have liked to see the AMD 295X2 in the mix as well...
  • MaCk0y
    146991 said:
    I would have liked to see some actual i5s in the mix. […]

    I agree. My Core i5-4690K at 4.8GHz sits at 95-100% usage in multiplayer.
  • DerekA_C
    I get about 72% usage on my 4790K at 4.4GHz on a 64-player multiplayer server.
  • IceMyth
    Just a question: why didn't you try RX 480s in CrossFire, since the price of two RX 480s is the same as one 1080 Ti? That would be interesting, and I think this is the scenario AMD used when they launched their RX family.
  • cryoburner
    1328515 said:
    Case in point: GTX 780TI ... We are talking about 30-35% more performance here for this card which you ain't seeing here at all. […]
    1328515 said:
    Like I wrote above, the GTX 780TI they have here is running at stock, which was 875/928 MHz. […]


    The performance difference isn't nearly that great. I had a look at the GTX 780Ti "GHz Edition" reviews, and benchmarks showed it performing around 15% faster than a stock 780Ti when it wasn't CPU limited. 30% higher clocks does not necessarily equal 30% more performance. Assuming the cards used in these benchmarks were at stock clocks, then the best you could expect from the GHz Edition would be right around the GTX 970's performance level at anything above "low" settings.

    Also, it should be pointed out that most 780 Tis didn't run anywhere near those clocks. You can't take the highest-OCed version of a graphics card and imply that was the norm for third-party cards. And if we take into account overclocked versions of the other cards, then the overall standings probably wouldn't change much. The 780Ti likely just isn't able to handle DX12 as well as these newer cards, particularly AMD's.

    It might have been nice if this performance comparison also tested DX11 mode, though, since I know Nvidia's cards took a performance hit in DX12 back at the game's launch. I was also a bit surprised to see how poorly Nvidia's 2GB cards fared here, while AMD's seemed to handle the lack of VRAM more gracefully. The 2GB GTX 1050 dropped below 30fps for an extended length of time even at medium settings, and all of Nvidia's 2GB cards plummeted to single digits at anything higher than that. Meanwhile, the 2GB Radeons stayed above 30fps even at high settings. It kind of makes me wonder how these 3GB GTX 1060s will fare a year or so from now, especially when you consider that the RX 480s and even 470s all come equipped with at least 4GB.
  • falchard
    DX12 games tend to scale across multiple GPUs better than previous generations did. However, this may come down to early-adopter developers unlocking everything.
  • renz496
    1048889 said:
    It's funny how at 4K the Fury and Fury X are able to match 980 Ti speeds yet the game is utilizing way more than 4GB of VRAM. […]


    With Fury, AMD needed to make special per-game VRAM optimizations so usage did not exceed the 4GB capacity. Something similar can be done with GDDR5 memory if you want to. I have a GTX 660 and a GTX 960 (both 2GB models); at the same graphical settings, the 660 uses much less VRAM than the 960. That's because the GTX 660 has a weird memory configuration, and Nvidia, for their part, tries to fit data into the first 1.5GB portion first. That's why AMD created HBCC with Vega, so they no longer need to tweak VRAM usage on a per-game basis.

    As for what happened to the 780 Ti vs. the 290X: that's what happens when you win the majority of console hardware contracts. But Nvidia's Kepler cards are still often very competitive with their respective Radeon counterparts in titles that are PC-exclusive or come to PC first and consoles second. Take these for example:

    http://gamegpu.com/rts-/-%D1%81%D1%82%D1%80%D0%B0%D1%82%D0%B5%D0%B3%D0%B8%D0%B8/sid-meier-s-civilization-vi-test-gpu

    http://gamegpu.com/action-/-fps-/-tps/shadow-warrior-2-test-gpu

    Nvidia, to a certain extent, tried to fix Kepler's issues with Maxwell, but they knew that wouldn't be enough as long as AMD keeps steering game development toward its hardware's core strengths with new console wins. That's why Nvidia is back in the console business with the Nintendo Switch.