Starfield Performance Disparity Between AMD and Nvidia GPUs Analyzed
Starfield has received a lot of heat for its supposed lack of optimization and abnormally poor performance on Nvidia and Intel GPUs. Right now, our initial Starfield benchmarks show the RDNA 3-based RX 7000-series cards often punching well above their weight class, or alternatively, Nvidia GPUs coming up short of expectations. Chips and Cheese ran performance analysis tools on Starfield to try to determine why the engine favors RDNA 3 GPUs. The analysis centers on cache hit rates, shader scheduling, and other factors, though it doesn't yet provide a definitive answer as to why AMD's GPUs do better.
The Chips and Cheese article looks at several of Starfield's pixel and compute shaders and how they're processed by AMD's RDNA 3 GPU cores (WGPs) versus Nvidia's RTX GPU cores (SMs). The outlet found that Starfield's pixel and compute shaders currently make better use of several aspects of AMD's RDNA 3 GPUs, including RDNA 3's larger vector register files, superior thread tracking, and its L0, L1, L2, and L3 cache hierarchy. By comparison, Nvidia's RTX 40-series GPUs only have L1 and L2 caches, and things are even worse on Nvidia's RTX 30- and 20-series GPUs, which have significantly smaller L2 caches.
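To illustrate how register capacity ties into occupancy on Nvidia hardware, here's a minimal CUDA sketch built around a hypothetical kernel of our own (shadeKernel, not actual Starfield code). It asks the CUDA runtime how many thread blocks one SM can keep resident given the kernel's compiled register and shared-memory footprint; fewer resident warps means less latency hiding, which is the occupancy effect in question.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical stand-in for a Starfield-style compute shader (not actual game
// code). Its compiled register footprint is what limits how many warps each
// SM can keep resident.
__global__ void shadeKernel(const float* in, float* out, int n)
{
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx >= n) return;
    float v = in[idx];
    // Arbitrary per-thread math to give the kernel a non-trivial footprint.
    float a = v * 1.5f + 0.25f;
    float b = __sinf(a) * __cosf(v);
    out[idx] = a * b + v;
}

int main()
{
    // Ask the runtime how many 256-thread blocks of this kernel can be
    // resident on one SM, given its register and shared-memory usage.
    // Fewer resident warps means less latency hiding.
    int blocksPerSM = 0;
    cudaOccupancyMaxActiveBlocksPerMultiprocessor(&blocksPerSM, shadeKernel, 256, 0);
    printf("Resident 256-thread blocks per SM: %d\n", blocksPerSM);
    printf("Resident warps per SM: %d\n", blocksPerSM * 256 / 32);
    return 0;
}

A larger vector register file effectively raises that residency ceiling for register-hungry shaders, which is one of the RDNA 3 advantages Chips and Cheese calls out.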
All of these architectural traits appear to help AMD's RDNA 3 graphics cards dominate Nvidia in early Starfield benchmarks, with Nvidia's GPUs lagging behind as a result. However, as we've seen from dozens of other examples, there's plenty that can be done to close the performance gap between Nvidia and AMD GPUs. Whether the developers will reach the expected levels of performance is another matter, and only time will give us that answer.
Take the shader utilization figures as an example. Chips and Cheese notes, "RDNA 3 enjoys very good vector utilization. Scalar ALUs do a good job of offloading computation, because the vector units would see pretty high load if they had to handle scalar operations too." However, the outlet goes on to state, "Ampere and Ada see good utilization, though not as good as AMD’s. Turing’s situation is mediocre. 26.6% issue utilization is significantly lower than what we see on the other tested GPUs."
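To make the scalar-versus-vector distinction concrete, here's a small, hedged CUDA analog we wrote ourselves (Starfield's shaders are HLSL, so this is only an illustration): math that produces the same value for every lane of a wave can be offloaded to RDNA's scalar ALUs, while genuinely per-thread math occupies the SIMD lanes.

#include <cstdio>
#include <cuda_runtime.h>

// Hypothetical shading kernel (not game code) separating wave-uniform math
// from per-thread math.
__global__ void shade(const float* in, float* out, float exposure, int n)
{
    // Wave-uniform work: every thread in the block computes the same value.
    // On RDNA, math like this can be offloaded to the scalar ALUs, keeping
    // the vector (SIMD) units free for per-pixel work.
    float tileBias = exposure * (float)(blockIdx.x + 1) + 0.5f;

    // Per-thread (vector) work: a different result for every lane.
    int idx = blockIdx.x * blockDim.x + threadIdx.x;
    if (idx < n)
        out[idx] = in[idx] * tileBias;
}

int main()
{
    const int n = 1 << 20;
    float *in = nullptr, *out = nullptr;
    cudaMallocManaged(&in, n * sizeof(float));
    cudaMallocManaged(&out, n * sizeof(float));
    for (int i = 0; i < n; ++i)
        in[i] = i * 0.001f;

    shade<<<(n + 255) / 256, 256>>>(in, out, 1.5f, n);
    cudaDeviceSynchronize();
    printf("out[1000] = %f\n", out[1000]);

    cudaFree(in);
    cudaFree(out);
    return 0;
}

When such wave-uniform work can't be offloaded, it has to run on the main SIMD units for every lane, which is one way issue-utilization numbers like those quoted above can diverge between architectures.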
And that's the crux of the issue. Is the lower GPU utilization seen on Nvidia simply an inherent design aspect of the architectures and the Starfield engine, or can it be fixed? Or perhaps more cynically: Will the engine be fixed? Bethesda has stated that it's currently working with AMD, Intel, and Nvidia driver teams to improve performance, which is as good a place to start as any. Not all code is created equal, and some specific tuning was likely already done by AMD thanks to its partnership with Bethesda on the game. That's probably why the game runs better on AMD GPUs. Now Intel and Nvidia need to find ways to do the same for their architectures.
As it stands now, we found the RX 7900 XTX matched and often exceeded the performance of Nvidia's RTX 4090, even though the latter has significantly more cores and processing power than AMD's flagship. Nvidia has already helped narrow the gap by enabling Resizable BAR support a few days back, but that's only a first step toward improving the situation. We also found that AMD's RDNA 2 RX 6000-series GPUs perform abnormally well compared to Nvidia GPUs, with the RX 6800, for example, matching or exceeding the performance of an RTX 4070. Chips and Cheese did not test any RDNA 2 GPUs to try to determine why that might be.
What we do know is that Nvidia's GPUs currently perform worse than AMD's GPUs — not just with the latest RDNA 3 and Ada Lovelace, but going as far back as Pascal (GTX 1070 Ti) and Polaris (RX 590). Chips and Cheese concludes, "There’s no single explanation for RDNA 3's relative overperformance in Starfield. Higher occupancy and higher L2 bandwidth both play a role, as does RDNA 3's higher frontend clock. However, there's really nothing wrong with Nvidia's performance in this game, as some comments around the internet might suggest."
We would respectfully disagree. There is clearly a problem with Nvidia's performance right now, and with Starfield's performance in general. Requiring an RX 6800 or RTX 4070 just to break 60 fps at 1080p ultra is not typical behavior for a game with this level of graphics fidelity. Some of that may come down to limitations of Nvidia's (and Intel's) architectures, but it's also a safe bet that there are changes and optimizations that can be made to improve Starfield performance, and we'll likely see those enhancements as the months and years tick by. What we can't say is where things will eventually end up; this is, after all, a Bethesda game.

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.