Our benchmark analysis makes it seem like we’re dealing with a modern game challenging the latest graphics cards. There’s just one GPU able to average more than 60 FPS at 3840x2160? Seriously?
Yeah, that’s Crysis for you. Talk about a load of cool data, though.
On the AMD side, it was interesting to track the evolution from TeraScale 1 (Radeon HD 3870 and 4870) to TeraScale 2 (Radeon HD 5870), TeraScale 3 (Radeon HD 6970), and ultimately the various iterations of Graphics Core Next. Specifically, the jump from a VLIW-based architecture to a scalar SIMT one showed through in every one of our benchmark charts. AMD’s Southern Islands ISA whitepaper from 2012 made it clear that GCN set out to improve resource utilization, calling out stable and predictable performance in particular. The scaling we observed from Radeon HD 7970 and up bears this out.
Nvidia’s architectural evolution appears better paced. From Tesla (GeForce 8, 9, and 200) to Fermi (GeForce 400 and 500), Kepler (GeForce 600 and 700), Maxwell (GeForce 900), and Pascal (GeForce 10), the gains are fairly consistent. Further, while high-end AMD and Nvidia cards are similarly bottlenecked at 1920x1080, the GeForce boards enjoy a ceiling roughly 10% higher than the Radeons’. Whether this is due to Crytek’s CryEngine 2, a lack of driver optimizations for 10-year-old games, or some other platform constraint isn’t clear.
We don’t often get the opportunity to chart several generations of graphics flagships against each other, so we came up with the idea to plot GPU transistor count against frame rate.
By tracking from left to right (frame rate), it’s easy to compare competing generations of graphics hardware. For instance, we can see how Radeon HD 3870 landed behind GeForce 8800 GTX (true to what we observed 10 years ago). Similarly, Radeon HD 4870 debuted at a disadvantage to GeForce GTX 280, though it used a less complex processor. In 2010, Nvidia introduced GeForce GTX 480 with performance that actually trailed Radeon HD 5870, doubly problematic since the 250W card’s GF100 GPU incorporated ~39% more transistors. Nvidia ironed out its issues with GeForce GTX 580, and AMD’s answer, Radeon HD 6970, underwhelmed. The following two generations saw AMD and Nvidia trading blows.

More recently, in 2015, AMD’s Radeon R9 Fury X came close to matching GeForce GTX 980 Ti’s performance in our launch story. But older games tend to favor Nvidia’s architecture, which is why you see GM200 and its eight billion transistors so far ahead of the more complex Fiji chip. It’s the last data point that hurts most, though: Vega 64 adds a ton of transistors, but lands quite a way behind GeForce GTX 1080 Ti.
Assigning a launch price to each card yields a very different picture when we plot price against performance. In this context, Radeon RX Vega 64 doesn’t look all that bad in Crysis. And while you cannot find Radeon RX 580s for the $200 AMD originally advertised, it’s kind of cool to see Polaris serving up better performance than Radeon R9 290X at half the price, just four years later.
We hope you enjoyed our little jaunt back in time with a game that, 10 years later, still looks amazing as it hammers modern graphics cards at high resolutions. If you still have a copy of Crysis, it may be time to dust it off and take this title for another spin.