
PresentMon: Performance In DirectX, OpenGL, And Vulkan

Frames Per Second: Bars Or Curves?

Frames Per Second

In the end, time-based averages only tell you how many frames were rendered in a given second. There's no way for you to know how well-paced those frames were, or if a long pause interrupted the action at one point, negatively affecting the experience. After all, a one-second interval with lots of fast frames and a single frame that took 100 ms to display is going to "feel" less smooth than the same interval with slower, consistently-rendered frames, even if their averages appear similar.
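To put numbers on that, here's a quick illustration in Python, using made-up frame times rather than anything we measured: two one-second intervals can report the exact same average frame rate even though one of them contains a single 100 ms stall.

# Two hypothetical one-second intervals with identical averages but
# very different pacing. The frame times are invented for illustration.
smooth = [10.0] * 100                 # 100 frames at a steady 10 ms each
hitchy = [900 / 99] * 99 + [100.0]    # 99 fast frames plus one 100 ms stall

for name, frame_times_ms in (("smooth", smooth), ("hitchy", hitchy)):
    seconds = sum(frame_times_ms) / 1000
    fps = len(frame_times_ms) / seconds
    print(f"{name}: {fps:.0f} FPS average, worst frame {max(frame_times_ms):.1f} ms")

Both intervals come out to 100 FPS, yet only one of them hides a frame that takes ten times longer than its neighbors.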

The following two bar charts correspond to our enthusiast and mainstream PCs. Each includes minimum, average, and maximum frame rates. They're all rough indicators of performance, mostly useful for comparing many cards at a glance. But even then, all three figures are commonly maligned for over-simplifying the real-world gaming experience.

Charting frame rate over time shows instantaneous performance at any point in our benchmark run. These charts are more interesting than the simple bars, though they don't quantify a test's outcome as succinctly. Still, it's pretty obvious which configurations fare better in this type of comparison. 

For 108 seconds of our test run, we have 108 points on our X axis with the corresponding frame rate at each second on the Y axis. But we still know nothing about dropped frames or micro-stutter within each one-second interval. These are still rough averages.
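The binning itself is trivial. A rough sketch, assuming we already have each frame's render time in milliseconds, might look like this (the function name and structure are ours for illustration, not PresentMon's):

def fps_per_second(frame_times_ms):
    """Count how many frames complete within each wall-clock second."""
    counts = []
    elapsed_ms = 0.0
    for ft in frame_times_ms:
        elapsed_ms += ft
        second = int(elapsed_ms // 1000)
        while len(counts) <= second:   # open a new one-second bucket as needed
            counts.append(0)
        counts[second] += 1
    return counts

No matter how many frames a card actually rendered, a 108-second run collapses into 108 values. That's exactly why these series share an X axis so readily, and also why they can't show what happened inside any given second.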

Properly representing and interpreting render times is so important because the two FPS indicators we've already discussed, whether bars or lines, tell us nothing about how a game actually "feels." Of course, we want to retain those averages, peaks, and floors. But it's important to add that sub-one-second dimension as well.

Frame Times And Rendered Frames

Presenting the render time of each individual frame as a line chart looks simple enough. However, it gets a lot more complicated when you start comparing cards at different performance levels, which generate different numbers of frames during a defined benchmark interval. For instance, in our DirectX 11 test on the enthusiast system, MSI's Radeon RX 480 Gaming X 8G creates 8090 frames. In the DirectX 12 benchmark, it outputs 8446 frames. Meanwhile, the GeForce GTX 1060 Gaming X 6G pushes 7362 and 7332 frames, respectively.

Ideally, we'd want to compare the render times of individual frames in order to better understand the graphical flow of each card. But we can't simply overlap variable-length recordings on a common horizontal axis like we do in the FPS graphs, which are based on 108 data points in our example.

For the two test systems, starting with the faster one, we end up with this:

On both graphs, the Y axes scale based on the render time results we measured. We do this to show as much chart detail as possible; just keep it in mind so you don't compare the left and right diagrams at a glance. Though it'd certainly be possible to extend the left graph's Y axis up to a 160 ms maximum, doing so would sacrifice resolution.

So now we know what it looks like when you compare the frame-by-frame histories of two GPUs on a single graph's X axis, even when their outputs differ. Excel can't do this on its own though, so we use our software to create optimized curves of equal length for each card. But how do you get 8446 or 7362 individual data points onto an axis able to fit, say, 1000 values?

We preserve the benchmark run's peaks and valleys as losslessly as possible, so they appear just as they would in a graph plotting every one of the thousands of points actually captured. The remaining values are interpolated cleanly, keeping the visual representation true to the raw data.
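We won't reproduce our tool's exact math here, but a common way to achieve this kind of spike-preserving reduction is min/max decimation: split the raw series into evenly sized buckets and keep the slowest and fastest frame from each one. The sketch below is that generic technique, not our software's actual code.

def decimate_min_max(frame_times_ms, target_points=1000):
    """Shrink a long frame-time series to roughly target_points values,
    keeping each bucket's minimum and maximum so stutter spikes survive."""
    n = len(frame_times_ms)
    if n <= target_points:
        return list(frame_times_ms)
    buckets = target_points // 2       # every bucket contributes two points
    out = []
    for b in range(buckets):
        lo = b * n // buckets
        hi = (b + 1) * n // buckets
        chunk = frame_times_ms[lo:hi]
        out.append(min(chunk))
        out.append(max(chunk))
    return out

Run through something like this, the 8446-frame and 7362-frame recordings both come out as curves of the same length that can share one X axis, with the worst-case spikes still intact.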

