RTX-OPS: Trying to Make Sense of Performance
As modern graphics processors become more complex, integrating resources that perform dissimilar functions but still affect the overall performance picture, it becomes increasingly difficult to summarize their capabilities. We already use terms like fillrate to compare how many billions of pixels or texture elements a GPU can theoretically render to screen in a second. Memory bandwidth, processing power, primitive rates—the graphics world is full of peaks that become the basis for back-of-the-envelope calculations.
Well, with the addition of Tensor and RT cores to its Turing Streaming Multiprocessors, Nvidia found it necessary to devise a new metric that’d suitably encompass the capabilities of its INT32 and FP32 math pipelines, its RT cores, and the Tensor cores. Tom’s Hardware doesn’t plan to use the resulting “RTX-OPS” specification for any of its comparisons, but since Nvidia is citing it, we want to at least describe the equation’s composition.
The RTX-OPS model assumes full utilization of all of those resources, which is a bold and very forward-looking assumption. After all, until games broadly adopt Turing's ray tracing and deep learning capabilities, the RT and Tensor cores sit idle. Anticipating the day they do come online, though, Nvidia developed its own approximation of the processing involved in rendering one frame on a Turing-based GPU.
In the diagram above, Nvidia shows roughly 80% of the frame consumed by rendering and 20% going into AI. Within the rendering slice, there's a roughly 50/50 split between ray tracing and FP32 shading work. Drilling down even deeper into the CUDA cores, recall that Nvidia observed roughly 36 INT32 operations for every 100 FP32 instructions across a swathe of shader traces. Together, those splits yield a reasonable picture of an “ideal” scene leveraging every functional unit.
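As a minimal sketch, assuming Nvidia simply multiplies those splits together (the variable names here are ours, not Nvidia's), the breakdown translates into the utilization weights that appear in the formula below:

```python
# Sketch: turning Nvidia's stated frame breakdown into the utilization
# weights used in the RTX-OPS formula below. Variable names are ours.

rendering_share = 0.80  # ~80% of the frame spent rendering
ai_share        = 0.20  # ~20% of the frame spent on AI (Tensor cores)

rt_share_of_rendering = 0.50  # rendering splits ~50/50 between RT and FP32
int32_per_fp32        = 0.35  # Nvidia's formula uses ~35 INT32 ops per
                              # 100 FP32 ops (close to the ~36:100 it observed)

fp32_weight   = rendering_share                           # 0.80
rt_weight     = rendering_share * rt_share_of_rendering   # 0.40
int32_weight  = rendering_share * int32_per_fp32          # 0.28
tensor_weight = ai_share                                  # 0.20
```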
So, given that…
FP32 compute = 4352 FP32 cores * 1635 MHz clock rate (GPU Boost rating) * 2 (an FMA counts as two operations) = 14.2 TFLOPS
RT core compute = ~10 TFLOPS per gigaray/s, given that a GeForce GTX 1080 Ti (11.3 TFLOPS FP32 at 1582 MHz) casts ~1.1 billion rays per second using software emulation = ~100 TFLOPS on a GeForce RTX 2080 Ti capable of casting ~10 billion rays per second
INT32 instructions per second = 4352 INT32 cores * 1635 MHz clock rate (GPU Boost rating) * 2 (a multiply-add counts as two operations) = 14.2 TIPS
Tensor core compute = 544 Tensor cores * 1635 MHz clock rate (GPU Boost rating) * 64 FP16 FMA operations per clock * 2 = 113.8 FP16 Tensor TFLOPS
…we can walk Nvidia’s math backwards to see how it reached a 78 RTX-OPS specification for its GeForce RTX 2080 Ti Founders Edition card:
(14 TFLOPS [FP32] * 80%) + (14 TIPS [INT32] * 28% [~35 INT32 ops per 100 FP32 ops, applied to the 80% of the frame spent rendering]) + (100 TFLOPS [ray tracing] * 40% [half of the 80% rendering slice]) + (114 TFLOPS [FP16 Tensor] * 20%) = 77.9
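As a sanity check, here's a minimal sketch reproducing that arithmetic, using the peak rates derived above and Nvidia's utilization weights (the variable names are ours):

```python
# Sketch: reproducing Nvidia's RTX-OPS arithmetic for the GeForce RTX 2080 Ti
# Founders Edition. All inputs are Nvidia's published figures; variable
# names are ours.

boost_ghz = 1.635  # GPU Boost rating, in GHz

# Peak rates, in tera-operations per second
fp32_tflops   = 4352 * boost_ghz * 2 / 1000      # ~14.2 TFLOPS (FMA = 2 ops)
int32_tips    = 4352 * boost_ghz * 2 / 1000      # ~14.2 TIPS
tensor_tflops = 544 * boost_ghz * 64 * 2 / 1000  # ~113.8 FP16 Tensor TFLOPS
rt_tflops     = 10 * 10  # ~10 gigarays/s * ~10 TFLOPS per gigaray/s

# Apply Nvidia's utilization weights to the rounded peaks
rtx_ops = (round(fp32_tflops)   * 0.80 +  # FP32 shading
           round(int32_tips)    * 0.28 +  # concurrent INT32 work
           rt_tflops            * 0.40 +  # ray tracing
           round(tensor_tflops) * 0.20)   # Tensor-core AI

print(round(rtx_ops, 2))  # 77.92, which Nvidia rounds to 78 RTX-OPS
```

Note that weighting the unrounded peaks instead lands closer to 78.1; the 77.9 figure comes from rounding each peak first, as Nvidia apparently did.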
Again, this model makes a lot of assumptions; we see no way to use it for generational or competitive comparisons, and we don't want to get into the habit of generalizing ratings across many different resources. At the same time, it's clear that Nvidia wanted a way to represent performance holistically, and we cannot fault the company for trying, particularly since it didn't just add up the capabilities of each subsystem, but rather isolated their individual contributions to a frame.