Performance Results: Pro Visualization
Madrid-based RandomControl offers Arion Render, a physically-based path tracing render engine, and ArionFX, a collection of HDR image processing algorithms. ArionBench serves as a proxy for the former, measuring GPU and CPU performance through a light simulation in a 3D scene.
The benchmark package includes executables for testing available GPUs, CPUs, and a hybrid combination of the two.
This CUDA-accelerated workload runs best on Titan RTX, followed by GeForce RTX 2080 Ti. Titan V trails the gaming card by about 100 points in the hardware-only test, while Titan Xp finishes far behind the more modern boards.
The latest version of LuxMark is based on an updated LuxRender 1.5 render engine, which specifically incorporates OpenCL optimizations that invalidate comparisons to previous versions of the benchmark.
We tested all three scenes available in the 64-bit benchmark: LuxBall HDR (with 217,000 triangles), Neumann TLM-102 SE (with 1,769,000 triangles), and Hotel Lobby (with 4,973,000 triangles).
Turing and Volta GPUs trade blows depending on the scene you look at. Titan V scores a win in LuxBall and Hotel Lobby, while the two TU102-based boards score higher in Neumann TLM-102 SE. The main takeaway, however, seems to be that Titan Xp is limited to a fraction of the performance achieved by the newer cards.
ProRender is another physically-based GPU render engine. Unlike Arion Render, however, it utilizes OpenCL. It’s also biased, meaning the renderer’s output is based on estimations rather than pixel-by-pixel calculations. Arion Render is unbiased, performing calculations on every pixel and in turn taking longer.
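The biased/unbiased distinction comes down to statistics: an unbiased estimator's expected value equals the true integral of light arriving at a pixel, so more samples converge toward the exact answer, while a biased renderer accepts a systematic offset in exchange for speed or lower noise. The toy Monte Carlo sketch below illustrates the idea in Python (the function names and the clamping shortcut are our own illustration, not code from Arion Render or ProRender):

```python
import random

def mc_estimate(f, n_samples, rng):
    # Plain Monte Carlo average of f over uniform samples on [0, 1].
    # This estimator is unbiased: its expected value is the true
    # integral, so error shrinks as the sample count grows.
    return sum(f(rng.random()) for _ in range(n_samples)) / n_samples

rng = random.Random(42)

# True integral of x^2 over [0, 1] is 1/3.
approx = mc_estimate(lambda x: x * x, 100_000, rng)

# A "biased" shortcut: clamp bright contributions (loosely analogous
# to firefly clamping in renderers). It reduces variance but
# systematically underestimates the true value, no matter how many
# samples are taken.
biased_approx = mc_estimate(lambda x: min(x * x, 0.25), 100_000, rng)
```

With enough samples, the unbiased estimate lands near the exact 1/3, while the clamped variant settles on a lower value that extra samples never correct — the same trade-off, writ small, that separates the two renderer families.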
In both of our test scenes, Titan RTX is faster than its Nvidia-sourced competition.
On-board memory doesn’t seem to be a factor, since GeForce RTX 2080 Ti easily beats Titan Xp despite having 1GB less capacity. It’s more likely that Turing/Volta’s compute performance, increased number of schedulers, on-die SRAM advantage, and higher memory bandwidth confer big gains over the older Pascal architecture.
The latest version of OTOY’s OctaneRender incorporates support for out-of-core geometry, meaning meshes and textures can be stored in system memory while the unbiased GPU renderer works at interactive speeds.
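Out-of-core rendering can be pictured as a small GPU-resident cache backed by larger system memory, with assets paged in on demand and old ones evicted when the GPU budget runs out. The toy LRU model below is our own illustration of that paging behavior, not OctaneRender's actual implementation:

```python
from collections import OrderedDict

class OutOfCoreCache:
    """Toy model of out-of-core texturing: a limited 'GPU' budget
    backed by system memory, with least-recently-used eviction.
    Illustrative only -- not OctaneRender's real mechanism."""

    def __init__(self, gpu_budget_bytes):
        self.budget = gpu_budget_bytes
        self.used = 0
        self.resident = OrderedDict()  # tex_id -> size in bytes

    def fetch(self, tex_id, size):
        if tex_id in self.resident:
            self.resident.move_to_end(tex_id)  # mark recently used
            return "hit"
        # Evict least-recently-used assets until the texture fits.
        while self.used + size > self.budget and self.resident:
            _, evicted_size = self.resident.popitem(last=False)
            self.used -= evicted_size
        self.resident[tex_id] = size
        self.used += size
        return "miss"  # paged in from system memory

# Hypothetical usage: a 100-byte GPU budget and 60-byte textures.
cache = OutOfCoreCache(gpu_budget_bytes=100)
first = cache.fetch("wood_diffuse", 60)   # paged in from system RAM
second = cache.fetch("wood_diffuse", 60)  # already GPU-resident
```

A cache miss here stands in for the expensive transfer over PCIe; the more working-set data fits in on-board memory, the fewer such stalls the renderer suffers, which is why a 24GB card matters for heavy scenes.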
One of Titan RTX’s big selling points is its 24GB of GDDR6. Thus far, our benchmarks haven’t shown a need for that much on-board memory. However, the first test we ran in OctaneRender repeatedly crashed on Titan V and Titan Xp due to running out of memory. A simpler scene allowed us to create a valid comparison, but not before we got our first taste of capacity envy.
Just because we completed runs on the competing cards didn’t mean their outcomes made much sense, though. It’s plausible that GeForce RTX 2080 Ti’s 11GB put it at a disadvantage to Titan Xp’s 12GB, tipping the scale in favor of Pascal. However, Titan V shouldn’t have suffered such a high render time (and low memory utilization number) with just as much RAM. In bouncing ideas back and forth with Nvidia, we could only hypothesize an issue with Titan V’s HBM2 memory subsystem not playing nice with OctaneRender.
The most recent version of SPECviewperf employs traces from Autodesk 3ds Max, Dassault Systemes Catia, PTC Creo, Autodesk Maya, Autodesk Showcase, Siemens NX, and Dassault Systemes SolidWorks. Two additional tests, Energy and Medical, aren’t based on a specific application, but rather on datasets typical of those industries.
In some workloads, Nvidia’s DirectX driver allows GeForce RTX 2080 Ti to match or even exceed the performance of Titan V. But Catia and NX, specifically, respond well to the professional driver optimizations that benefit Titan cards. The GeForce even loses to the older Titan Xp in those workloads.
Across the board, Titan RTX beats the still-formidable Titan V.
Titan V scores a slight win in the Energy tests, but again succumbs to Titan RTX everywhere else once we step the resolution up to 3800x2120.