Performance Results: LuxMark, SPECviewperf, Cinema4D and Blender
The latest version of LuxMark is based on an updated LuxRender 1.5 render engine, which specifically incorporates OpenCL optimizations that invalidate comparisons to previous versions of the benchmark.
We tested all three scenes available in the 64-bit benchmark: LuxBall HDR (217,000 triangles), Neumann TLM-102 SE (1,769,000 triangles), and Hotel Lobby (4,973,000 triangles).
Radeon VII is the first-place finisher in LuxMark's LuxBall HDR workload, beating the Turing-based Titan RTX and Volta-based Titan V thanks to a substantial memory bandwidth advantage. It doesn't fare quite as well in the subsequent tests, which are more compute-intensive.
With that said, Radeon VII also scores better than GeForce RTX 2080, its primary competition, in the Neumann TLM-102 SE and Hotel Lobby tests.
The most recent version of SPECviewperf employs traces from Autodesk 3ds Max, Dassault Systemes Catia, PTC Creo, Autodesk Maya, Autodesk Showcase, Siemens NX, and Dassault Systemes SolidWorks. Two additional tests, Energy and Medical, aren’t based on a specific application, but rather on datasets typical of those industries.
Oil and gas workloads tend to involve very large datasets, which play to the strengths of Radeon VII's 16GB of HBM2 at 1 TB/s. The same goes for certain medical applications. In those tests, AMD's flagship is faster than GeForce RTX 2080.
Catia and NX, specifically, respond well to the professional driver optimizations that benefit Nvidia's Titan cards. AMD's Radeons are quite a bit slower than the Titans in both benchmarks. However, Radeon VII and Radeon RX Vega 64 both make short work of GeForce RTX 2080.
ProRender is a physically-based GPU render engine. Unlike Arion Render, which we tested in our Titan RTX review, it utilizes OpenCL instead of CUDA.
Per AMD's recommendation, we tested Radeon VII and Radeon RX Vega 64 using Blender v2.79b. To get CUDA acceleration on GeForce RTX 2080, we had to use v2.80.
Rendering the bmw27_GPU test file using our Core i7-8700K took 5:16, regardless of the graphics card we had installed. Switching over to GPU acceleration through OpenCL or CUDA brought those times down significantly. Although Radeon VII trails GeForce RTX 2080, it definitely improves upon Radeon RX Vega 64's performance.
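For readers who want to compare their own runs against these numbers, the speedup arithmetic is straightforward. Here's a minimal sketch in Python; the `to_seconds` and `speedup` helper names are our own, and the GPU time in the example is a hypothetical placeholder, not a measured result (only the 5:16 CPU baseline comes from our testing):

```python
def to_seconds(t):
    """Convert an 'm:ss' render time string to total seconds."""
    minutes, seconds = t.split(":")
    return int(minutes) * 60 + int(seconds)

def speedup(cpu_time, gpu_time):
    """How many times faster the GPU render finished vs. the CPU baseline."""
    return to_seconds(cpu_time) / to_seconds(gpu_time)

# 5:16 is the Core i7-8700K baseline from our testing; 1:19 is a
# hypothetical GPU time used purely to illustrate the math.
print(to_seconds("5:16"))                  # 316 seconds
print(round(speedup("5:16", "1:19"), 2))   # 4.0x with the placeholder time
```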
Don't get me wrong: it's still a better card than an RTX 2070, but its performance doesn't justify RTX 2080 pricing. Based on current PCPartPicker pricing for the 2070 and 2080, $600 USD would be a better-suited price for it.
Compute is a different matter. Depending on your specific work requirements, you can get some great bang for your buck.
Still, it would be nice if AMD could blow out pricing in the GPU segment as it does in the CPU segment. Although, given the large amount of memory and the FP64 performance, their strategy may be more of an attack on the compute segment.
Did you mean predecessor?
I know this has been mentioned, but do we have any hard data where we know this for certain?
I remember asking someone before, and they posted a link, but even that seemed to be a he-said-she-said kind of thing.
I do have to agree, though, overall, with a vague disappointment. Given its performance, value-wise, it seems this is worthwhile only if you really want at least two of the games in the bundle.
I hadn't thought about what AMD's motivation was, but the idea that even AMD was caught a little by surprise by Nvidia's somewhat arrogant pricing for the RTX 2070/2080/2080 Ti, and "smelled blood" as it were, is somewhat plausible.
The Ryzen CPUs, however, I am interested in; the 2700X is a good deal, and I'm waiting for the 3rd-generation Ryzen to appear to see what it can do compared to Intel's high end. But since I own a GTX 1070, I think I'll pass on this whole RTX and Radeon VII generation; there isn't much to gain for the price at this time.
The price cut from ~$5,000 (vanilla MI50) to $700 doesn't hurt.
Some undervolting and underclocking should do wonders for the noise and heat.
Time to sit back and wait for Navi...
I don't see how the 2060 is a compelling value story. Sure, it's more cost-efficient than its high-end cousins, but the 2060 offers very little in the way of a performance increase to people who were in the same price bracket previously (1070 owners), and it offers an enormous price and power premium to people who own 1060s.
Spent ~$370 almost 3 years ago for an Nvidia card? Well now you can plop down roughly the same money for about a 15% performance increase and a 2 GB loss of VRAM. That tech-journalists actually tout this as great progress mystifies me.
Unfortunately, this newest release from AMD doesn't look like it presages significant price pressure to bring Nvidia back down to earth. Things might get better as AMD drives down toward the midrange segment, but who knows? Here's hoping.