Workstation Graphics: 19 Cards Tested In SPECviewperf 12
1. Introducing Our Benchmark System

SPECviewperf 11, introduced back in 2010, has been showing its age for a while. It wasn't really giving us a realistic-looking picture of modern workstation graphics hardware and driver performance anymore. The applications composing it were just too old. Moreover, AMD and Nvidia were thoroughly optimizing for the specific workloads, throwing off the suite's value.

So, the Standard Performance Evaluation Corporation (SPEC) chose to step up its game with a much-needed update. After all, SPEC’s mission is to create relevant benchmarks that closely adhere to current industry standards.

AMD and Nvidia are both members of SPEC, allowing them to exert some influence over the new collection of tests. The idea is that no company gets an unfair advantage. We'll see how that works out in practice, though.

Update: 3/17/2014

We added benchmark results for the Quadro K6000, which naturally excels in many of this suite's sub-tests. Bear in mind that Nvidia's flagship is a purpose-built board, though, selling for $5000 on Newegg. Unfortunately, SPECviewperf doesn't include any general-purpose compute workloads, which is where the Quadro K6000 would undoubtedly excel most.

We wanted to run tests using SPECviewperf 12 as quickly as possible in order to provide a baseline look at workstation-class graphics performance, before drivers start getting optimized specifically for the test's various workloads (similar to what happened with SPECviewperf 11). To that end, it's also important for us to gauge how relevant the performance of SPECviewperf 12 is compared to the software it claims to represent.

Important Preamble:

SPECviewperf 12 is a demanding benchmark, targeting upper-middle and high-end workstation-class graphics cards. In tests that employ extremely complex models or workloads with immense memory requirements, the lower-end boards are at a disadvantage. Consequently, the results for those entry-level products need to be considered in relative terms; they're simply not meant to handle tasks like this.

Benchmark System

A carefully-picked test system is designed to facilitate analysis of CPU scaling based on cores, threads, and clock rates. For most of the benchmarks, the processor is overclocked to prevent platform-limited situations. However, I also have a complete page dedicated to processor-oriented testing for a more complete performance picture.

CPU and Cooler      Intel Core i7-3770K (Ivy Bridge), Overclocked to 4.5 GHz
                    Corsair H100i Compact Water Cooler (Gelid GC Extreme)
Motherboard         Gigabyte G1.Sniper 3
RAM                 32 GB (4 x 8 GB) Corsair Dominator Platinum DDR3-2133
SSD                 2 x Corsair Neutron 480 GB
Power Supply        Corsair AX1200i
Operating System    Windows 7 x64 Ultimate SP1
Drivers             AMD FirePro 13.251.1
                    Nvidia Quadro 332.21
Other Equipment     Microcool Banchetto 101
                    HAMEG HMO 1024 Four-Channel Digital Memory Oscilloscope
                    HAMEG HZO50 (1 mA - 30 A, 100 kHz, DC, Resolution 1 mA)
                    HAMEG HMC 8012
                    HAMEG HZ154 (1:1, 1:10), Assorted Adapters

Three Gaming Cards (For Comparison, Of Course)

Admittedly, it's usually pointless to throw gaming-oriented graphics cards into a round-up of professional products. Software drivers are such a big part of what makes a FirePro or Quadro card distinct, that we know the Radeons and GeForces just won't fare as well. Then again, it's still important to know how desktop boards are represented in performance and image quality comparisons. Are there certain applications that don't necessitate workstation-class hardware? That's what we want to know. So, we're throwing in three gaming cards as well. They'll be the gray bars in the benchmark results graphs.

Let’s jump right in with the first of eight benchmark sections.

2. CATIA V6 R2012

CATIA V6 R2012

This benchmark uses CATIA V6 R2012 from Dassault Systèmes, along with several sample projects. The individual tests leverage models with between 5.1 and 21 million vertices. The viewset includes a large number of output options, such as wireframe, anti-aliasing, shading, shading with edges, depth of field, and ambient occlusion.

The following table shows how the 14 individual metrics are weighted, using AMD's FirePro W7000 as an example.

Benchmark                     Weight (%)    FPS
Depth Of Field                2.50          42.58
Pencil                        2.50          45.11
SSAO                          9.00          56.30
Depth Of Field                2.50          52.72
Edges                         9.00          85.96
Pencil                        2.50          56.71
Shaded                        9.00          75.56
Shaded + Edges                9.00          72.37
Shaded + SSAO                 9.00          50.98
Shaded                        9.00          36.72
Shaded + SSAO                 9.00          59.31
Shaded + SSAO + Reflection    9.00          24.19
Shaded + SSAO                 9.00          30.36
Shaded + SSAO + Edges         9.00          26.63
Weighted Geometric Mean = 47.57
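SPECviewperf folds the sub-test frame rates into that single composite score using a weighted geometric mean, which keeps any one workload from dominating the result. Here's a quick sketch of the calculation (the helper function name is ours, not SPEC's), fed with the FirePro W7000 figures from the table above:

```python
import math

def weighted_geomean(results):
    """Composite score from (weight_percent, fps) pairs:
    exp(sum(w * ln(fps)) / sum(w))."""
    total_w = sum(w for w, _ in results)
    log_sum = sum(w * math.log(fps) for w, fps in results)
    return math.exp(log_sum / total_w)

# CATIA sub-test weights and FirePro W7000 frame rates
catia_w7000 = [
    (2.50, 42.58), (2.50, 45.11), (9.00, 56.30), (2.50, 52.72),
    (9.00, 85.96), (2.50, 56.71), (9.00, 75.56), (9.00, 72.37),
    (9.00, 50.98), (9.00, 36.72), (9.00, 59.31), (9.00, 24.19),
    (9.00, 30.36), (9.00, 26.63),
]
print(round(weighted_geomean(catia_w7000), 2))  # ≈ 47.57
```

The same formula applies to every viewset in this review; only the weights and frame rates change.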
3. Results: CATIA V6 R2012

The Hawaii-based Radeon R9 290X's strong performance has us really looking forward to an upcoming FirePro product built on the same GPU, even if Nvidia's Quadro K6000 is clearly faster. Don't expect to substitute a Radeon into your professional workflow, though. AMD's Catalyst drivers aren't certified for any of these applications, so you'd be missing some of the workstation card's features. Then again, for what it's worth, the Catalyst package works a lot better in CATIA than Nvidia's GeForce driver.

Moving on to the benchmark results...

The FirePro W5000 and Quadro K2000 (as well as the FirePro W7000 and Quadro K4000) are on about even footing when it comes to price. The same can’t be said for their respective performance, though. AMD's FirePro W5000 manages to slide past Nvidia's Quadro K4000, which is almost twice as expensive. Similarly, the FirePro W7000 is only defeated by the Quadro K5000, which costs more than two times as much.

4. Creo 2

Our second benchmark is based on the Creo 2 design software from PTC, and again uses a handful of sample projects. The individual workloads manipulate models with between 20 and 48 million vertices. The viewset includes a number of output options supported by the application, such as wireframe, anti-aliasing, shading, shading with edges, and shaded reflections.

The following table shows how the 13 individual metrics are weighted (using the FirePro W7000 as our example).

Benchmark                        Weight (%)    FPS
Shaded                           5.00          73.23
Wireframe + AA x8                10.00         52.51
Shaded + Edges                   10.00         55.51
Hidden                           5.00          10.05
Reflection                       15.00         27.73
Shaded                           5.00          17.03
Wireframe + No Hidden + AA x8    10.00         63.70
Shaded + AA x8                   5.00          52.18
Shaded                           5.00          44.98
Shaded + Edges                   10.00         31.79
Hidden                           5.00          3.50
Shaded + AA x8                   5.00          49.46
Shaded + Edges HQ                10.00         28.42
Weighted Geometric Mean = 33.44
5. Results: Creo 2

The price/performance gaps aren't as pronounced in Creo 2. The smaller FirePro W5000 outperforms its direct competition from Nvidia by almost 28 percent, whereas the FirePro W7000 and the similarly-priced Quadro K4000 post fairly similar numbers. As you can see, the gaming-oriented cards don't do very well at all.

6. Energy

This benchmark simulates a typical volume rendering application of the sort used for geophysical surveys (think seismology, along with oil and natural gas exploration) and medical imaging. During the surveys, 2D images are combined to form volumetric representations, creating 2D and 3D views that can be further analyzed and evaluated.

The energy-01 viewset takes advantage of hardware support for 3D textures and the associated trilinear interpolation, which in turn depends on a lot of fast graphics memory. In fact, there's a large-res test that employs a 3.2 GB dataset. Cards with less than 4 GB of RAM can't complete it. This explains why some of our lower-end boards perform so badly.
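The GPU's 3D-texture hardware performs that trilinear interpolation automatically for every sample, which is exactly why the viewset leans so hard on memory bandwidth: each filtered fetch touches eight neighboring texels. A scalar sketch of the math behind a single fetch (the nested-list volume here is a made-up stand-in for a real 3D texture):

```python
def trilerp(vol, x, y, z):
    """Sample a 3D grid vol[z][y][x] at fractional voxel coordinates."""
    x0, y0, z0 = int(x), int(y), int(z)
    fx, fy, fz = x - x0, y - y0, z - z0

    def lerp(a, b, t):
        return a + (b - a) * t

    # Interpolate along x on the four edges of the surrounding voxel cell...
    c00 = lerp(vol[z0][y0][x0],         vol[z0][y0][x0 + 1],         fx)
    c10 = lerp(vol[z0][y0 + 1][x0],     vol[z0][y0 + 1][x0 + 1],     fx)
    c01 = lerp(vol[z0 + 1][y0][x0],     vol[z0 + 1][y0][x0 + 1],     fx)
    c11 = lerp(vol[z0 + 1][y0 + 1][x0], vol[z0 + 1][y0 + 1][x0 + 1], fx)
    # ...then along y, then z: eight texel reads per filtered sample
    return lerp(lerp(c00, c10, fy), lerp(c01, c11, fy), fz)

# Tiny 2x2x2 "volume" whose density equals the x coordinate
volume = [[[0.0, 1.0], [0.0, 1.0]], [[0.0, 1.0], [0.0, 1.0]]]
print(trilerp(volume, 0.5, 0.5, 0.5))  # 0.5
```

Multiply those eight reads by millions of samples per frame and it's easy to see why cards with fast, plentiful memory pull ahead here.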

In the following table, you can see how the seven workloads are weighted. AMD's FirePro W7000 once again serves as our example.

Benchmark    Weight (%)    FPS
Test 1       14.00         10.13
Test 2       14.00         9.80
Test 3       14.00         5.03
Test 4       14.00         2.55
Test 5       14.00         2.95
Test 6       15.00         0.54
Test 7       15.00         0.55
Weighted Geometric Mean = 2.64
7. Results: Energy

The GeForce and Radeon cards are only included for comparison purposes, but the Radeon R9 290X sure looks like a strong performer. More interesting is that the cards with lots of graphics memory excel. This is expected, since boards with less than 4 GB automatically fail two of the seven sub-tests.

Overall, AMD's FirePro cards lead by a substantial margin. The W7000 competes with Nvidia's much more costly Quadro K5000 (instead of the Quadro K4000 closer to its price class). Meanwhile, the FirePro W5000 dominates the Quadro K2000.

This specific benchmark is more or less synthetic. But it’s still based on very realistic tasks that geologists, doctors, and engineers perform out in the field. Consequently, it does provide a pretty good picture of how a given workstation-oriented card scales based on its hardware and configuration.

8. Maya 2013

Maya 2013 is another popular application included in SPECviewperf 12. Viewport 2.0 is purposely excluded, since it employs DirectX. The model used for testing is made up of 727,500 vertices, and includes options for shading, ambient occlusion, multi-sample anti-aliasing, and transparency.

Once again, the table below shows how the six individual benchmarks are weighted, with the FirePro W7000 serving as our example.

Benchmark                                    Weight (%)    FPS
Shaded                                       18.00         59.37
Shaded + SSAO                                16.00         39.25
Shaded + SSAO + MSAA                         16.00         38.91
Shaded + SSAO + MSAA + FPRT                  16.00         34.32
Shaded + SSAO + MSAA + Transparent-Weight    16.00         36.34
Shaded + Wireframe                           18.00         80.16
Weighted Geometric Mean = 46.42
9. Results: Maya 2013

When it comes to gaming-oriented graphics cards, Nvidia's GeForce dominates the Radeon. The opposite is true when we isolate the workstation products, though. AMD's FirePro W7000 beats the more expensive Quadro K5000 again, and the FirePro W5000 leaves Nvidia's Quadro K2000 in the dust. Of course, none of the other professional boards can touch Nvidia's Quadro K6000, though at $5000, it's as expensive as it is rare.

In theory, you could probably get by with a GeForce or Radeon card in Maya 2013, so long as the potential for lower color depth doesn't bother you. If you're willing to sacrifice SSAO and MSAA, there's also the option of falling back to an older workstation-class card (unless you're working with extremely large models).

10. Medical

As with the Energy viewset, which covered geophysical surveys and imaging, SPECviewperf 12 uses a synthetic suite to represent the medical field, making use of functionality often employed for this kind of texture-based volume rendering. Two-dimensional images, created through computed tomography (CT) or magnetic resonance imaging (MRI), are combined into a 3D representation.

The direct volume rendering is achieved by lining up the image slices in parallel. This is done based on texture coordinates, which are specified at every single vertex. They define a location in 3D space (x, y, and z), along with the alignment and scaling of the texture on the polygon. Next, the values needed for the actual display are calculated from the texture coordinates. This is called compositing. The entire volume can be thought of as a large number of voxels, or volume pixels, which contain opacity and color on top of the texture information.

Volume ray casting is used to calculate the actual image from the voxels. The present benchmark has two parts. The “4D Heart Data Set” contains several 3D objects, and the “Stag Beetle” places large demands on memory.
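Per ray, the compositing step described above boils down to the standard front-to-back "over" operator. A minimal grayscale sketch follows; real renderers run this in a shader for every pixel, and the sample values here are hypothetical:

```python
def composite_ray(samples):
    """Front-to-back compositing of (color, opacity) pairs
    sampled along one viewing ray through the volume."""
    color, alpha = 0.0, 0.0
    for c, a in samples:
        color += (1.0 - alpha) * a * c   # light from this slice, attenuated by what's in front
        alpha += (1.0 - alpha) * a       # accumulated opacity
        if alpha >= 0.99:                # early ray termination: nothing behind is visible
            break
    return color, alpha

# A bright semi-transparent slice in front of a dark one
print(composite_ray([(1.0, 0.5), (0.0, 0.5)]))  # (0.5, 0.75)
```

The early-termination branch hints at why these workloads scale so differently across GPUs: opaque datasets finish rays quickly, while mostly-transparent ones (like the Stag Beetle set) force long sample chains and heavy memory traffic.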

Weighting for the 10 individual tests is found in a handy table below. Not surprisingly, AMD's FirePro W7000 serves as our example again.

Benchmark                     Weight (%)    FPS
4D Heart Data Set - Test 1    10.00         72.19
4D Heart Data Set - Test 2    10.00         74.60
4D Heart Data Set - Test 3    10.00         50.06
4D Heart Data Set - Test 4    10.00         19.34
4D Heart Data Set - Test 5    10.00         38.76
Stag Beetle - Test 6          10.00         20.69
Stag Beetle - Test 7          10.00         18.07
Stag Beetle - Test 8          10.00         9.21
Stag Beetle - Test 9          10.00         3.27
Stag Beetle - Test 10         10.00         3.79
Weighted Geometric Mean = 19.66
11. Results: Medical

The gaming cards do really well (only the Quadro K6000 is faster), but we have to set them aside; nobody's going to use a consumer-oriented product in the medical field.

With those out of the way, we're left with essentially the same picture painted by Energy, the other synthetic volume rendering benchmark. Not only does AMD's FirePro W7000 dominate the Quadro K5000 in yet another suite, but the FirePro W5000 manages to beat the Quadro K4000 by a good margin and is almost twice as fast as the similarly-priced Quadro K2000.

Just remember that this is a synthetic collection of tests, though. It puts an emphasis on memory capacity and performance. While representative of the workloads encountered in the medical field, SPEC's benchmark primarily provides a good summary of how these cards might fare in real-world applications.

12. Showcase 2013

This is the first DirectX-based benchmark, compliments of Autodesk (though a number of smaller ISVs are moving their professional applications to DirectX as well). It uses a model with eight million vertices, and features shading, projected shadows, and self-shadows.

There are four sub-tests in the suite, each worth 25% of the total score. The frame rate on the right comes from AMD's FirePro W7000.

Benchmark          Weight (%)    FPS
Ambient Shadows    25.00         35.88
Both Shadows       25.00         33.38
No Shadows         25.00         37.34
Shadows            25.00         34.25
Weighted Geometric Mean = 35.18
13. Results: Showcase 2013

The AMD Radeon and FirePro graphics cards run circles around Nvidia's comparable GeForce and Quadro cards. The FirePro cards, specifically, live up to their theoretical performance, whereas the Quadros (except for the first-place K6000) benefit to a lesser extent from optimized drivers.

14. Siemens NX 8.0

This OpenGL-based benchmark centers on Siemens PLM's NX 8.0. The models getting worked on contain between 7.15 and 8.45 million vertices, and the rendering options include wireframe, anti-aliasing, shading, shading with edges, and studio mode. As with most other workstation-class apps, NX 8.0 relies on certified hardware and optimized drivers.

How the 10 benchmarks, some of which feature very different graphics options, are weighted is illustrated in the table below. No surprise, the FirePro W7000 gives us our representative performance.

Benchmark               Weight (%)    FPS
Advanced Studio + AA    7.50          23.79
Shaded + AA             10.00         66.17
Shaded + Edges + AA     20.00         39.54
Studio + AA             5.00          29.09
Wireframe               7.50          151.19
Advanced Studio         7.50          48.58
Shaded                  10.00         82.10
Shaded + Edges          20.00         50.86
Studio                  5.00          42.40
Wireframe               7.50          231.83
Weighted Geometric Mean = 57.45
15. Results: Siemens NX 8.0

More so than any test run thus far, this one shows what happens when you use a gaming-oriented graphics card without optimized drivers in a professional piece of software. FirePro and Quadro cards cost more, but you can see why someone earning a living doing work in NX 8.0 would shell out the extra money for workstation hardware.

It's not just that paying more for a pro card gets you higher-quality features and better support; you also get drivers specifically optimized and validated for compatibility in titles like this one.

So, while the Radeon R9 290X and GeForce GTX 780 Ti fall to the bottom of our chart, AMD's FirePro W7000 dominates the Quadro K5000, which is in a higher price category. Nvidia's Quadro K4000, which should have gone up against the W7000 based on pricing, is beaten by the much less expensive FirePro W5000. The Quadro K6000 makes up for some of the embarrassment by laying down a command performance.

16. SolidWorks 2013

The last benchmark is a classic: SolidWorks 2013 by Dassault Systèmes. The models at the center of this suite's workloads have 2.1 to 21 million vertices. Each metric is based on the software's many features, including shaded mode, shaded-with-edges mode, ambient occlusion, normal shading, and environment cubemaps. This is a bit of a departure from the SPECapc test, as it doesn’t implement the CPU benchmark and uses fewer models (though it does add a workload with parallax effects).

One last time, how the individual benchmarks are weighted is listed in the table below, along with performance figures from AMD's FirePro W7000.

Benchmark                         Weight (%)    FPS
Shaded                            8.00          57.11
Plax Plugin (Parallax Effects)    8.00          45.18
SSAO                              10.00         44.98
Shaded + Edges                    10.00         44.18
Wireframe                         8.00          46.38
Shaded + Edges                    10.00         88.02
Shaded                            8.00          100.02
Shaded + Edges                    10.00         45.85
AO                                10.00         53.24
Shaded                            8.00          112.08
Shaded + Edges                    10.00         109.22
Weighted Geometric Mean = 62.67
17. Results: SolidWorks 2013

AMD's FirePro cards tend to be strong in SPECapc 2013 for SolidWorks. But the modifications made to SPECviewperf 12, coupled with Nvidia's recent Quadro driver update, change the game. The Quadro K5000 pulls ahead of its competition this time, trailing only Nvidia's own Quadro K6000. And speaking of that card, it proves to be an exceptional piece of hardware in yet another test, even though its price tag exceeds what most professionals are willing to pay.

Still, the FirePro W5000 edges out Nvidia's more expensive Quadro K4000. Price would suggest it does battle against the Quadro K2000, which simply cannot compete.

18. CPU Scaling

Scaling with Cores, Threads, and CPU Clock Frequency

Much has been made of SPECviewperf 12's platform independence by the parties involved in its design; the benchmark's focus is decidedly on GPU performance. Still, running software on different systems never yields exactly the same results, making it harder to relate our test results to all of the workstations out there requiring a new professional graphics card.

The critics who say our reference machine's 4.5 GHz Core i7 isn't representative of a real workstation might have a point. But using the aggressively-overclocked CPU was necessary to assure every score we generated was directly attributable to the cards we were benchmarking. That's a particularly important point when you're talking about similar-performing boards with big price tags. A slower CPU can quickly become a bottleneck, pushing small differences into the margin of error range.

Nevertheless, we still thought it'd be prudent to gauge the real effect of platform performance. To keep the exercise reasonable, we compared two cores and two threads to four cores and eight threads using 3.0, 3.5, 4.0, and 4.5 GHz frequencies. AMD's FirePro W7000 remained a constant, giving us the rendering power of an upper-mid-range graphics card priced under $1000. Even though high-end GPUs become processor-limited most quickly, the benchmark results can still get interesting in some applications, even with the board we used.

Both Creo 2 and Maya 2013 stand out from our first set of numbers. The two applications scale noticeably based on clock rate. Core and thread count matter less in Creo 2 at a given frequency, but Maya 2013 wants to run on a quad-core machine. On the other hand, CATIA and energy-01, the first of two volume rendering benchmarks, aren’t overly sensitive to CPU performance. Still, we can see that they don't perform quite as well with only two cores, two threads, and a 3 GHz clock rate.

In the second set of our scaling results, only SolidWorks responds to CPU frequency. Core and thread count don't make a difference. All of the other tests (medical-01, Showcase, and NX 8.0) are truly independent of platform performance, as revealed by their extremely uniform results.

19. Image Quality And Desktop Drivers

Sharp Edges vs. Anti-Aliasing

Nvidia and AMD build their OpenGL drivers using unique strategies, each with its own pros and cons. The resulting visual differences aren’t solely a result of hardware architecture, since they also show up on boards based on the older VLIW4 and Fermi GPUs.

Meanwhile, there are hardly any differences in the way these competitors output DirectX content, aside from somewhat darker shadows on the FirePro cards.

Let’s compare two scenes from Maya that illustrate both companies' philosophies well. These pictures diverge in the same way we’re used to seeing from older titles like LightWave.

Maya 2013: Shaded

Maya 2013: Shaded + SSAO + MSAA

Regardless of whether MSAA is turned on or not, the FirePro workstation graphics cards produce sharper edges and some additional detail (z-buffer). Then again, transitions aren’t as smooth as those produced by Nvidia's Quadro cards, instead suffering from unsightly “flashes.”

Once movement is added to the scene, the flashes turn into flickering that even AMD’s MSAA technology can’t fully get rid of. Nvidia’s MSAA implementation doesn’t have this problem. However, if the camera is positioned in such a way that polygons are stacked closely together, the z-buffer loses track and smaller surfaces can get lost with Nvidia’s MSAA.

Looking at the output from NX 8.0, there are some differences you'd spot right away, depending on the output options you use. These are consistent with the observations we just made, validating our impressions so far.

NX 8.0: Shaded

NX 8.0: Shaded + Edges + AA

Aliased edges are more tolerable in the workstation space, as opposed to the realism-dependent gaming market, so long as they're the price you pay for additional detail. Creo 2 makes a good example. There are significant differences between AMD and Nvidia in some places. My preference is the FirePro's wireframe output, which is sharper than Nvidia’s smooth picture.

Consumer Graphics Cards with Gaming Drivers

Time and time again, we've shown that desktop graphics cards with their gaming-optimized drivers don’t fare well when it comes to professional tasks. They're sometimes able to skate by in the private and semi-professional sector, depending on the application (and especially if it's DirectX-based). But as soon as you start messing with complex OpenGL applications, it's over.

AMD’s Catalyst driver fails miserably twice in a row. Nvidia’s GeForce driver doesn't run into those egregious display errors. However, it slows down so much that performance simply isn't usable.

In the example below, AMD completely drops the ball in a Creo 2 scene with transparent backgrounds. Our first example shows how parts of previous frames are retained in later frames.

The second example illustrates how the wireframe display doesn’t work when it comes to hiding invisible polygons (hidden), or anti-aliasing.

At this point, we'd like to speak up in defense of more entry-level workstation graphics cards, which often look weak in benchmark charts next to the highest-end boards. Most folks won't use models as complex as the ones tested in SPECviewperf 12, using all of the high-detail settings we benchmarked. Under less taxing conditions, it's often better to use a slower pro card than to try shoehorning a gaming board into a business machine.

20. SPECviewperf 12: A Much-Needed And Welcome Update

In the end, benchmarks like SPECviewperf 12 can only give us a snapshot of workstation-oriented graphics card performance, and they’ll never cover the entire range of applications, either. Still, this is a long-overdue update to a popular suite of tests. The inclusion of up-to-date professional software from a number of different fields makes it easier for us to estimate the performance of today's FirePro and Quadro boards.

Don't read these results as gospel, though. Eight titles with a handful of sub-tests aren't enough for judging a product fairly and completely, particularly when you're talking about the lower-end models that get brutalized by taxing geometry and the memory bandwidth demands applied by deliberately-complex workloads. In fact, for every market where workstation graphics are used, there are unique and specific needs that can't be accounted for in a general collection of tests.

SPECviewperf 12’s design is, and always has been, focused on the upper-middle and high-end segments. Think about that before writing off less expensive offerings as insufficient. Their target segments are barely, if at all, represented by this benchmark.

It'd be nice to shirk the eternal FirePro versus Quadro debate and simply let the benchmark results and screenshots speak for themselves. But really, both AMD and Nvidia have done their homework, and the products we tested don’t have any obvious flaws or weaknesses.

AMD, in particular, has noticeably improved its standing in the workstation graphics card market, increasing share with its FirePro boards. There are still a lot of professional applications lacking corresponding optimized drivers, particularly outside of the titles in SPECviewperf 12. But the list of those that are officially supported keeps growing.

Moreover, AMD's price/performance ratio in the low- and mid-range segments is already appealing, and we're now seeing a push to offer more value in the high-end space. That's where the company might be able to steal away some of Nvidia's business, especially in the titles its drivers optimize really well. Although this is bad news for Nvidia, the winner of a close race is always going to be you.

Bottom Line

SPECviewperf 12 might not be perfect, but perfection is almost impossible to achieve when it comes to evaluating workstation-class graphics cards. There are simply too many commonly-used professional apps out there to include them all, and the benchmark's scope has to be limited somewhere.

But the plan to update viewperf regularly, and thus keep it adjusted continuously to the newest demands and requirements, is more than welcome. It's also necessary for a suite that’s supposed to provide relevant results.