Introducing Our Benchmark System
SPECviewperf 11, introduced back in 2010, has been showing its age for a while. It no longer painted a realistic picture of modern workstation graphics hardware and driver performance; the applications composing it were simply too old. Moreover, AMD and Nvidia were thoroughly optimizing their drivers for its specific workloads, undermining the suite's value.
So, the Standard Performance Evaluation Corporation (SPEC) chose to step up its game with a much-needed update. After all, SPEC’s mission is to create relevant benchmarks that closely adhere to current industry standards.
AMD and Nvidia are both members of SPEC, allowing them to exert some influence over the new collection of tests. The idea is that no company gets an unfair advantage. We'll see how that works out in practice, though.
We added benchmark results for the Quadro K6000, which naturally excels in many of this suite's sub-tests. Bear in mind that Nvidia's flagship is a purpose-built board, though, selling for $5000 on Newegg. Unfortunately, SPECviewperf doesn't include any general-purpose compute workloads, which is where the Quadro K6000 would undoubtedly excel most.
We wanted to run tests using SPECviewperf 12 as quickly as possible in order to provide a baseline look at workstation-class graphics performance, before drivers start getting optimized specifically for the test's various workloads (similar to what happened with SPECviewperf 11). To that end, it's also important for us to gauge how relevant the performance of SPECviewperf 12 is compared to the software it claims to represent.
Important Preamble: SPECviewperf 12 is a demanding benchmark, targeting upper-middle and high-end workstation-class graphics cards. In tests that employ extremely complex models or workloads with immense memory requirements, the lower-end boards are at a disadvantage. Consequently, the results for those entry-level products need to be considered in relative terms; they're simply not meant to handle tasks like this.
Our carefully chosen test system is designed to facilitate analysis of CPU scaling based on core count, thread count, and clock rate. For most of the benchmarks, the processor is overclocked to prevent platform-limited situations. However, I also have a complete page dedicated to processor-oriented testing for a more complete performance picture.
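The core- and thread-scaling runs mentioned above boil down to re-running the same workload while exposing different amounts of CPU resources to it. As a rough illustration only (this is not SPEC's methodology; the toy kernel and the affinity calls are assumptions for the sketch), the idea can be mimicked by pinning a CPU-bound task to a subset of logical cores:

```python
import os
import time

def timed_workload(n=1_000_000):
    """CPU-bound stand-in for a benchmark kernel; returns elapsed seconds."""
    start = time.perf_counter()
    acc = 0
    for i in range(n):
        acc += i * i
    return time.perf_counter() - start

# Restrict the process to one logical CPU before a run to approximate a
# lower core count. os.sched_setaffinity() is Linux-only; on a Windows 7
# test bed like ours, the same effect comes from `start /affinity` or from
# disabling cores/Hyper-Threading in the BIOS.
if hasattr(os, "sched_setaffinity"):
    all_cpus = os.sched_getaffinity(0)
    os.sched_setaffinity(0, {min(all_cpus)})  # pin to a single core
    pinned = timed_workload()
    os.sched_setaffinity(0, all_cpus)         # restore full affinity
    print(f"pinned run: {pinned:.3f} s")
```

Comparing the pinned time against a full-affinity run shows whether a workload actually benefits from extra cores, which is exactly the question the scaling page tries to answer for each viewset.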
| Component | Details |
|---|---|
| CPU and Cooler | Intel Core i7-3770K (Ivy Bridge), overclocked to 4.5 GHz; Corsair H100i compact water cooler (Gelid GC Extreme) |
| Motherboard | Gigabyte G1.Sniper 3 |
| RAM | 32 GB (4 x 8 GB) Corsair Dominator Platinum DDR3-2133 |
| SSD | 2 x Corsair Neutron 480 GB |
| Power Supply | Corsair AX1200i |
| Operating System | Windows 7 x64 Ultimate SP1 |
| Drivers | AMD FirePro 13.251.1; Nvidia Quadro 332.21 |
| Other Equipment | Microcool Banchetto 101; HAMEG HMO 1024 four-channel digital memory oscilloscope; HAMEG HZO50 (1 mA - 30 A, 100 kHz DC, resolution 1 mA); HAMEG HMC 8012; HAMEG HZ154 (1:1, 1:10); assorted adapters |
Three Gaming Cards (For Comparison, Of Course)
Admittedly, it's usually pointless to throw gaming-oriented graphics cards into a round-up of professional products. Software drivers are such a big part of what makes a FirePro or Quadro card distinct that we know the Radeons and GeForces just won't fare as well. Then again, it's still important to know how desktop boards stack up in performance and image quality comparisons. Are there certain applications that don't necessitate workstation-class hardware? That's what we want to know. So, we're throwing in three gaming cards as well. They'll be the gray bars in the benchmark results graphs.
Let’s jump right in with the first of eight benchmark sections.
When AMD releases the mighty 16 GB FirePro 9100 based on the Radeon R9 290X core, it will be competitive with the Quadro K6000 in performance.
I find internal benchmarking is the only way to really understand the value of workstation cards. The W7000, for example, was awesome in our internal testing; the card is much better than these benchmark results suggest. Not sure why I would look at another SPEC benchmark when I will still need to test the cards in-house to really know how good they are for our applications and models.
Unfortunately, testing in the real applications (using something like SPECapc) requires actual licenses of the software. Many of these vendors (CATIA, NX, etc.) simply don't make temporary licenses available for reviewers/journalists or other non-users.
VP12 should be quite good enough to help make informed evaluations of GPU hardware. If you are concerned about seeing in-application performance measurements for particular apps, you can usually find the data with a bit of googling, although take results you find posted on the internet by "regular Joes" with a grain of salt.
About CPU scaling: "In the second set of our scaling results, only SolidWorks responds to CPU frequency. Core and thread count don't make a difference."
This is not entirely true; the difference reaches as much as 10 percent at 4.5 GHz.