
Today’s story forces us to consider one consequence of a growing emphasis on heterogeneous computing. As we offload parallel tasks to on-die or discrete graphics engines, there’s less for many-core CPUs to do.
Although it’s tempting to look at our results and assume that CUDA acceleration is helping normalize performance as the Quadro FX 1800 becomes a bottleneck, Nvidia’s older pro board isn’t on Adobe’s list of supported add-in cards. We double-checked and verified that there is no GPU activity during the test; it’s CPU-only.
We also know from past stories that our Premiere Pro rendering tasks do utilize many cores. It’s probable that our benchmark isn’t complex enough to fully demonstrate what two eight-core processors can do. The Paladin test we used previously was intensive, but designed for Premiere Pro CS5. Two generations later, our Hollywood sequence just isn’t the same.

The same goes for After Effects, which can be accelerated by CUDA/OpenCL-compatible cards, but doesn’t natively support our Quadro FX 1800. In the past, this test was actually bottlenecked by three QuickTime clips, which couldn’t be threaded. We replaced those with PNG sequences to address that limitation. Now we see 100% utilization, though performance still doesn’t scale predictably with the host processor.

Finally, by the time we get to Photoshop CC, OpenCL support is enabled on our Quadro FX 1800. Interestingly, though, backing the Nvidia card with more x86 cores doesn’t help improve the performance of accelerated filters. In fact, the opposite is true: both dual-CPU workstations are slower than the Core i7-based box.
The situation reverses when we execute a series of threaded filters. The two Xeon E5-2687W v2s do their job in half the time of one Core i7-4960X. Chalk this up as an application where it pays to know where to spend money on hardware. Certain filters are going to push mainstream CPUs with high clock rates. Others will favor massively parallel configurations. And a few more are optimized for OpenCL.
- All About Intel's Ivy Bridge-EP-Based Xeon CPUs
- Test Setup And Benchmarks
- Results: Sandra 2014 And 3DMark
- Results: Adobe CC
- Results: Media Encoding
- Results: Rendering
- Results: Productivity
- Results: Compression
- Power Consumption And Efficiency
- Ivy Bridge-EP: Faster And More Efficient On The Same Platform
The Maya render test seems to be missing O.o
Add the card's name to the whitelist file (raytracer_supported_cards.txt) in the appropriate Adobe folder and it will work just fine for CUDA, though of course it's not a card anyone who wants decent CUDA performance with Adobe apps should use (one or more GTX 580 3GB or 780 Ti cards is best).
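[Ed.: for readers curious about the whitelist edit described above, here is a minimal sketch. The file format is simply one GPU name per line; the real file lives inside your Adobe install folder, whose exact path varies by version, so this example uses a throwaway directory as a stand-in rather than guessing at your install location.]

```shell
# Stand-in for the actual Adobe folder -- replace with your install path.
ADOBE_DIR=$(mktemp -d)
WHITELIST="$ADOBE_DIR/raytracer_supported_cards.txt"

# The whitelist is plain text, one card name per line.
# Append the device string exactly as the driver reports it.
echo "Quadro FX 1800" >> "$WHITELIST"
cat "$WHITELIST"
```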
Also, hate to say it, but showing results for the card with OpenCL while not showing what happens to the relevant test times when the FX 1800 is used for CUDA is a bit odd...
Ian.
PS. I see the messed-up forum posting problems are back again (text all squashed up; I have to edit on the UK site to fix the layout). Really, it's been months now. Is anyone working on it?
The 3dsMax test does use mental ray. Our Maya render test also uses mr, and the other Max render test uses VRay.
Again, obviously, I know the productivity benches are what's important here. I know no one's gaming on a server processor, like ever. But while you've got a review sample, why not experiment a little?
Great review as always.
I have an HP Z600 with 2x 2.26 GHz Xeon 5520s, 12 GB RAM, and 2x 500 GB hard drives... total invested: $550. It's my personal 3D machine and benchmark development machine. Going to bump it up to 24 GB shortly.