Autodesk 3ds Max 2014
Space Flyby
This is the 3ds Max test used for benchmarking CPUs here at Tom's Hardware. It's a fairly straightforward mental ray render with very little in the way of advanced settings. Thus, it finishes pretty quickly, even on our baseline machine. Dell's Precision T5600 winds up 2.24x faster than that system.
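The speedup figures quoted throughout are simply the ratio of the baseline machine's render time to the T5600's. A minimal sketch of that arithmetic (the sample times below are illustrative only, not our measured values):

```python
def speedup(baseline_seconds: float, test_seconds: float) -> float:
    """Return how many times faster the test system is than the baseline."""
    return baseline_seconds / test_seconds

# Hypothetical render times chosen to illustrate the 2.24x figure above
print(round(speedup(224.0, 100.0), 2))  # 2.24
```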
3ds Max: V-Ray

This is our Tom’s Hardware logo scene, which was originally created in LightWave 3D, imported into Max via FBX format, retextured, and then output with new settings in the popular V-Ray renderer from Chaos Group. Four frames are sampled from the animation to reflect a quartet of different frame content types. As you can see, they behave differently depending on the number of polygons in the scene, how much motion blur is applied, and whether the motion blur is linear. Dell shows up 4.3-4.6x faster than our baseline Xeon E3 box, thanks to its two eight-core CPUs and much higher memory bandwidth.
3ds Max: V-Ray RT
While I was messing around using V-Ray RT for another article, I did some digging in the 3ds Max settings and found that the color space wasn't set correctly for our benchmark, and the orientation of the light used to illuminate the car wasn't right, either. I also darkened up the tires, taking us from this:
To this:
It also affected our render times, though just slightly.

The update runs slightly slower on one machine and slightly faster on the other. Overall, Dell finishes 3.1x faster than the baseline box. That's pretty much the result we were expecting from a CUDA-accelerated workload shifting from Quadro 2000 to K5000. As a side effect, we're using a better render now, too.
3ds Max: iray

Our iray benchmark employs the GPU-accelerated renderer from mental images, the same company behind mental ray, using a scene provided by Autodesk. The Precision T5600 comes out 2.2x faster, which is a narrower victory than the other CUDA-based workload, presented above.
3ds Max DirectX Preview

This benchmark tests the performance of 3ds Max’s 3D display by playing back a preview of the entire THG Logo animation to our machine's RAM drive. It's a fairly accurate representation of performance in 3ds Max's DirectX 11-based viewport. While the test benefits slightly from the T5600's faster GPU, it's largely bottlenecked by the process's single-threaded nature.
Autodesk Maya 2014
Maya: mental ray Rendering

Maya ships with mental ray as its renderer, and since our complex render test for 3ds Max is done in V-Ray, we're using mental ray in Maya. It’s the Tom's Hardware Logo scene again, retextured and with different settings (remember, these don't translate across apps). Autodesk's 2014 apps do come with an updated version of mental ray, but those features aren't being tested here. For example, it's now possible to offload your global illumination calculations onto the GPU, essentially generating a GI "pass" on the graphics card and using it in the software render. It's an interesting option to explore, though this benchmark is still entirely CPU-based.
The results are closer between these two machines than the other renders we've presented. The new version of mental ray seems to be much more efficient at calculating motion blur than its predecessor, more than halving the render time for frame 500. Faster render times on the same scene are always a good thing!
Maya: Playblast

Maya’s Playblast feature records a viewport to storage (or a RAM drive in our case) by grabbing the preview windows and spooling them out. Even with Maya 2014’s new DirectX 11 preview windows, the Playblast function is still so single-threaded that it limits the T5600's performance, allowing the baseline machine's higher-clocked Ivy Bridge-based processor to take a lead.
- Inside And Outside Of Dell's Precision T5600 Workstation
- Test System And Benchmarks
- Results: Adobe Creative Cloud
- Results: Autodesk 3ds Max And Maya
- Results: NewTek LightWave 3D 11.5, E-on Vue 11, And Blender
- Results: Digital Audio Workstation
- Results: LuxMark, Cinebench, SPECviewperf, and Euler3d
- Results: Media Encoding And Productivity
- Results: Synthetics
- Results: Compression
- Results: Storage
- The Precision T5600 Is Still At The Top Of Dell's Workstation Portfolio



Also, good job on the review as always.
It says this right beneath the graph:
The tests seem evenly split between single- and multi-threaded workloads, and some of them incur little or no hit from AA, which points to something other than the GPU bottlenecking performance. In fact, SolidWorks performs better with AA on. How odd is that?
Correct me if I am wrong, but as far as I know, basic S*#tWorks is not optimized for multi-threading (hence I am only running an i7 3820; anything higher would not benefit performance). Now, SW Simulations and PhotoView 360 are a different story.
I just might run SPECviewperf 11 on my system to see how it performs. To others it might matter, but in my designs I couldn't care less about AA; I am just happy when SolidWorks does not crash.
PhotoView 360's renderer is written by the guys at Luxology, based on the renderer from their 3D application Modo, and is very well multithreaded.
Tuffjuff: I asked myself the same question about the RAM. The machine would have performed vastly better in the AE tests with 32 GB, because I could have used all of the physical CPU cores.
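For context, After Effects' CS6-era multiprocessing ("Render Multiple Frames Simultaneously") spawns one background render process per core, each with its own RAM allocation, so total RAM caps how many cores actually get used. A rough sketch of that constraint (the 3 GB-per-process and 4 GB OS reserve figures are illustrative assumptions, not Adobe's numbers):

```python
def usable_render_processes(total_ram_gb: int, physical_cores: int,
                            ram_per_process_gb: int = 3,
                            os_reserve_gb: int = 4) -> int:
    """Cores AE can actually feed, given RAM allocated per background process."""
    affordable = (total_ram_gb - os_reserve_gb) // ram_per_process_gb
    return max(0, min(physical_cores, int(affordable)))

# 16 GB on a 16-core T5600 leaves most cores idle under these assumptions...
print(usable_render_processes(16, 16))  # 4
# ...while 32 GB would feed more of them
print(usable_render_processes(32, 16))  # 9
```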
A very good and welcome review. The systems compared were, however, not at the same level relative to their categories. More would have been revealed if the P500X used something like a GTX 680 (in other words, about second from the top of their respective lines) rather than a Quadro 2000, which is two generations old and, in effect, just a much lower-line ancestor of the K5000. I imagine these tests are complex and time-consuming, but it would have provided perspective if at least one direct competitor from HP and/or Lenovo appeared.
A couple of comments on the T5600 design.
1. I can understand the trend toward more compact cases, and even the need to pander to styling and branding, but the TX600 series is inexcusably short on drive bays. My mother's 2010 dual-core Athlon X2 in a $39 case, "Grandma's TurboKitten 3000", has more expansion bays. Still, the T5600 situation is better than the impending Mac Dustbin Pro.
2. The brutalist architecture may have convenient handles, but to me it's a clunker, both visually and in features. Everyone I know in architecture, industrial design, graphic design, animation, or video editing keeps their workstation vertical, hates vertical optical drives, and often has two of those plus a card reader. Also, as Jon Carroll mentions, this machine is short on front USB 3.0 ports. I would question a workstation at this level without at least three USB 3.0 ports on the front; there are never enough USB ports on a workstation. The Precision T5400 has two front, six rear, and two on the back of the (SK-8135) keyboard, all USB 2.0 ports, and I still have to add a four-port hub on one of the back ports.
Oh, and Jon, the indentation on the top of the T5600 is not for car keys; that's where you would set your short-cabled USB external drive(s) and flash drives, if there were enough USB 3.0 ports. I think my Precision T5400 is wearing an indentation in that exact location from a WD Passport.
3. As tuffjuff also comments, 16GB of RAM is not nearly enough for this kind of machine. Dual-CPU systems divide the RAM equally between the processors; these motherboards have separate slots and special sequences of symmetrical positioning. This means that the test system had, in effect, only 8GB of RAM per CPU, or, as I like to express it, 1GB per core. There's a reason the T5600 supports 128GB and the T7600 can use 512GB of RAM: Windows, programs, and files are big, and in these systems a lot of programs are running at once. I use a formula of 3GB for the OS, 2GB for each simultaneous application, and 3GB for open files. As my workstations often run five or six applications plus a constant Intertubes and Windows Exploder, sorry, Explorer, my new four-core HP z420 has 24GB of RAM (6GB/core). If I had a dual E5-2687W system, given there are so many more cores to feed, I would consider 64GB a reasonable level: 32GB per CPU (4GB/core).
4. The most worrying comments in the review concern the noise. Of course, a system with two 150W CPUs and a school-bus-sized GPU needs good airflow, but this one devotes so much of the facade to the grille that the optical drive has to sit in the stupid vertical position, and apparently the openness that lets the air in also lets the noise out. In my view, noise from a workstation is close to being a deal-breaker. This is another reason why the vertical drive is so silly: few put their workstation horizontally on the desktop right in front of them, because of the noise.
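The RAM sizing rule of thumb from point 3 above (3GB for the OS, 2GB per simultaneous application, 3GB for open files) can be sketched as a quick calculator. This just restates the comment's own numbers, not an official guideline, and the rounding targets are hypothetical common kit sizes:

```python
def recommended_ram_gb(apps_open: int) -> int:
    """bambiboom's rule of thumb: 3 GB OS + 2 GB per app + 3 GB for open files."""
    base = 3 + 2 * apps_open + 3
    # Round up to a plausible memory-kit size (illustrative tiers)
    for size in (8, 12, 16, 24, 32, 48, 64, 128):
        if size >= base:
            return size
    return base

# Five or six apps at once lands on 24 GB, matching the z420 example above
print(recommended_ram_gb(6))  # 24
```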
Dell apparently wants to ease out of the declining PC business, and these kinds of design decisions might help that process. I think, though, that Dell, plus Autodesk and Adobe, who want to force eternal cloud-subscription fees, are going to find many, many workstation users who will object, buy AutoCad 2014 and CS6, run them on Precision T7500s, and preserve the DVDs in hermetically sealed containers. I, for one, will never, ever be sending my industrial design files into the ether and onto other firms' servers.
This assessment is a good demonstration of the way in which workstations and creation applications continue to evolve each other. However, even as many workstation applications have become far more capable, especially in 3D modeling and simulation, there is still a vast under-utilization of multiple cores in those applications. It's not accidental that the T5600 review emphasized rendering, as it's an example where the core applications have adapted to the availability of multiple cores and can also take advantage of GPU co-processing. It's an odd thing and a puzzle: make a model in Maya and run simulations in SolidWorks or Inventor essentially on a single core, but make a rendering of that model using fourteen cores. I make Sketchup Pro models that, when they go over about 20MB, become almost unusable without navigating in monochrome and clever, careful, and constant fussing with layers. Rendering is very calculation-intensive, but so are thermal, gas flow, atmospheric, molecular-biological, and structural modeling and simulations.
The T5600 review, as it concentrates on applications that reveal the full capabilities of the $4,000 of CPUs and $1,800 of CUDA cores, also reveals this fundamental engineering hollow in workstation applications, and indeed in another important realm. I'm not a gamer, but on this site I can feel gamers wondering the same thing as workstation wonks. Software companies: there are billions of CPU cores waiting for something to do! Why the hell aren't there more multi-core applications?
Cheers,
BambiBoom
PS>
1. Dell Precision T5400 (2009)> 2X Xeon X5460 quad core @3.16GHz > 16 GB ECC 667> Quadro FX 4800 (1.5GB) > WD RE4 / Segt Brcda 500GB > Windows 7 Ultimate 64-bit > HP 2711x 27" 1920 x 1080 > AutoCad, Revit, Solidworks, Sketchup Pro, Corel Technical Designer, Adobe CS MC, WordP Office, MS Office > architecture, industrial design, graphic design, rendering, writing
2. HP z420 (2013)> Xeon E5-1620 quad core @ 3.6 / 3.8GHz > 24GB ECC 1600 > Firepro V4900 (Soon Quadro K4000) > Samsung 840 SSD 250GB / Seagate Barracuda 500GB > Windows 7 Professional 64 > to be loaded > AutoCad, Revit, Inventor, Maya (2011), Solidworks 2010, Adobe CS4, Corel Technical Design X-5, Sketchup Pro, WordP Office X-5, MS Office
I was trying to understand what the chart meant and read it three times before I realized it was a template leftover.
bambiboom, I feel your pain. Recently I finished sorting out an After Effects system for someone. Runs great with a 3930K @ 4.7, Quadro 4000, and three GTX 580 3GB cards for CUDA, but it's crazy that numerous plugins (both native and 3rd-party) are only single-threaded. In one sequence, render performance is excellent during scenes with heavy raytracing, but then it grinds to a halt when a Shatter plugin kicks off; one can see in the usage graphs that the GPUs aren't being used and only one CPU core is active. One of the particle effects plugins suffers from a similar issue.
Shame about the noise. I built a Dell T7500 a while ago with two Xeon X5570s and 48GB RAM. It runs virtually silent; so quiet, in fact, that I'm prone to forget it's even turned on. I'm surprised Dell haven't focused more on this area with the T5600, since noise is certainly a factor for most pro users I know, unless the systems are for some reason in a machine room instead of in front of them (more common with Discreet setups).
Ian.
Ditto that; any workstation that would need 16 cores would very likely make use of far more than 16GB of RAM.
One example 20-second clip uses about 24GB of RAM to render; other clips use more than 40GB. This is why having lots of RAM is so critical for AE systems.
Varies by application. Systems used for textile printing can use large datasets, especially those for carpet printing. Further up the scale, GIS systems typically use tens of GB, while at the top end are tasks employing datasets that are hundreds of GB (defense imaging, medical imaging, and again GIS). That's when one tends to use shared-memory systems instead, which can have multiple TBs of RAM, plus lots of CPUs and I/O to handle the load. 16GB is nothing in the grand scheme of things, but it is a bit low for a modern dual-socket Xeon machine.
Ian.
Responding as you listed:
The P500X is our baseline workstation. Other systems will be compared against it. It came with a Quadro 2000 when we got it last year, so that is what is in it. Since it's a workstation, and this is a review of an integrated workstation build, it's going to use a workstation GPU.
1: and has twice as many processors as the new mac pro, more drive bays, actual expansion slots, a bigger selection of graphics cards, etc...
2: yes.
(looks over at the editor about the keys remark...)
3: The thing is, it isn't "8 GB of memory per CPU". The CPUs can both access the memory that the other is controlling across the QPI connection between the two processors. Most applications also multithread and use the memory en masse; it is only After Effects rendering that wants to allocate memory per core.
We're looking at adding some fluid simulation and such benchmarks later. I'd also like to add CAD and engineering software.
Nice review, and may I say thank you for finally including V-Ray in the benchmarks! I really hope this becomes a standard part of your benchmark suite moving forward, as it is probably the most commonly used rendering engine in the architectural-viz industry.
We added the V-Ray benchmarks to our workstation suite last fall, when we reviewed the P500X and P900DX.