
Intel Xeon E5-2600 v2: More Cores, Cache, And Better Efficiency

Results: Compression

I typically think of 7-Zip as our best-threaded file compression benchmark. However, the fact that two Xeon E5-2687Ws finish first suggests that something else is limiting performance. All else being equal, we'd expect the Ivy Bridge-based version to win: it runs at higher clock rates, has more cache, and offers additional memory bandwidth.

In any case, the dual-processor workstations are notably quicker than a single Core i7-4960X.

WinRAR is better known for favoring architectural tweaks that improve efficiency per clock cycle. Not surprisingly, the two Ivy Bridge-based CPUs finish in the lead, ahead of the two Sandy Bridge-EP-based processors.

Our WinZip chart includes three separate benchmarks, and the very latest from Intel makes them difficult to interpret.

Let’s start with the longest bar, corresponding to the EZ test. This represents maximum compression. Our Core i7 and dual Sandy Bridge-EP-based Xeons score similarly. Meanwhile, the -2687W v2 crushes this test. We actually saw the same thing in Intel's 12-Core Xeon With 30 MB Of L3: The New Mac Pro's CPU?, and the benchmark is consistent.

Then there’s the general CPU benchmark, which is well-threaded in WinZip 18.0, and appears to reward both dual-processor workstations compared to the Core i7.

Finally, we have the OpenCL-accelerated test, which does run faster on the Core i7, but slows down on the dual-socket systems compared to CPU-only processing. Even those slower results remain ahead of the Core i7's finish, though. Here's my stab at an explanation: WinZip only offloads files larger than 8 MB to the graphics card for compression. Because our workload is a blend of file sizes, the OpenCL-accelerated files slow down the 16-core setups, while the six-core -4960X does enjoy some speed-up from Nvidia's Quadro FX 1800. Ultimately, though, the well-threaded compression engine still pushes everything else through the Xeons faster.
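That offload behavior is easy to picture as a simple size-threshold dispatcher. The sketch below is purely illustrative (the function and variable names are ours, not WinZip's); the only detail taken from the article is the 8 MB cutoff above which files are sent to the GPU path:

```python
# Hypothetical sketch of a size-threshold offload dispatcher, assuming
# behavior like WinZip 18's: files of 8 MB or more go to the GPU (OpenCL)
# path, smaller files stay on the multi-threaded CPU path. Everything here
# except the 8 MB cutoff is an illustrative assumption.

OFFLOAD_THRESHOLD = 8 * 1024 * 1024  # 8 MB cutoff cited in the article


def partition_workload(file_sizes, threshold=OFFLOAD_THRESHOLD):
    """Split a mixed workload into a CPU batch and a GPU-offloaded batch."""
    cpu_batch = [s for s in file_sizes if s < threshold]
    gpu_batch = [s for s in file_sizes if s >= threshold]
    return cpu_batch, gpu_batch


if __name__ == "__main__":
    # A blend of small and large files, like our mixed test workload
    sizes = [512 * 1024, 2 * 1024**2, 10 * 1024**2, 64 * 1024**2]
    cpu, gpu = partition_workload(sizes)
    print(len(cpu), len(gpu))  # prints "2 2": two small files on CPU, two large on GPU
```

On a mixed workload like ours, this split explains the results: a many-core machine loses throughput whenever large files are serialized through a modest GPU, while a CPU with fewer cores can come out ahead by shedding exactly that work.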

  • GL1zdA1
Does this mean that the 12-core variant with 2 memory controllers will be a NUMA CPU, with cores having different latencies when accessing memory depending on which MC is near them?
    Reply
  • Draven35
    The Maya playblast test, as far as I can tell, is very single-threaded, just like the other 3d application preview tests I (we) use. This means it favors clock speed over memory bandwidth.

    The Maya render test seems to be missing O.o
    Reply
  • Cryio
Thank you, Tom's, for this Intel server CPU review. I sure hope you'll review AMD's upcoming 16-core Steamroller server CPU.
    Reply
  • Draven35
    Tell AMD that.
    Reply
  • cats_Paw
    Dat Price...
    Reply
  • voltagetoe
If you've got 3ds Max, why don't you use something more serious/advanced like Mental Ray? The default renderer's tech represents the distant past, like the year 1995.
    Reply
  • lockhrt999
"Our playblast animation in Maya 2014 confounds us." @canjelini: Apart from rendering, most of the tools in Maya are single-threaded (most of the functionality has stayed the same in this two-decade-old software). So benchmarking a Maya playblast is much like benchmarking an iTunes encode.
    Reply
  • daglesj
    I love Xeon machines. As they are not mainstream you can usually pick up crazy spec Xeon workstations for next to nothing just a few years after they were going for $3000. They make damn good workhorses.
    Reply
  • InvalidError
@GL1zdA1: the ring-bus already means every core has different latency accessing any given memory controller. Memory controller latency is not as much of a problem with massively threaded applications on a multi-threaded CPU, since there is still plenty of other work that can be done while a few threads are stalled on IO/data. Games and most mainstream applications have 1-2 performance-critical threads, and the remainder of their 30-150 other threads are mostly non-critical automatic threading from libraries, application frameworks, and various background or housekeeping stuff.
    Reply
  • mapesdhs
Small note: one can of course manually add the Quadro FX 1800 to the relevant file (raytracer_supported_cards.txt) in the appropriate Adobe folder and it will work just fine for CUDA, though of course it's not a card anyone who wants decent CUDA performance with Adobe apps should use (one or more GTX 580 3GB or 780 Ti is best).

    Also, hate to say it, but showing results for using the card with OpenCL without showing what happens to the relevant test times when the 1800 is used for CUDA is a bit odd...

    Ian.

    PS. I see the messed-up forum posting problems are back again (text all squashed up; I have to edit on the UK site to fix the layout). Really, it's been months now. Is anyone working on it?

    Reply