Intel Xeon E5-2600 v2: More Cores, Cache, And Better Efficiency

Ivy Bridge-EP: Faster And More Efficient On The Same Platform

It’s uncommon for professionals to pull one-generation-old CPUs out of their workstations and upgrade, but that’s technically what Intel’s Xeon E5-2600 v2 line-up lets you do. The company successfully shifted from 32 to 22 nm manufacturing, simultaneously enabling more complex processors (with up to 12 physical cores and 30 MB of shared L3 cache) that fit within previously-established thermal envelopes and drop into existing LGA 2011-equipped motherboards, after a firmware update, of course.

Beyond the increases to core count, cache, and clock rate, the Xeon E5-2600 v2s are also built on the Ivy Bridge architecture. So, there are a handful of tweaks that improve per-cycle performance compared to Sandy Bridge as well. Finally, certain SKUs feature more aggressive data rates, pushing memory support up to DDR3-1866.

None of the workloads we ran need that much bandwidth. However, our benchmarks have no trouble illustrating where the Xeon E5-2687W v2 is better than its predecessor. Higher Turbo Boost frequencies mean the second-gen model wins in single-threaded tests. Even the clock rates in fully-loaded situations are an improvement, so you get more performance there, too. And regardless of the benchmark, power consumption is lower on the system with Ivy Bridge-EP-based CPUs, despite the consistent 150 W TDP.

Sure, you could save a ton of money and use even less energy by going with Intel’s Core i7-4960X. And in some cases, that actually makes sense. An increasing number of applications are being optimized for heterogeneous computing, which might exploit a highly parallelized graphics processor for massive performance gains in specific tasks. In those applications, throwing more money at a faster GPU will yield bigger gains than a second CPU. Then again, we just saw several examples of two Xeon E5s cutting the processing time of compile jobs, OCR workloads, and renders in half (or better).

I haven’t been very nice to Intel’s desktop team over the last couple of generations. The step from Sandy Bridge to Ivy Bridge was disappointing for enthusiasts. Similarly, Haswell didn’t give us much more to be excited about. Same four cores, same 8 MB of shared L3, same 16 lanes of PCIe, and minor speed-ups attributable to architectural tweaks. Ho hum.

But in the Xeon world, Intel takes the thermal headroom freed up by its advanced manufacturing and more thoroughly utilizes it, leaving customers to choose whether they need more cores, higher clocks, or simply comparable performance at reduced power consumption. That’s the kind of innovation enthusiasts want to see more of.

Chris Angelini
Chris Angelini is an Editor Emeritus at Tom's Hardware US. He edits hardware reviews and covers high-profile CPU and GPU launches.
  • GL1zdA1
    Does this mean that the 12-core variant with two memory controllers will be a NUMA CPU, with cores having different latencies when accessing memory depending on which MC is near them?
  • Draven35
    The Maya playblast test, as far as I can tell, is very single-threaded, just like the other 3d application preview tests I (we) use. This means it favors clock speed over memory bandwidth.

    The Maya render test seems to be missing O.o
  • Cryio
    Thank you, Tom's, for this Intel server CPU review. I sure hope you'll review AMD's upcoming 16-core Steamroller server CPU.
  • Draven35
    Tell AMD that.
  • cats_Paw
    Dat Price...
  • voltagetoe
    If you've got 3ds Max, why don't you use something more serious/advanced like Mental Ray? The default renderer's tech represents the distant past, like the year 1995.
  • lockhrt999
    "Our playblast animation in Maya 2014 confounds us." @canjelini: Apart from rendering, most of the tools in Maya are single-threaded (most of the functionality has stayed the same in this two-decades-old software). So benchmarking a Maya playblast is much like benchmarking an iTunes encode.
  • daglesj
    I love Xeon machines. As they are not mainstream you can usually pick up crazy spec Xeon workstations for next to nothing just a few years after they were going for $3000. They make damn good workhorses.
  • InvalidError
    @GL1zdA1: the ring bus already means every core has different latency accessing any given memory controller. Memory controller latency is not as much of a problem with massively threaded applications on a multi-threaded CPU, since there is still plenty of other work that can be done while a few threads are stalled on IO/data. Games and most mainstream applications have 1-2 performance-critical threads, and the remainder of their 30-150 other threads are mostly non-critical automatic threading from libraries, application frameworks, and various background or housekeeping stuff.
  • mapesdhs
    Small note: one can of course manually add the Quadro FX 1800 to the relevant file (raytracer_supported_cards.txt) in the appropriate Adobe folder and it will work just fine for CUDA, though of course it's not a card anyone who wants decent CUDA performance with Adobe apps should use (one or more GTX 580 3GB or 780 Ti is best).

    Also, hate to say it, but showing results for using the card with OpenCL but not showing what happens to the relevant test times when the 1800 is used for CUDA is a bit odd...

    PS. I see the messed-up forum posting problems are back again (text all squashed up, have to edit on the UK site to fix the layout). Really, it's been months now, is anyone working on it?