
Power And Heat

Overclocking Intel’s Xeon E5620: Quad-Core 32 nm At 4+ GHz

To be honest, I didn’t tackle all of those benchmarks thinking that Intel’s Xeon E5620 was going to somehow magically outperform a six-core desktop chip. I also didn’t think 4 MB of additional L3 cache was going to give it a huge lead over the 45 nm Bloomfield design, with its 8 MB repository.

Rather, I was hoping to see higher frequencies at lower operating temperatures, perhaps with a little power savings sprinkled on top.

I took ambient temperature readings between each run using an Extech TM200 thermometer. You can disregard the orange and red bars; the GPU remains fairly consistent at idle and load, regardless of the processor behind it. More interesting are the blue and green bars.

It comes as little surprise that the stock Xeon E5620 exhibits low thermal output, thanks to conservative clocks and a low operating voltage.

The overclocked Xeon runs significantly warmer due to a 1.6 GHz frequency increase and a higher fixed voltage.

Two additional cores mean that the Core i7-970 gets hotter still—about 10% warmer than the quad-core Xeon.

An older manufacturing process translates to significantly hotter idle and load temperatures for the overclocked Core i7-930. And when you consider that the ambient temperature hovered around 32 degrees Celsius in my lab, adding 57 degrees to that puts the loaded Bloomfield core up around 90 degrees. That’s uncomfortably warm for long-term use. In fact, I’d probably recommend dialing back to 3.73 GHz or so and reducing the voltage a bit in order to get a little more useful life out of the chip.
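
For clarity, here is that arithmetic as a tiny Python sketch. The charts report temperatures as deltas over ambient, so the absolute core temperature is just the sum; the 32 and 57 degree figures come from the paragraph above, and the helper function is merely a convenience wrapper.

```python
# Converting a delta-over-ambient reading into an absolute core temperature.

def absolute_temp_c(ambient_c: float, delta_over_ambient_c: float) -> float:
    """Estimated core temperature in degrees Celsius."""
    return ambient_c + delta_over_ambient_c

# Overclocked Core i7-930 under load: roughly 57 C over a ~32 C ambient.
print(absolute_temp_c(32.0, 57.0))  # ~89 C, i.e. close to 90 C
```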

At idle, the overclocked 32 nm processors use the same amount of power. The stock Xeon is quite a bit more conservative with its consumption. And the Core i7-930 draws only moderately more than the other 4 GHz CPUs.

Load the CPUs down, though, and you get another story entirely. The stock Xeon E5620 is still fairly power-friendly. Overclocked and overvolted, consumption rises by nearly 100 W. Yet, the Xeon E5620 still uses 50 W less than the overclocked Core i7-970. And the Xeon uses roughly 60 W less than the other overclocked quad-core chip in this comparison, Intel’s Core i7-930.

Those results translate over to CPU+GPU power measurements, too. The overclocked Xeon E5620 uses 60 W less than the 4 GHz Core i7-930 setup. So, you’re getting roughly the same performance, significant power savings, and less heat output for a $100 price premium.
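
If you want to put a rough number on that 60 W difference, here is a back-of-the-envelope Python sketch. Only the 60 W delta comes from the measurements above; the duty cycle and electricity price are illustrative assumptions, so treat the output as ballpark only.

```python
# Rough yearly cost of a 60 W difference at load (illustrative assumptions).

POWER_DELTA_W = 60          # overclocked Xeon E5620 vs. 4 GHz Core i7-930, at load
HOURS_LOADED_PER_DAY = 8    # assumption: heavily loaded a third of the day
PRICE_PER_KWH = 0.12        # assumption: electricity price in $/kWh

kwh_per_year = POWER_DELTA_W / 1000 * HOURS_LOADED_PER_DAY * 365
print(f"{kwh_per_year:.0f} kWh/year, roughly ${kwh_per_year * PRICE_PER_KWH:.0f} per year")
```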

  • intelx, October 21, 2010 7:04 AM (0)
    I wish it had a higher multiplier; it would have been a great processor to recommend rather than paying $1,000 for the i7-970.
  • JOSHSKORN, October 21, 2010 7:15 AM (+7)
    I wonder if it's possible, and also if it'd be useful, to do a test of various server configurations for game hosting. Say, for instance, we want to build a game server and don't know what parts are necessary for the number of players we want to support, without investing too much into specifications we don't necessarily need. Say I hosted a 64-player server of Battlefield or CoD, or whatever the maximum player count is. Would a Core i7 be necessary, or would a dual-core do the job with the same overall player experience? I'd also want to consider other variables: memory, GPU. I realize results would also vary depending on the server's location and speed, and on the player's location, speed, and system specs, too.
  • cangelini, October 21, 2010 7:28 AM (+2)
    JOSHSKORN wrote: I wonder if it's possible, and also if it'd be useful, to do a test of various server configurations for game hosting. [...]


    Josh, if you have any ideas on testing, I'm all ears! We're currently working with Intel on server/workstation coverage (AMD has thus far been fairly unreceptive to seeing its Opteron processors tested).

    Regards,
    Chris
  • Anonymous, October 21, 2010 8:53 AM (+3)
    You could set up a small network with very fast LAN speeds (10 Gbps, maybe?). You can test ping and responsiveness on the clients, and check CPU/memory usage on the server. By eliminating the connection as a bottleneck and testing many different games with dedicated servers, one can actually get a good idea of what is needed to eliminate bottlenecks produced by the hardware itself.
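
A minimal sketch of the server-side half of that test, assuming a Python environment with the third-party psutil package installed; the sampling interval, duration, and output format are arbitrary choices. The client-side ping/latency logging would run separately on each test machine.

```python
# Sample server CPU and memory utilization while a dedicated game server
# is under load from clients on a fast LAN (requires the psutil package).

import time
import psutil

def sample_server_load(duration_s: int = 300, interval_s: float = 1.0):
    """Collect (elapsed seconds, CPU %, memory %) samples for duration_s seconds."""
    samples = []
    start = time.time()
    while time.time() - start < duration_s:
        cpu = psutil.cpu_percent(interval=interval_s)  # blocks for interval_s
        mem = psutil.virtual_memory().percent
        samples.append((time.time() - start, cpu, mem))
    return samples

if __name__ == "__main__":
    for elapsed, cpu, mem in sample_server_load(duration_s=60):
        print(f"{elapsed:6.1f}s  CPU {cpu:5.1f}%  RAM {mem:5.1f}%")
```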
  • Moshu78, October 21, 2010 9:39 AM (+8)
    Dear Chris,

    thank you for the review, but your benchmarks prove that you were GPU-bottlenecked almost all the time.
    Let me explain: take Metro 2033 or Just Cause 2... the Xeon running at 2.4 GHz provided the same FPS as when it ran at 4 GHz. That means your GPU is the bottleneck, since the increase in CPU speed (and therefore in the number of frames sent to the GPU for processing each second) does not produce any visible output increase... the GPU already has too much to process.
    I also want to point out that enabling AA and AF in CPU tests puts additional stress on the GPU, bottlenecking the system even more. It should be forbidden to do so... since your goal is to test the CPU, not the GPU.

    Please try (and not only you, there is more than one article like this at Tom's) to reconsider the testing methodology, what a bottleneck means, how you can detect it, and so on...

    Since the 480 was the bottleneck, most of the gaming results are useless except for seeing how many FPS a GF480 provides at these games, resolutions, and AA/AF settings. But that wasn't the point of the article.

    LE: I missed the text under the graphs... seems you are aware of the issue. :) I'd still like to see the CPU tests performed with more GPU muscle, or at lower resolutions/with older games. This way you'll be able to get to the really interesting part: where/when does the CPU become the bottleneck?
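
The check Moshu78 describes boils down to a simple comparison, shown here as an illustrative Python sketch; the 5% threshold and the FPS numbers are made up for the example.

```python
# If raising the CPU clock barely moves the frame rate, the CPU is not the
# limiter -- the GPU (or something else) is. Threshold is an arbitrary choice.

def looks_gpu_bound(fps_low_clock: float, fps_high_clock: float,
                    threshold: float = 0.05) -> bool:
    """True if the FPS gain from the higher CPU clock is negligible."""
    gain = (fps_high_clock - fps_low_clock) / fps_low_clock
    return gain < threshold

# Made-up numbers in the spirit of the Metro 2033 observation above:
print(looks_gpu_bound(fps_low_clock=40.0, fps_high_clock=41.0))  # True
```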
  • Anonymous, October 21, 2010 10:45 AM (+5)
    Looks to me like a pointless exercise. I have been running an i7-860 at 4.05 GHz with low temps for more than a year now, so why pay for a motherboard that expensive, plus the chip?
  • Cryio, October 21, 2010 11:09 AM (0)
    I have a question. Maybe two. First: since when is Just Cause 2 a DX11 game? I thought it was only DX10/10.1. And even if it is [though I doubt it], what are the differences between the DX10 and DX11 versions?
  • blibba, October 21, 2010 11:11 AM (-7)
    Note: higher-clocked Xeons are available.
  • omoronovo, October 21, 2010 11:32 AM (+3)
    blibba wrote: Note: higher-clocked Xeons are available.


    However, I'm sure everyone is aware of how sharply the price of Xeons rises above the lowest-of-the-low. I expect that by the time you get a Xeon capable of 4.5 GHz (a good speed to aim for with a 32 nm chip and good cooling), you would already be over the cost of purchasing a 970/980X/990X, especially considering how good a motherboard you would need to get. A Rampage III Extreme is possibly one of the most expensive X58 boards on the market, offsetting most of the gains you'd get over a 45 nm chip and a more wallet-friendly board, such as the Gigabyte GA-X58A-UD3R.
  • compton, October 21, 2010 12:03 PM (0)
    This is one of the best articles in some time. I went AMD with the advent of the Phenom IIs despite never owning or using them previously, and I didn't once long to go back to Intel for my processor needs. But I think that may have changed with the excellent 32 nm products. The 980X might be the cat's pajamas, but $1000 is too much unless you KNOW you need it (like 3x SLI 480s, or actual serious multithreaded workloads where TIME = $$$). The lowly i3 has seriously impressed the hell out of me for value, heat, and price/performance. Now, this Xeon rears its head. While still pricey in absolute terms, it is still a great value play. Intel has earned my business back with their SSDs -- now might be the time to get back in on their processors, even if Intel's content to keep this chip in the Xeon line. Thanks for the illumination.
  • K2N hater, October 21, 2010 12:27 PM (0)
    Interesting article. I've come to the conclusion that we could build a 2P Xeon box to overclock for a price similar to a single 980X, while being clearly cheaper to upgrade and having 8 physical cores plus 8 HT cores.

    By the way, concerning power efficiency, the top pick is the L5640. While not a cost-effective processor, a 60 W TDP for its 6 cores is quite impressive.
  • Anonymous, October 21, 2010 1:01 PM (+2)
    There is one additional benefit to using the Xeon: you can actually use ECC memory on your consumer-class Intel system. With the huge amounts of memory (and thus much greater chances of a memory error) we use on our systems these days, I find it hard to believe anyone would trust their main systems to non-ECC memory.

    Unfortunately, Intel has used its regained near-monopoly position to take away that option from its consumer chips. Until they see the light, I've been forced to use otherwise less powerful AMD CPUs on my main systems, and I recommend likewise to my clients and acquaintances.
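
The underlying point, that the expected number of memory errors scales with installed capacity, can be shown with a trivial sketch; the per-gigabyte rate below is a made-up placeholder, not a measured figure.

```python
# Expected soft errors scale roughly linearly with installed memory.
# The rate constant here is purely illustrative.

ERRORS_PER_GB_PER_YEAR = 0.05   # assumption for illustration only

def expected_errors_per_year(installed_gb: float) -> float:
    return installed_gb * ERRORS_PER_GB_PER_YEAR

for gb in (4, 12, 24):
    print(f"{gb:>2} GB -> ~{expected_errors_per_year(gb):.1f} expected errors/year")
```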
  • nevertell, October 21, 2010 1:08 PM (0)
    Moshu78 wrote: Dear Chris, thank you for the review, but your benchmarks prove that you were GPU-bottlenecked almost all the time. [...]

    But they're testing whether or not it is necessary to use these kinds of CPUs in gaming PCs, and for that, you do need to use realistic gaming setups.
  • elbert, October 21, 2010 1:08 PM (+1)
    Pretty good article. I wonder how an AMD Opteron 6128 Magny-Cours would stack up? Could you try OCing the 6128? It's only $275 on Newegg.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266

    I know the motherboard, though, is going to be costly, but the Rampage III Formula isn't cheap either. If ASUS would add some OCing options to the ASUS KGPE-D16, that would put a smile on my face. The ASUS KGPE-D16 would be a nice SLI motherboard for this test because it has x16 PCIe slots. I think it would be easier to get ASUS to fix OCing on this mobo than to get Intel to make an enthusiast Xeon.
    http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643
  • MU_Engineer, October 21, 2010 1:55 PM (+2)
    blibba wrote: Note: higher-clocked Xeons are available.


    The point of the E5620 is to get a 32 nm LGA1366 chip for less than the i7 970 sells for. The only other 32 nm Xeon that sells for less than $870 is the Xeon E5630, which is merely 133 MHz faster than the E5620 but costs a couple hundred bucks more. All of the rest of the 32 nm Xeons are very expensive, costing more than the i7 970 and i7 980X.

    elbert wrote: Pretty good article. I wonder how an AMD Opteron 6128 Magny-Cours would stack up? Could you try OCing the 6128? [...]


    I have the setup you describe there, with two Opteron 6128s sitting on an ASUS KGPE-D16. Note that I run Linux, and some of the programs they ran won't run in WINE. Here's roughly how it would stack up against the units being tested:

    - 3DMark Vantage: won't run on my system. I'm predicting it will come in under the stock Xeon E5620 since the stock E5620 is quite a bit behind the 4 GHz units, and the 6-core i7 970 is barely faster than the other quad-core units at 4 GHz.

    - Sandra Arithmetic & Multimedia: should beat any one of those there due to having 16 real cores.

    - Sandra Memory Bandwidth: eight channels of DDR3-1333 is more than twice as fast as their systems on paper (rough math after this list). The only question is whether or not Sandra likes NUMA. If it does, then two Opteron 6128s would be much, much faster. If not, then it would be much lower. I'm downloading it right now and will update when I get to run it and tell you for certain.

    - COD2: lower score than the E5620 since this is a clock-speed-limited benchmark that does not scale beyond four cores.

    - Metro 2033: would probably be similar to the other units since this is not a CPU-limited benchmark.

    - DiRT2: would be slightly lower than the stock E5620 since we see no scaling advantage with the i7 970 and a small decrease in framerates with the stock E5620 versus the other chips.

    - Just Cause 2: not a CPU-bound game, just like Metro 2033.

    - iTunes 10: this is a single-threaded benchmark and the Opteron 6128s would be considerably slower than the stock E5620.

    - Handbrake: should beat any of the chips there since this is well-threaded. I can't directly compare to their test since I don't have their same 1 GB VOB file to work on.

    - DivX: should be the same story as Handbrake.

    - XviD: not a very well-threaded program, and any of the chips there will beat two Opteron 6128s. XviD on Linux is poorly-threaded too.

    - MainConcept: same as Handbrake and DivX, with the two 6128s likely being much faster than the Xeons and Core i7s.

    - Photoshop: unknown. Photoshop loves Intel CPUs and is moderately-threaded, so I couldn't tell you if it would beat an i7 970.

    - 3dsMax: the Linux version of this app scales very well, like the Windows version tested here appears to. The 6128s should beat the chips here handily.

    - AVG: AVG isn't that well-threaded beyond four cores, so the 6128s would not be all that stellar in this application.

    - WinRAR: same as AVG, it's not a very well threaded program.

    - 7-zip: very well-threaded; two 6128s would be faster than any of the chips here.

    - Temperatures above ambient: impossible to compare directly, but my 6128s run about 35 C over ambient (52-57 C) at full load using Dynatron A6 heatsinks with the fans at roughly 2,000 rpm. The Dynatron A6s are far smaller than the units used to cool the LGA1366 chips.

    - Power consumption: my system is obviously set up differently from theirs, but the CPU idle/load power consumption figures in my box are roughly in line with the 4 GHz chips and higher than the stock Xeon E5620. That is because I have two CPUs in the machine instead of just one like they do. A single Opteron 6128 has an idle power draw within a few watts of a single Xeon E5620 but consumes 20-30 W or so more power at full load.
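
As a rough cross-check of the memory-bandwidth item above, here is the theoretical peak math, assuming DDR3-1333 on both platforms (eight channels across two Opteron 6128s versus triple-channel LGA1366); actual Sandra scores will come in well below these peaks.

```python
# Theoretical peak memory bandwidth: transfers/s * 8 bytes per channel * channels.

def peak_bw_gb_s(mt_per_s: int, channels: int, bytes_per_transfer: int = 8) -> float:
    return mt_per_s * bytes_per_transfer * channels / 1000

dual_opteron_6128 = peak_bw_gb_s(1333, channels=8)  # 2 sockets x 4 channels each
single_lga1366 = peak_bw_gb_s(1333, channels=3)     # triple-channel i7/Xeon
print(f"{dual_opteron_6128:.1f} vs {single_lga1366:.1f} GB/s")  # ~85.3 vs ~32.0
```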
  • Reynod, October 21, 2010 2:25 PM (+1)
    Enjoyed reading this.

    Thanks Chris.

  • Onus, October 21, 2010 3:23 PM (+1)
    After reading http://www.tomshardware.com/reviews/game-performance-bottleneck,2738.html, one of my conclusions was that massive overclocking was not a requirement for decent gaming. The comment was up-voted nine times, and nothing in this article changes that opinion: the GPU(s) become the issue long before the CPU does. I might feel very differently about a render farm, but there, where time is money and the cost is justified, I think most businesses would just buy the faster chips outright rather than risk stability and longevity by overclocking.
  • amnotanoobie, October 21, 2010 4:13 PM (0)
    jtt283 wrote: After reading http://www.tomshardware.com/review [...] ,2738.html, one of my conclusions was that massive overclocking was not a requirement for decent gaming.


    At least now we know that there is a Xeon alternative to the i7-930. Power consumption and temperature, rather than stock performance, would be your main reasons for getting this chip.
  • theoutbound, October 21, 2010 4:25 PM (+2)
    I'm still not sure if the Xeon would be any better value than an i7, given that you will also have to pay a premium for a board that supports the Xeon. Great article, Tom's.
  • quovatis, October 21, 2010 4:52 PM (0)
    The old Socket 939 Opteron CPUs were the best AMD chips ever made. Maybe Intel can do something similar.