To be honest, I didn’t tackle all of those benchmarks thinking that Intel’s Xeon E5620 was going to somehow magically outperform a six-core desktop chip. I also didn’t think 4 MB of additional L3 cache was going to open up a huge lead over the 45 nm Bloomfield design, with its 8 MB repository.
Rather, I was hoping to see higher frequencies at lower operating temperatures, perhaps with a little power savings sprinkled on top.

I took ambient temperature readings between each test run using an Extech TM200 thermometer. You can disregard the orange and red bars; the GPU remains fairly consistent at idle and under load, regardless of the processor behind it. More interesting are the blue and green bars.
It comes as little surprise that the stock Xeon E5620 runs cool, thanks to conservative clocks and a low operating voltage.
The overclocked Xeon runs significantly warmer due to a 1.6 GHz frequency increase and a higher fixed voltage.
Two additional cores mean that the Core i7-970 gets hotter still—about 10% warmer than the quad-core Xeon.
An older manufacturing process translates to significantly hotter idle and load temperatures for the overclocked Core i7-930. And when you consider that the ambient temperature hovered around 32 degrees Celsius in my lab, adding the 57-degree delta puts the loaded Bloomfield core up around 90 degrees. That’s uncomfortably warm for long-term use. In fact, I’d probably recommend backing off to 3.73 GHz or so and lowering the voltage a bit in order to get a little more useful life out of the chip.

At idle, the two overclocked 32 nm processors use the same amount of power. The stock Xeon is quite a bit more conservative with its consumption. And the Core i7-930 is only moderately higher than the other 4 GHz CPUs.
Load the CPUs down, though, and you get another story entirely. The stock Xeon E5620 is still fairly power-friendly. Overclocked and overvolted, consumption rises by nearly 100 W. Yet, the Xeon E5620 still uses 50 W less than the overclocked Core i7-970. And the Xeon uses roughly 60 W less than the other overclocked quad-core chip in this comparison, Intel’s Core i7-930.
Those results carry over to the CPU+GPU power measurements, too. The overclocked Xeon E5620 uses 60 W less than the 4 GHz Core i7-930 setup. So, you’re getting roughly the same performance, significant power savings, and less heat output for a $100 price premium.
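To put that trade-off in perspective, here’s a back-of-the-envelope sketch of how long the $100 premium would take to pay for itself in electricity alone. The 60 W figure comes from the measurements above; the $0.12/kWh rate is an assumption, so plug in your own.

```python
# Back-of-the-envelope payback estimate for the overclocked Xeon E5620
# versus a 4 GHz Core i7-930, using the ~60 W load-power difference
# measured above. The electricity rate is an assumed figure.
PRICE_PREMIUM_USD = 100.0   # Xeon E5620 over Core i7-930 (from the article)
POWER_SAVINGS_W = 60.0      # measured difference under load
RATE_USD_PER_KWH = 0.12     # assumption; adjust for your utility

savings_per_hour = (POWER_SAVINGS_W / 1000.0) * RATE_USD_PER_KWH
breakeven_hours = PRICE_PREMIUM_USD / savings_per_hour
print(f"Break-even after ~{breakeven_hours:,.0f} hours under load "
      f"(~{breakeven_hours / 24:,.0f} days at 24/7 load)")
```

At that assumed rate, the premium pays for itself after roughly 14,000 hours under load, so the power savings are best viewed as a bonus rather than the main justification.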
Josh, if you have any ideas on testing, I'm all ears! We're currently working with Intel on server/workstation coverage (AMD has thus far been fairly unreceptive to seeing its Opteron processors tested).
Regards,
Chris
Thank you for the review, but your benchmarks prove that you were GPU-bottlenecked almost all of the time.
Let me explain, using Metro 2033 or Just Cause 2 as examples: the Xeon running at 2.4 GHz provided the same FPS as when it ran at 4 GHz. That means your GPU is the bottleneck, since the increase in CPU speed, and therefore in the number of frames sent to the GPU for processing each second, does not produce any visible increase in output... the GPU already has too much to process.
I also want to point out that enabling AA and AF in CPU tests puts additional stress on the GPU, bottlenecking the system even more. It should be forbidden to do so... since your goal is to test the CPU, not the GPU.
Please try (and not only you; there is more than one article like this at Tom's) to reconsider the testing methodology: what a bottleneck means, how you can detect it (a simple version of that check is sketched below), and so on...
Since the GeForce GTX 480 was the bottleneck, most of the gaming results are useless except for seeing how many FPS a GTX 480 provides in those games, at those resolutions, and with AA/AF enabled. But that wasn't the point of the article.
Later edit: I missed the text under the graphs... it seems you are aware of the issue.
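To make that check concrete, here is a minimal sketch in Python of the flat-FPS test described above. The clock speeds and FPS figures in it are placeholders, not measured results.

```python
# Minimal sketch of the bottleneck check described above: if a large
# increase in CPU clock produces almost no increase in average FPS,
# the CPU is not the limiting factor (the GPU likely is).
def gpu_bound(results, tolerance=0.05):
    """results: (cpu_clock_ghz, avg_fps) pairs, sorted by clock."""
    clocks = [c for c, _ in results]
    fps = [f for _, f in results]
    clock_gain = clocks[-1] / clocks[0] - 1.0
    fps_gain = fps[-1] / fps[0] - 1.0
    return clock_gain > tolerance and fps_gain < tolerance

# Placeholder numbers in the spirit of the Metro 2033 example:
# 2.4 GHz and 4.0 GHz deliver essentially the same frame rate.
print(gpu_bound([(2.4, 38.1), (4.0, 38.5)]))  # True -> GPU-limited
```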
However, I'm sure everyone is aware of how sharply Xeon prices rise above the lowest-of-the-low. I expect that with a Xeon capable of 4.5 GHz (a good speed to aim for with a 32 nm chip and good cooling), you would already be over the cost of purchasing a 970/980X/990X, especially considering how good a motherboard you would need. The Rampage III Extreme is possibly one of the most expensive X58 boards on the market, offsetting most of the gains you'd get over a 45 nm chip and a more wallet-friendly board, such as the Gigabyte GA-X58A-UD3R.
By the way, concerning power efficiency, the top pick is the L5640. While it's not a cost-effective processor, a 60 W TDP for its six cores is quite impressive.
Unfortunately, Intel has used its regained near-monopoly position to take away that option from its consumer chips. Until it sees the light, I've been forced to use otherwise less powerful AMD CPUs on my main systems and to recommend likewise to my clients and acquaintances.
But they're testing whether or not it's necessary to use these kinds of CPUs in gaming PCs, and for that, you do need to enable gaming setups.
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266
I know the motherboard, though, is going to be costly, but the Rampage III Formula isn't cheap either. If ASUS would add some overclocking options to the KGPE-D16, it would put a smile on my face. The ASUS KGPE-D16 would be a nice SLI motherboard for this test because it has x16 PCIe slots. I think it would be easier to get ASUS to add overclocking to this board than to get Intel to make an enthusiast Xeon.
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643
The point of the E5620 is to get a 32 nm LGA1366 chip for less than the i7-970 sells for. The only other 32 nm Xeon that sells for less than $870 is the Xeon E5630, which is merely 133 MHz faster than the E5620 but costs a couple hundred bucks more. All of the rest of the 32 nm Xeons are very expensive, costing more than the i7-970 and even the i7-980X.
I have the setup you describe there, with two Opteron 6128s sitting on an ASUS KGPE-D16. Note that I run Linux, and some of the programs they ran won't run under WINE. Here's roughly how it would stack up against the units being tested:
- 3DMark Vantage: won't run on my system. I'm predicting it will come in under the stock Xeon E5620 since the stock E5620 is quite a bit behind the 4 GHz units, and the 6-core i7 970 is barely faster than the other quad-core units at 4 GHz.
- Sandra Arithmetic & Multimedia: should beat any one of those there due to having 16 real cores.
- Sandra Memory Bandwidth: eight channels of DDR3-1333 is more than twice as fast as their triple-channel systems. The only question is whether or not Sandra likes NUMA. If it does, then two Opteron 6128s would be much, much faster; if not, then the result would be much lower (a quick way to inspect the NUMA layout on a box like mine is sketched after this list). I'm downloading it right now and will update when I get to run it and can tell you for certain.
- Call of Duty: Modern Warfare 2: lower score than the E5620, since this is a clock-speed-limited benchmark that does not scale beyond four cores.
- Metro 2033: would probably be similar to the other units since this is not a CPU-limited benchmark.
- DiRT2: would be slightly lower than the stock E5620 since we see no scaling advantage with the i7 970 and a small decrease in framerates with the stock E5620 versus the other chips.
- Just Cause 2: not a CPU-bound game, just like Metro 2033.
- iTunes 10: this is a single-threaded benchmark and the Opteron 6128s would be considerably slower than the stock E5620.
- Handbrake: should beat any of the chips there since this is well-threaded. I can't directly compare to their test, since I don't have the same 1 GB VOB file to work on.
- DivX: should be the same story as Handbrake.
- XviD: not a very well-threaded program, and any of the chips there will beat two Opteron 6128s. XviD on Linux is poorly threaded, too.
- MainConcept: same as Handbrake and DivX, with the two 6128s likely being much faster than the Xeons and Core i7s.
- Photoshop: unknown. Photoshop loves Intel CPUs and is moderately-threaded, so I couldn't tell you if it would beat an i7 970.
- 3dsMax: the Linux version of this app scales very well, like the Windows version tested here appears to. The 6128s should beat the chips here handily.
- AVG: AVG isn't that well-threaded beyond four cores, so the 6128s would not do all that well in this application.
- WinRAR: same as AVG; it's not a very well-threaded program.
- 7-Zip: very well-threaded, and two 6128s would be faster than any of the chips here.
- Temperatures above ambient: impossible to compare directly, but my 6128s run about 35 C over ambient (52-57 C) at full load using Dynatron A6 heatsinks with the fans at roughly 2,000 rpm. The Dynatron A6 is far smaller than the coolers used on the LGA1366 chips.
- Power consumption: my system is obviously set up differently from theirs, but the CPU idle/load power consumption figures in my box are roughly in line with the 4 GHz chips and higher than the stock Xeon E5620. That is because I have two CPUs in the machine, whereas they have just one. A single Opteron 6128 has an idle power draw within a few watts of a single Xeon E5620 but consumes 20-30 W or so more power at full load.
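On the NUMA question above: here's a quick sketch, for a Linux box like mine, that reads the node layout straight from sysfs. It only shows the topology a benchmark has to cope with; it doesn't tell you whether Sandra actually takes advantage of it.

```python
# Enumerate NUMA nodes and the CPUs attached to each, using the
# standard Linux sysfs interface. On a dual Opteron 6128 system you
# should see multiple nodes, each with its own set of cores.
import glob
import os

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    with open(os.path.join(node, "cpulist")) as f:
        print(f"{os.path.basename(node)}: CPUs {f.read().strip()}")
```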
Thanks Chris.
At least now we know that there is a Xeon alternative to the i7-930. Power consumption and temperature, rather than stock performance, would be your main reasons for getting this chip.