Xeon E5620 ($384)
| Intel Xeon E5620 | |
|---|---|
| Cores/Threads | 4/8 |
| Stock Clock Rate | 2.4 GHz |
| Max. Turbo Boost Clock Rate | 2.66 GHz |
| Shared L3 Cache | 12 MB |
| Clock Multiplier/Max. Multiplier | 18/19 |
| QPI Speed | 5.86 GT/s |
| Lithography | 32 nm Westmere-EP |
| Max. TDP | 80 W |
| VID Voltage Range | 0.75-1.35 V |
| Memory Support | DDR3-800/1066 |
| Price | $384 |
A 211 MHz BCLK and a 19x multiplier work out to just over 4 GHz, a frequency that Intel’s 2.4 GHz Xeon E5620 has absolutely no trouble sustaining. The resulting DDR3-1691 data rate falls well under our memory kit’s ceiling. So, I’m decidedly not worried about this configuration’s long-term prospects.
I know for a fact this chip could handle up to 4.2 GHz without breaking a sweat; the decision to stop at 4 GHz was based solely on the limitations of the other CPUs being tested.

Hitting those clocks required only minor adjustments. The CPU voltage was set to 1.35 V, the QPI/DRAM core voltage was matched at 1.35 V, and the IOH voltage was nudged up slightly to 1.258 V.
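To make the arithmetic behind these settings explicit, here is a quick sketch using the BCLK and multiplier values quoted in this article. The x8 memory ratio is my assumption, inferred from the DDR3-1691 figure; the 211 MHz BCLK is rounded, which is why the computed memory rate lands just under the quoted one.

```python
# Back-of-the-envelope check of the overclocks described in the article.
# On LGA1366, core clock = BCLK x CPU multiplier, and the memory data
# rate is BCLK x the selected memory ratio (assumed x8 here).

configs = [
    # (chip, BCLK in MHz, CPU multiplier) -- values from the article
    ("Xeon E5620", 211, 19),
    ("Core i7-970", 160, 25),
    ("Core i7-930", 182, 22),
]

for chip, bclk, mult in configs:
    core_mhz = bclk * mult
    print(f"{chip}: {bclk} MHz x {mult} = {core_mhz} MHz (~{core_mhz / 1000:.2f} GHz)")

# Memory data rate for the Xeon configuration at an assumed x8 ratio;
# the rounded 211 MHz BCLK puts this just under the quoted DDR3-1691.
print(f"DDR3 data rate: {211 * 8} MT/s")
```

All three configurations land within 10 MHz of the 4 GHz target, which is why 4 GHz makes a fair common clock for the comparison.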
Core i7-970 ($879)
| Intel Core i7-970 | |
|---|---|
| Cores/Threads | 6/12 |
| Stock Clock Rate | 3.2 GHz |
| Max. Turbo Boost Clock Rate | 3.46 GHz |
| Shared L3 Cache | 12 MB |
| Clock Multiplier/Max. Multiplier | 24/25 |
| QPI Speed | 4.8 GT/s |
| Lithography | 32 nm Gulftown |
| Max. TDP | 130 W |
| VID Voltage Range | 0.80-1.375 V |
| Memory Support | DDR3-800/1066 |
| Price | $879 |
It’s a little unfair to include a processor that costs more than twice as much, but I wanted to show you what the cheapest desktop Core i7 based on the Gulftown design could do in comparison to the Xeon. This chip boasts six cores, but sports the same 12 MB of shared L3 cache.
Not surprisingly, the 3.2 GHz i7-970 overclocks well. Its 24x multiplier means you don’t have to push very hard for a 4 GHz overclock: simply set the BCLK to 160 MHz and raise the multiplier to 25x (the highest ratio this chip supports). We again use a 1.35 V CPU voltage setting, but leave the other voltages at their defaults, since we’re not pushing the other clock rates very hard.
Greater than 4 GHz overclocks are also possible with this chip. But it’s hard to talk value when there’s an almost $900 CPU factored into the equation.
Core i7-930 ($284)
| Intel Core i7-930 | |
|---|---|
| Cores/Threads | 4/8 |
| Stock Clock Rate | 2.8 GHz |
| Max. Turbo Boost Clock Rate | 3.06 GHz |
| Shared L3 Cache | 8 MB |
| Clock Multiplier/Max. Multiplier | 21/22 |
| QPI Speed | 4.8 GT/s |
| Lithography | 45 nm Bloomfield |
| Max. TDP | 130 W |
| VID Voltage Range | 0.80-1.375 V |
| Memory Support | DDR3-800/1066 |
| Price | $284 |
Yes, yes, I know. The Core i7-950 costs just $10 more and gives you a higher multiplier setting. That’s not a very big deal here, though. We know Asus’ Rampage III Formula is good well beyond 200 MHz, and the Core i7-930’s highest 22x ratio means we only need a 182 MHz BCLK setting to reach 4 GHz.
Nevertheless, our Core i7-930 sample wasn’t as willing an overclocker as I had hoped, and neither was a retail version of the chip I bought from Newegg a couple of months ago. Hitting 4 GHz on this CPU meant riding the edge of stability: even with a reduced 1.325 V CPU voltage setting, temperatures crested an uncomfortable 95 degrees Celsius in Prime95. I wouldn’t be comfortable using this chip at 4 GHz for an extended period of time.
It’s also worth noting that this CPU inspired our choice of heatsink/fan. While the two 32 nm processors ran cool enough to work with our Thermalright Ultra 120 eXtreme, the i7-930 needed more cooling in order to run stably at 4 GHz. So, I switched over to Noctua’s NH-D14.
- Meet The Xeon E5620
- Overclocking Intel’s Xeon
- The Contenders
- Test Setup And Benchmarks
- Benchmark Results: Synthetics
- Benchmark Results: Call Of Duty: Modern Warfare 2 (DX9)
- Benchmark Results: Metro 2033 (DX11)
- Benchmark Results: DiRT 2 (DX11)
- Benchmark Results: Just Cause 2 (DX11)
- Benchmark Results: Audio And Video Encoding
- Benchmark Results: Productivity
- Power And Heat
- Efficiency And Value Analysis
- Conclusion
Josh, if you have any ideas on testing, I'm all ears! We're currently working with Intel on server/workstation coverage (AMD has thus far been fairly unreceptive to seeing its Opteron processors tested).
Regards,
Chris
Thank you for the review, but your benchmarks show that you were GPU-bottlenecked almost all the time.
Let me explain, using Metro 2033 or Just Cause 2 as examples: the Xeon running at 2.4 GHz delivered the same FPS as it did at 4 GHz. That means your GPU is the bottleneck, since the increase in CPU speed (and therefore in the number of frames sent to the GPU for processing each second) produces no visible increase in output; the GPU already has too much to process.
I also want to point out that enabling AA and AF in CPU tests puts additional stress on the GPU, bottlenecking the system even more. That should be avoided, since your goal is to test the CPU, not the GPU.
Please reconsider the testing methodology (and not only you; there is more than one such article at Tom's): what a bottleneck means, how you can detect it, and so on.
Since the GTX 480 bottlenecked the system, most of the gaming results are useless except for seeing how many FPS a GeForce GTX 480 provides in these games, resolutions, and AA/AF settings. But that wasn't the point of the article.
LE: I missed the text under the graphs; it seems you are aware of the issue.
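The scaling check this comment describes can be written down in a few lines. This is a minimal sketch; the FPS numbers and the 5% threshold are hypothetical, chosen purely for illustration.

```python
# If average FPS barely changes as the CPU clock rises, the CPU is not
# the limiting factor at those settings -- the GPU (or something else
# downstream) is.

def cpu_scaling(fps_low_clock: float, fps_high_clock: float,
                threshold: float = 0.05) -> bool:
    """Return True if FPS improves meaningfully with a higher CPU clock."""
    return (fps_high_clock - fps_low_clock) / fps_low_clock > threshold

# Hypothetical numbers: nearly identical FPS at 2.4 GHz and 4.0 GHz
print(cpu_scaling(61.0, 61.5))   # False -> GPU-bound at these settings
print(cpu_scaling(45.0, 62.0))   # True  -> the CPU clock matters here
```

Running a game at both stock and overclocked CPU settings and comparing the deltas this way is exactly how the flat Metro 2033 and Just Cause 2 results reveal a GPU bottleneck.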
However, I'm sure everyone is aware of how sharply Xeon prices rise above the lowest-of-the-low. I expect that with a Xeon capable of 4.5 GHz (a good speed to aim for with a 32 nm chip and good cooling), you would already be over the cost of a 970/980X/990X, especially considering how good a motherboard you would need: the Rampage III Extreme is possibly one of the most expensive X58 boards on the market, offsetting most of the gains you'd get over a 45 nm chip and a more wallet-friendly board, such as the Gigabyte GA-X58A-UD3R.
By the way, concerning power efficiency, the top pick is the L5640. While it isn't a cost-effective processor, a 60 W TDP for six cores is quite impressive.
Unfortunately, Intel has used its regained near-monopoly position to take that option away from its consumer chips. Until they see the light, I've been forced to use otherwise less powerful AMD CPUs in my main systems and to recommend likewise to my clients and acquaintances.
But they're testing whether or not it is necessary to use these kinds of CPUs in gaming PCs, and for that you do need to enable realistic gaming settings.
http://www.newegg.com/Product/Product.aspx?Item=N82E16819105266
I know the motherboard is going to be costly, but the Rampage III Formula isn't cheap either. If ASUS would add some overclocking options to the KGPE-D16, it would put a smile on my face. The KGPE-D16 would be a nice SLI motherboard for this test because it has x16 PCIe slots. I think it would be easier to get ASUS to fix overclocking on this board than to get Intel to make an enthusiast Xeon.
http://www.newegg.com/Product/Product.aspx?Item=N82E16813131643
The point of the E5620 is to get a 32 nm LGA 1366 chip for less than the i7-970 sells for. The only other 32 nm Xeon that sells for less than $870 is the Xeon E5630, which is merely 133 MHz faster than the E5620 but costs a couple hundred bucks more. All of the remaining 32 nm Xeons are very expensive, costing more than the i7-970 and i7-980X.
I have the setup you describe there: two Opteron 6128s sitting on an ASUS KGPE-D16. Note that I run Linux, and some of the programs they tested won't run under WINE. Here's roughly how it would stack up against the units being tested:
- 3DMark Vantage: won't run on my system. I'm predicting it will come in under the stock Xeon E5620 since the stock E5620 is quite a bit behind the 4 GHz units, and the 6-core i7 970 is barely faster than the other quad-core units at 4 GHz.
- Sandra Arithmetic & Multimedia: should beat any one of those there due to having 16 real cores.
- Sandra Memory Bandwidth: eight channels of DDR3-1333 is more than twice as fast as their systems. The only question is whether Sandra handles NUMA well. If it does, two Opteron 6128s would be much, much faster; if not, the result would be much lower. I'm downloading it right now and will update when I get to run it and can tell you for certain.
- Call of Duty: Modern Warfare 2: lower score than the E5620, since this is a clock-speed-limited benchmark that does not scale beyond four cores.
- Metro 2033: would probably be similar to the other units since this is not a CPU-limited benchmark.
- DiRT2: would be slightly lower than the stock E5620 since we see no scaling advantage with the i7 970 and a small decrease in framerates with the stock E5620 versus the other chips.
- Just Cause 2: not a CPU-bound game, just like Metro 2033.
- iTunes 10: this is a single-threaded benchmark and the Opteron 6128s would be considerably slower than the stock E5620.
- Handbrake: should beat any of the chips there since this is well-threaded. I can't directly compare to their test since I don't have their same 1 GB VOB file to work on.
- DivX: should be the same story as Handbrake.
- XviD: not a very well-threaded program, and any of the chips there will beat two Opteron 6128s. XviD on Linux is poorly-threaded too.
- MainConcept: same as Handbrake and DivX, with the two 6128s likely being much faster than the Xeons and Core i7s.
- Photoshop: unknown. Photoshop loves Intel CPUs and is moderately-threaded, so I couldn't tell you if it would beat an i7 970.
- 3dsMax: the Linux version of this app scales very well, like the Windows version tested here appears to. The 6128s should beat the chips here handily.
- AVG: AVG isn't that well threaded beyond four cores, so the 6128s would not do all that well in this application.
- WinRAR: same as AVG, it's not a very well threaded program.
- 7-zip: is very threaded and two 6128s would be faster than any of the chips here.
- Temperatures above ambient: impossible to compare directly, but my 6128s run about 35 C over ambient (52-57 C) at full load using Dynatron A6 heatsinks with the fans at roughly 2000 rpm. The Dynatron A6s are far smaller than the units used to cool the LGA1366 chips.
- Power consumption: my system is obviously set up differently from theirs, but the CPU idle/load power consumption figures in my box are roughly in line with the 4 GHz chips and higher than the stock Xeon E5620. That is because I have two CPUs in the machine instead of just one like they do. A single Opteron 6128 has an idle power draw within a few watts of a single Xeon E5620 but consumes 20-30 W or so more power at full load.
Thanks Chris.
At least now we know that there is a Xeon alternative to the i7-930. Power consumption and temperature, rather than stock performance, would be your main reasons for picking this chip.