Once upon a time, Intel’s server-oriented CPUs were dramatically different from its desktop offerings.
Remember the Pentium Pro, with its P6 architecture that introduced speculative and out-of-order execution to Intel’s processor lineup? That part existed at a time when the company’s desktop chips still employed the Pentium’s P5 design. Then there was Pentium II Xeon, with its full-speed on-module L2 cache that was so large (physically) it required a special Slot 2 interface.
After that, though, the desktop and 1P server- and workstation-oriented chips started converging more and more. Fortunately, Intel played it smart and, for the most part, stopped charging the massive price premiums that Xeon-branded processors once commanded. Today, there's a $10-$20 premium on the single-socket Xeons.
Asus' C206-based P8B WS has a more business-oriented vibe
When Desktop Is Out Of Its Element
As a result, it never really surprises me when someone who should know better suggests saving a few bucks by building an entry-level server for a small business using a desktop processor. “It’s just branding, after all.” Shoot, I’ve even shown up to consulting assignments and found Celeron-based servers built by tier-one vendors. Call me old-school, but cutting corners just isn’t in the customer’s best interest.
To me, it doesn’t matter if you’ve had better luck with AMD or Intel. Emphatically, I’d insist that businesses shouldn’t use desktop platforms to drive their mission-critical machines. If it only means getting ECC memory support and a better-qualified motherboard in an Opteron- or Xeon-powered setup, spending the extra money is worth it.
That’s an easy idea to hammer home when you’re talking about dual-socket setups like the one I addressed in Intel Xeon 5600-Series: Can Your PC Use 24 Processors? After all, no amount of trying will get two LGA 1366-based Core i7s working together in a motherboard powered by Intel’s 5520 I/O Hub. Really, your only way to go there is Xeon. But Intel sells single-socket versions of its Xeon chips too.
Particularly when the 1P desktop and server chips center on the same architecture, it’s tempting for system builders to go the less-expensive desktop route, assuming performance will be the same anyway. Given the same-sized caches, number of cores, and clock rate, performance probably will turn out similar (if the Xeon-based machine doesn’t end up a little slower due to its ECC memory). So, Intel continues to face an uphill battle in convincing its customers that, even in single-socket configurations, Xeon is the way to go.
Segmenting The 1P Market
When you talk about single-CPU systems intended for businesses, there are two principal areas of interest: entry-level servers and workstations.
Intel's C200 chipsets look a lot like P67 and H67
The server-oriented folks are either looking at a light-duty workhorse in an SMB environment or adding nodes in a dense rack. That means power consumption and thermals are important variables. Moreover, management becomes a must-have. Even the orientation of PCI Express slots matters—servers don’t need 16-lane links. And when you’re talking about Lynnfield, Clarkdale, or Sandy Bridge, each with only 16 lanes of processor-based connectivity, enabling ample expansion means dividing them up smartly.
On the workstation side, you expect to run in a pedestal chassis with plenty of airflow. Heat and power consumption are generally non-issues, so the goal is to get as much performance out of the platform as possible. Managing a workstation remotely is less critical. And because discrete graphics cards are far more prevalent in workstations, the availability of at least one x16 slot is preferable.
Now, before today, Intel’s single-socket CPU portfolio consisted of the Xeon W3500 and W3600-series workstation-oriented processors, and the Xeon 3400-series chips. Foreign though they might sound, all three lineups center on architectures familiar from the desktop world. Xeon W3500 is based on the 130 W Bloomfield core for LGA 1366. Xeon W3600 employs the six-core Gulftown design, which of course means it maintains the same thermal ceiling and drops into the same interface. Most of the Xeon 3400-series offerings are 45 nm Lynnfield dies, though there are a couple of 32 nm Clarkdale-based models in there, too.
The Xeon E3-1200-series alters that landscape significantly, displacing Intel’s 3400- and W3500-series models with a number of Sandy Bridge-based options. The Xeon W3600s remain, delivering threaded performance that the E3s simply cannot match using four physical cores. Let’s break the stack down in more depth.
Eleven new processors populate the Xeon E3-1200 series. Five are server-specific, four are workstation-specific, and two are low-voltage models that won’t be sold at retail (they’re tray-only). Architecturally, all 11 chips are very similar. They center on the same Sandy Bridge design introduced on the desktop earlier this year.
That means they’re manufactured on a 32 nm process, employ up to four execution cores, and include as much as 8 MB of last-level cache. Hyper-Threading and Turbo Boost are enabled or disabled on a per-model basis, serving as differentiators. The same dual-channel memory controller is there, accommodating up to 32 GB of DDR3-1333. And there’s also an integrated PCI Express controller, plus the logic corresponding to Intel’s HD Graphics 3000 engine.
There are some notable differences between the desktop Core family and these new Xeons, though. To begin, the memory controller supports ECC-capable modules. That’s not even worth a footnote on a desktop platform, but it’s an important addition to servers and workstations tasked with money-making jobs. “Yeah, yeah, yeah,” you say. “I use desktop hardware at work all of the time and it’s just fine.” And so do I. But I also have more than a handful of painful memories when a story I was writing disappeared after a random blue-screen. Those are the situations ECC memory is intended to help prevent.
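ECC's single-bit correction rests on Hamming-style parity codes. Below is a toy Python sketch of that math on four data bits; actual ECC DIMMs apply a SECDED variant to 64-bit words with eight check bits, entirely in hardware, but the principle is the same.

```python
# Toy illustration of the idea behind ECC memory: a Hamming(7,4) code
# protects four data bits with three parity bits, enough to locate and
# correct any single flipped bit. Real ECC modules do this at 64 + 8 bits
# per word, in silicon; this is just the math, not Intel's implementation.

def hamming74_encode(d):
    """d: list of 4 data bits -> 7-bit codeword (positions 1..7)."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4          # parity over positions 1, 3, 5, 7
    p2 = d1 ^ d3 ^ d4          # parity over positions 2, 3, 6, 7
    p3 = d2 ^ d3 ^ d4          # parity over positions 4, 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]

def hamming74_decode(c):
    """c: 7-bit codeword, possibly with one flipped bit -> corrected data."""
    c = list(c)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]
    syndrome = s1 + 2 * s2 + 4 * s3    # 0 = clean; else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1           # correct the single flipped bit
    return [c[2], c[4], c[5], c[6]]

word = [1, 0, 1, 1]
code = hamming74_encode(word)
code[5] ^= 1                           # simulate a random single-bit flip
assert hamming74_decode(code) == word  # the error is corrected transparently
```

The blue-screen scenario above is what happens when non-ECC hardware hits that flipped bit: there's no syndrome to compute, so the corruption simply propagates.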
The Xeons also have more PCI Express connectivity. That’s right—here we all thought Sandy Bridge was limited to 16 lanes and three controllers. In fact, the Xeon implementation offers 20 lanes and four controllers. Sixteen makes sense on the desktop, where enthusiasts are most likely to monopolize them with a single GPU or split them with a pair of graphics cards. In the server space, however, you have 10 Gb Ethernet controllers, SAS cards, and Fibre Channel HBAs using x8 and x4 slots. An additional four lanes of PCIe come in useful.
Finally, there’s the issue of integrated graphics. Intel uses the same die across its Xeon E3 lineup. However, its retail server parts see that engine disabled entirely. One of its low-voltage offerings includes HD Graphics 2000. And the workstation SKUs come armed with HD Graphics P3000, which we’ll cover shortly.
| Model | Base Clock | Max. Turbo Clock | L3 Cache | Cores / Threads | DDR3 Data Rate | Hyper-Threading | Turbo Boost | TDP (W) |
|---|---|---|---|---|---|---|---|---|
| Xeon E3-1280 | 3.5 GHz | 3.9 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 95 |
| Xeon E3-1275 | 3.4 GHz | 3.8 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 95 |
| Xeon E3-1270 | 3.4 GHz | 3.8 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 80 |
| Xeon E3-1260L | 2.4 GHz | 3.3 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 45 |
| Xeon E3-1245 | 3.3 GHz | 3.7 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 95 |
| Xeon E3-1240 | 3.3 GHz | 3.7 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 80 |
| Xeon E3-1235 | 3.2 GHz | 3.6 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 95 |
| Xeon E3-1230 | 3.2 GHz | 3.6 GHz | 8 MB | 4/8 | 1333 / 1066 | Yes | Yes | 80 |
| Xeon E3-1225 | 3.1 GHz | 3.4 GHz | 6 MB | 4/4 | 1333 / 1066 | No | Yes | 95 |
| Xeon E3-1220 | 3.1 GHz | 3.4 GHz | 8 MB | 4/4 | 1333 / 1066 | No | Yes | 80 |
| Xeon E3-1220L | 2.2 GHz | 3.4 GHz | 3 MB | 2/4 | 1333 / 1066 | Yes | Yes | 20 |
As you can see, there’s more choice in the Xeon E3 family than Intel’s second-gen Core i7, i5, and i3 lineups combined. And aside from one low-voltage tray model, they all include at least four physical cores. They’re also predominantly armed with Hyper-Threading and equipped with a full 8 MB of L3.
All but one of the retail server-oriented models sports an 80 W TDP, indicative of the tighter constraints on 1U rack-mounted machines. The workstation-class processors employ the same 95 W rating as Intel’s desktop processors. And the low-voltage parts are available at 45 and 20 W TDPs.
Up and down the lineup you see some of the same capabilities already discussed on the desktop: Turbo Boost, Demand-Based Switching (similar to SpeedStep), and AES-NI support. FlexMigration, though, is a capability unique to the Xeons, allowing them to operate in a virtualized environment alongside other, older virtualized servers. Normally, the risk would be migrating a VM from one system to another based on a dissimilar architecture, without the same virtualization acceleration features. FlexMigration recognizes each generation of hardware in your infrastructure and exposes only the lowest common denominator of features, preventing a compatibility clash. Disabling new features isn't ideal, of course, but it beats throwing away otherwise-usable servers just to avoid crashes.
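The "lowest common denominator" mechanism boils down to set intersection over each host's feature flags. Here's a hedged sketch of the idea in Python; the host names and feature lists are invented for illustration, and the real negotiation happens between the CPU and the hypervisor, not in software like this.

```python
# Sketch of the masking FlexMigration performs: expose to guests only the
# CPU features every host in the migration pool supports, so a VM never
# lands on a machine missing a feature it has already started using.
# Host names and feature sets below are hypothetical.

from functools import reduce

pool = {
    "new-xeon-e3-node": {"vt-x", "ept", "sse4.2", "aes-ni", "avx"},
    "older-xeon-node":  {"vt-x", "ept", "sse4.2", "aes-ni"},
    "oldest-node":      {"vt-x", "sse4.1", "sse4.2"},
}

# The safe baseline is the intersection of every host's feature set.
baseline = reduce(set.intersection, pool.values())
print(sorted(baseline))  # only these features get advertised to guests
```

Note what falls out of the intersection: the newest node's AVX and AES-NI go unused as long as the oldest box stays in the pool, which is exactly the trade-off described above.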
The 11 CPUs are being complemented by three distinct chipsets (though distinct might be a bit strong; they’re all Cougar Point derivatives that look like H67 Express). Intel classifies them as C202 (the essential server part), C204 (the standard server model), and C206 (the advanced component more apropos in a workstation).
C202 is positioned as the solution for small businesses that might have otherwise purchased a server based on desktop hardware. Its feature set takes a hit in the interest of bringing price down, though. For instance, it includes six SATA 3Gb/s ports, but no 6 Gb/s connectivity. The platform exposes 16 of the processor’s 20 second-gen PCIe lanes, enabling three controllers (instead of the available four). But the platform controller hub itself includes an additional eight PCIe 2.0 lanes, so don’t expect to run out of slots. You get up to 12 USB 2.0 ports, a legacy PCI bus, and an integrated MAC, but no graphics or audio support. C202 also lacks provisions for management, and it doesn’t support Intel’s Intelligent Power Node Manager technology, either.
| | Intel C202 | Intel C204 | Intel C206 |
|---|---|---|---|
| CPU-Based PCIe 2.0 Lanes | 16 lanes / 3 controllers | 20 lanes / 4 controllers | 20 lanes / 4 controllers |
| PCH-Based PCIe 2.0 Lanes | 8 | 8 | 8 |
| SATA 3Gb/s Ports | 6 | 4 | 4 |
| SATA 6Gb/s Ports | No | 2 | 2 |
| Rapid Storage Technology | Yes | Yes | Yes |
| USB 2.0 Ports | 12 | 12 | 14 |
| Management | No | No | AMT 7.0 / vPro |
| Node Manager | No | Yes | No |
| Legacy PCI | Yes | Yes | Yes |
| Integrated MAC | Yes | Yes | Yes |
| Integrated Graphics | No | No | Yes |
| Integrated Audio | No | No | Yes |
The C204 chipset, on the other hand, does incorporate Node Manager support, making it a viable option for data centers that want to spin their systems up and down based on the variable pricing of power (to illustrate one of the technology's uses). Intel’s “next step up” also sheds two of its 3 Gb/s SATA ports and converts them to 6 Gb/s connectors. Moreover, it enables all 20 of the CPU’s PCIe 2.0 lanes, which also makes it the better option for machines with a lot of add-in extras.
If you’re building a server, C204 is the highest-end chipset you’d want to consider. Intel’s own C204-equipped motherboard includes provisions for a PCIe-based SAS mezzanine card that doesn’t monopolize expansion slots. And it supports a little device called the Remote Management Module 4, enabling KVM redirection over TCP/IP. This lets you log into the system remotely and take control of it. Moreover, virtual media support allows you to redirect a locally-installed USB optical drive, for example, to install Server 2008 R2 on that remote machine from your own system. Pretty powerful stuff.
C206 basically looks like a fully-featured H67 Express, aside from the fact that it enables 20 lanes of PCIe 2.0 coming from the Xeon processor (and another eight emanating from itself). Intel enables the same four SATA 3Gb/s ports, two SATA 6Gb/s connectors, and 14 USB 2.0 ports. Finally, we have access to integrated graphics and HD Audio.
Our Test Board
While there seem to be plenty of C202- and C204-based boards from vendors like Intel, Supermicro, and Tyan, there’s only one C206-based platform, and it comes from Asus. Because we’re testing Intel’s Xeon E3-1275, a workstation-specific part, the decision to use Asus’ P8B WS was an easy one.

Naturally, the board is armed with an LGA 1155 interface able to take any of the four workstation-oriented Xeon E3 CPUs. Its quartet of memory slots accommodates up to 32 GB of DDR3-1333 memory, with or without ECC capabilities. Asus exposes Intel’s HD Graphics P3000 through one output option: a single-link DVI connector with a maximum resolution of 1920x1200. I consider that a disappointing limitation for a workstation-class board, but it’s an unfortunate artifact of Intel’s graphics implementation, not a poor design choice on Asus’ part.
The P8B WS includes four 16-lane PCI Express 2.0 slots able to accommodate as many as four double-slot graphics cards. The board doesn’t support SLI at all, though; it’s limited to four-way CrossFireX. Given the platform’s 20 CPU-based lanes, the x16 slots are configurable as x16/x4/x4 or x8/x8/x4/x4, borrowing connectivity from the PCH. Provided you’re not filling all of the board’s expansion with graphics cards, you also get access to a x1 PCIe slot and a legacy PCI slot.

Dual gigabit Ethernet ports, an add-on USB 3.0 controller, twin IEEE 1394 ports, and eight-channel audio round out Asus’ hardware-oriented feature set.
I gave Intel’s approach to integrated graphics on the desktop a real smack-down in Intel’s Second-Gen Core CPUs: The Sandy Bridge Review (specifically on page seven). The fact that the K-series SKUs come with HD Graphics 3000 was puzzling to me. Nobody spending extra cash on an unlocked processor cares if it includes integrated graphics. Meanwhile, the locked Core i3, i5, and i7 models are all handicapped with HD Graphics 2000, limited to six execution units (rather than 12).
Fortunately, the company’s workstation group doesn’t follow suit. All four Xeon E3-12x5s employ a form of the GT2 solution differentiated with a P, which turns into HD Graphics P3000. Hardware-wise, there is no difference between HD Graphics 3000 and P3000. So, why bother with the prefix? Intel says it’s making special changes to its graphics driver to give the P3000 solution optimized performance in workstation apps.
AMD and Nvidia do something similar. Both companies focus on a unified graphics architecture that serves desktop, mobile, and professional markets. Then they tweak the hardware and software for each application. The FirePro and Quadro drivers are what make those workstation solutions unique. Now Intel is dedicating a driver team to doing the same thing.
As a result, Intel’s representatives say that a workstation armed with a Xeon E3-12x5 processor should have the chops to contend with an entry-level discrete graphics card, like Nvidia’s $150 Quadro FX 580. If that’s true, Intel’s integrated graphics could be an enormous value, helping mitigate the higher cost of true business-class hardware.
HD Graphics P3000 enables Advanced Settings, though add-in cards offer even more options here.
Here’s our main concern: AMD and Nvidia have a lot of experience here. They know that it’s important to be transparent when it comes to the apps that get accelerated and the software for which the graphics hardware is validated. Both companies maintain explicit lists of ISV partners. If you’re a professional working in, say, Maya, you can hit up Nvidia’s site or AMD’s site and download the driver approved by Autodesk.
In comparison, this is Intel’s first time at the rodeo. It doesn’t host a list on its site (that I can find) with the optimized apps. And the most specificity I could get out of the company was that it had optimizations for Autodesk AutoCAD 2011, Bentley MicroStation, and Adobe Photoshop. Apparently, there are other titles being worked on, but none that it was willing to call out for our story.
Without a solid list of validations and optimizations, it’s impossible for a professional to know whether HD Graphics P3000 offers anything beyond Intel’s desktop solution. And as you’ll see in the benchmarks, the Core i7 and Xeon hardware performs identically in any title not explicitly targeted by Intel’s driver team.
Bentley Microstation Benchmark

| Drawing Test Name | Intel HD Graphics P3000 | Intel HD Graphics 3000 | Nvidia Quadro FX 580 |
|---|---|---|---|
| B-Spline Surfaces | 97.3 | 96.6 | 103.6 |
| Filled Hidden Line | 26.1 | 25.4 | 121.2 |
| Geometric Primitives | 56.5 | 57.2 | 88.7 |
| Geometric Primitives (Anti-Aliased) | 48.2 | 48.6 | 52.9 |
| Pattern Fill | 45.9 | 42.8 | 75.7 |
| Raster | 19.1 | 19.0 | 44.4 |
| Shaded Mesh | 36.8 | 23.9 | 36.3 |
| Text | 102.7 | 103.0 | 111.3 |
| Shadows Comparison | | | |
| Shadows Disabled | 321.8 | 323.1 | 977.7 |
| Shadows Enabled | 86.3 | 85.9 | 172.4 |
| Buffer Tests | | | |
| Copy Buffer | 348.4 | 350.8 | 1607.8 |
| Element Dynamics | 8760.5 | 8639.3 | 14,812.4 |
| Walkthrough Diagnostic | | | |
| Occlusion Testing Disabled | 24.2 | 17.0 | 38.9 |
| Occlusion Testing Enabled | 29.9 | 17.8 | 34.9 |
Here’s the Bentley Microstation benchmark, tested on three configurations. As you can see, there are only a handful of subtests where the P3000 implementation outshines the desktop-class HD Graphics 3000.
Until Intel starts taking cues from its competition in the workstation graphics space, I don’t see professionals taking HD Graphics P3000 seriously. The same folks who spend extra on a system with ECC memory want assurance that saving $150 on an add-in graphics card won’t end up costing thousands in lost work down the road.
| Test Hardware | |
|---|---|
| Processors | Intel Xeon E3-1275 (Sandy Bridge) 3.4 GHz, LGA 1155, 8 MB Shared L3, Hyper-Threading enabled, Power-savings enabled |
| | Intel Core i7-2600K (Sandy Bridge) 3.4 GHz, LGA 1155, 8 MB Shared L3, Hyper-Threading enabled, Power-savings enabled |
| Motherboard | Asus P8B WS (LGA 1155) Intel C206, BIOS 0401 |
| Memory | Crucial 8 GB (2 x 4 GB) DDR3-1333 ECC Unbuffered, CT51264BA1339.16FD |
| Hard Drive | Samsung MZ-5PA2560/000 256 GB SATA 3 Gb/s SSD |
| Graphics | Intel HD Graphics P3000 |
| | Intel HD Graphics 3000 |
| | Nvidia Quadro FX 580 |
| Power Supply | Seasonic X760 760 W 80 PLUS Gold PSU |
| System Software And Drivers | |
| Operating System | Windows 7 Ultimate 64-bit |
| DirectX | DirectX 11 |
| Graphics Drivers | Intel: 8.15.10.2345 |
| | Nvidia: 270.61 |
| Benchmarks and Settings | |
|---|---|
| Video Encoding | |
| MainConcept 2.0 | Version: 2.0.0.1555 |
| HandBrake 0.9.4 | Version 0.9.4, convert first .vob file from The Last Samurai to .mp4, High Profile |
| Applications | |
| WinRAR | Version 4.0, RAR, Syntax "winrar a -r -m3", Benchmark: 2010-THG-Workload |
| Visual Studio 2010 | Miranda IM Compile |
| Blender | Version: 2.54 beta, Syntax: blender -b thg.blend -f 1, Resolution: 1920x1080, Anti-Aliasing: 8x, Render: THG.blend frame 1 |
| ABBYY FineReader | Version: 10 Professional Build (10.0.102.82), Read PDF save to Doc, Source: Political Economy (J. Broadhurst 1842) 111 Pages |
| Adobe After Effects | CS5 10.0.2; Custom Workload, SD project with three picture-in-picture frames, source video at 720p |
| Adobe Photoshop | CS5 12.0.3; Custom Workload, Radial Blur, Shape Blur, Median, Polar Coordinates filters |
| Adobe Premiere Pro | CS5 5.0.3; Paladin Workload |
| e-on Software Vue 8 PLE | 1920x1080 landscape render, Global Illumination enabled |
| Euler3D | CFD simulation over NACA 445.6 aeroelastic test wing at Mach .5 |
| Autodesk MatchMover 2011 | Custom workload, 720p camera footage tracked in 3D space |
| Synthetic Benchmarks and Settings | |
| SPECapc 3ds Max 9 | Default Run, Hardware Shaders, Graphics, and CPU Render scores |
| SPECapc LightWave 9.6 | LightWave 3D Discovery Edition, Render and MT benchmark scores |
| SPECviewperf 10 | Multi-threaded x64 (four threads); Workloads: 3ds Max, CATIA, Maya, Pro/E, SolidWorks, Teamcenter Visualization Mockup |
| Cinebench 11.5 | CPU and GPU tests, Built-in benchmark |

First of all, it’s important to note that we’re running viewperf 10 because the newer SPECviewperf 11 doesn’t properly recognize the HD Graphics P3000/3000 engine, and several of the benchmark's sub-tests consequently fail to yield scores.
Even the results in viewperf 10, however, show the HD Graphics P3000 and 3000 parts performing similarly. Intel’s contention is that it’s optimizing for real-world scenarios where customers will benefit from improved performance, rather than benchmarks. So, we’ll have to see if any of our other graphics-oriented metrics demonstrate the workstation-specific part distinguishing itself.
Meanwhile, Nvidia’s Quadro FX 580 puts down significantly better numbers in all six sub-tests, absolutely warranting an upgrade to the $150 card in five of them. Intel’s processor graphics hold up admirably in Maya, even though this isn’t one of the apps the workstation team says it has optimized for yet.

The latest version of LightWave (10) is a significantly different piece of software than LightWave 3D 9.6, the latest build we can test using SPEC’s canned workload. Much of this has to do with input from Rob Powers, the guy who served as the animation technical director for Avatar and is now vice president of 3D development at NewTek. We’ve discussed custom benchmarking tools with Rob in the past, but don’t have anything to show for those talks as of yet. So, we’re still stuck using this much older version of LightWave to test modern graphics hardware.
The OpenGL render test actually shows our three configurations performing somewhat similarly. It’s in the Interactive and Multi-Task benchmarks where Nvidia’s Quadro FX 580 distances itself from Intel’s HD Graphics P3000/3000 products.
That a $150 discrete solution outperforms integrated graphics really isn’t much of a shocker. What’s more interesting is that HD Graphics P3000 and HD Graphics 3000 return similar results. This seems to be yet another app not yet differentiated by Intel’s workstation-specific part.

Given the similar clock rate and Turbo Boost profile of the Core i7-2600K and Xeon E3-1275, similar CPU Render scores in 3ds Max 9 are expected. The fact that Intel’s HD Graphics implementations outmaneuver the Quadro FX 580 in the Hardware Shaders test, however, is a little more surprising. In the final test, graphics, Nvidia’s card reclaims its position on top.

The sub-category chart breaks the results down further. In just about every category, the Nvidia card outperforms Intel’s best effort (except for the Hardware Shaders test).
There’s actually a bit of good news for Intel, though. While there doesn’t really seem to be any benefit to using HD Graphics P3000 over HD Graphics 3000, there’s also a case for using the processor graphics included for “free,” rather than spending an extra $150 on discrete graphics. Nvidia’s card simply doesn’t have that big of an advantage.

Our Premiere Pro CS5 test is based on the very processing-heavy Paladin trailer. We already know this project is capable of taking advantage of Nvidia’s CUDA technology. And although it’s possible to manually force CUDA support for cards otherwise not recognized by Premiere Pro CS5 (the Quadro FX 580 isn’t by default), there’s a minimum memory requirement of 750 MB, meaning this 512 MB board can’t help us out.
As a result, all three configurations run in software mode, meaning their respective CPUs—operating at the same frequencies—are on even footing. Though the benchmark does exhibit some variance, we’re confident in calling this one a three-way tie. It’d take an Nvidia card with more memory to make a dent in the render time.

After Effects is similarly processor-limited. The graphics products in our benchmark platforms don’t make a difference in the outcome.

The same holds true for our Photoshop CS5 benchmark, which is optimized to take advantage of threaded processors, but doesn’t benefit from more powerful GPUs.
If your primary reason for buying a workstation is to work in After Effects or Photoshop, it’s good to know that money is better spent on a more powerful CPU or a better storage subsystem than on a faster graphics card.

Most of the transcoding apps in our benchmark suite are pure measures of CPU performance, optimized for threading though they may be. MainConcept 2.0 consequently fails to give us anything interesting to look at.
Now, you’re probably wondering why we didn’t just throw in some Quick Sync-enabled software to put up against Nvidia’s CUDA-accelerated card. Unfortunately, although Intel claims that its Xeon E3 parts feature full Quick Sync support, we couldn’t get the feature working. We gave the latest version of CyberLink’s MediaEspresso 6.5 a shot and discovered that only hardware-accelerated decode works. The encode functionality isn’t recognized at all.
The result isn’t terrible—our transcode job finishes in 38 seconds instead of the 22 seconds we saw in our Sandy Bridge review (and that’s still better than CUDA- or APP-accelerated GPU-based alternatives). However, it’s fair to say that if you’re going to be doing transcode work, the Core i7-2600K’s desktop-oriented feature set is currently better-supported than Xeon’s more professional list of capabilities.

HandBrake is similarly CPU-constrained, and shows all three configurations performing similarly.

Per Wikipedia: “In cinematography, match moving is a visual-effects technique that allows the insertion of computer graphics into live-action footage with correct position, scale, orientation, and motion relative to the photographed objects in the shot. The term is used loosely to refer to several different ways of extracting motion information from a motion picture, particularly camera movement. Match moving is related to rotoscoping and photogrammetry. It is sometimes referred to as motion tracking.”
The first step in match moving is identifying and tracking features—and that’s what our MatchMover 2011 benchmark does, using custom footage taken by Jon Carroll on Hollywood’s Walk of Fame.
We’ve established that this app is lightly threaded, if at all. It really seems to like the Sandy Bridge architecture, though. Compare the above graph to the one in Intel Xeon 5600-Series: Can Your PC Use 24 Processors?, where a 2P Xeon 5600-based setup takes more than seven minutes to complete the same task.

Rendering in Blender is another processor-oriented task, and another benchmark that doesn’t care what sort of graphics card you’re running. That’s good news if you don’t anticipate running other 3D-oriented apps and can get away with HD Graphics P3000 for “free.”

The processor-oriented component of Cinebench is even across the board, just as we’d expect. But the graphics test favors Intel’s Xeon E3-1275. Intel’s Core i7-2600K takes second place. And the Quadro FX 580 falls to last place. Strange? A little. But that’s still an impressive finish for Intel’s Sandy Bridge-based processors in this OpenGL-based benchmark.

Vue is used to create, animate, and render 3D environments, so it’s not surprising that this professional app is well-optimized for multi-core, multi-threaded workstations. Our dual-socket Xeon 5600 system finishes this test in under 10 minutes, and a Core i7-980X can wrap it up in 18. These Sandy Bridge-based configs need around 24 minutes to get the job done. And it doesn’t matter which graphics solution you use, either.

FineReader 10 is an OCR app—there’s no real reason that one of these setups should be any faster than the others. And yet, the Core i7-based machines turn in the better performance.

For the same reason we wouldn’t expect much differentiation in ABBYY FineReader, there’s no architectural explanation for the Xeon’s one-second advantage in WinRAR. This is a threaded benchmark that we’d expect to run just as well on Intel’s Core i7-2600K as the Xeon E3-1275.

You’ve been asking for a compile test, so we built one using Visual Studio 2010. I’m throwing it in here since it’s relevant in a workstation context. But of course it’s not going to show any difference between these three 3.4 GHz setups, each capable of running at up to 3.8 GHz.
Measuring power in a story like this is never going to be an exercise in precision. If you run a processor-limited test, graphics get ignored. A graphics-heavy benchmark might overemphasize the GPU at the expense of the rest of the platform.
Logging the SPECapc 3ds Max 9 benchmark seemed like a good compromise, though. Not only does the test have its own CPU render component to complement the graphics tests, but it also seemed to show Intel’s HD Graphics P3000/3000 engine in a fairly balanced light.

What the SPECapc score doesn’t tell you is how much faster the Quadro FX 580 wraps this test up. The entire thing takes just over 15 minutes with discrete graphics. It takes more than 25 minutes on either the Xeon E3 or Core i7 platforms using Intel’s processor graphics.
If you flatten each of those three lines, you discover that both systems pairing Intel CPUs with Intel graphics average about 84 W over the course of the run. Adding the discrete card increases average system power use to nearly 96 W (not bad, by the way, though it suggests only moderate utilization of the GPU).
Unfortunately, those averages mean very little beyond confirming that adding a graphics card increases power use (duh). You have to multiply them by the fraction of an hour it takes the workload to complete. Do that and the story turns around. Because the Intel-graphics systems take so long to finish the job, they use more than 35 watt-hours. Adding the discrete card drops that figure to roughly 25 watt-hours.
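For the curious, that arithmetic is spelled out in a few lines of Python below. The runtimes are approximations of our logged runs ("more than 25 minutes" on integrated graphics, "just over 15" with the Quadro), not exact stopwatch figures.

```python
# Average power only tells half the story; energy per completed job is
# power times runtime. Runtimes below approximate our SPECapc 3ds Max 9
# logging runs.

def watt_hours(avg_watts, minutes):
    """Energy consumed by a job averaging avg_watts over the given runtime."""
    return avg_watts * minutes / 60.0

integrated = watt_hours(84, 25.5)  # Intel HD Graphics: lower power, slower
discrete   = watt_hours(96, 15.6)  # Quadro FX 580: higher power, much faster

# Despite drawing more at any instant, the discrete setup finishes the job
# on less total energy: roughly 35.7 Wh integrated versus 25 Wh discrete.
assert discrete < integrated
```

The same formula explains why the conclusion flips for CPU-bound workloads: if the runtime is identical either way, the configuration with the lower average power always wins.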
Now, I’m not trying to say a workstation with a Xeon and a Quadro card in it is going to reflect that power graph for everyone. Many, many professional workloads are exclusively processor-bound, and for those, adding a discrete board only increases power consumption. Log a run through Premiere Pro CS5, for example, and the Nvidia chip will sit idle, unused.
So really, our figures are most useful for two things: showing that the Xeon E3-1275 has the exact same power profile as a Core i7-2600K, and demonstrating that although an all-integrated solution draws less power at any given moment, it can also drag its feet to the point where adding discrete graphics actually ends up more power-friendly over a complete workload.
What’s HD Graphics P3000 Worth?
Many of the benchmarks we ran, which you’re already accustomed to seeing in our regular processor reviews, are utterly anti-climactic. And while that’d seem like bad news for a company touting the extra graphics-oriented optimizations inherent to its workstation-specific Xeon E3-1200-series CPUs, it’s actually not.
For any workload that doesn’t require a potent graphics processor, HD Graphics P3000 performs just fine, delivering performance similar to a setup with discrete graphics. As you saw, that includes a majority of our benchmarks. Not having to drop a discrete card into your entry-level workstation frees up money. If I were doing a lot of Visual Studio or rendering work, I’d sink those savings into an SSD and use the Xeon’s value-added HD Graphics engine. Why not? It’s there, you’ve already paid for it, and it’s modestly capable.
There are even a handful of situations where driver-oriented tweaks help Intel’s HD Graphics P3000 compete with entry-level professional cards like the Quadro FX 580. Make no mistake, though—the number of apps for which Intel is currently optimizing is small. We couldn’t get an official list of titles that run better on Xeon’s P3000 (rather than Core i7’s 3000) today, much less a roadmap for apps the company plans to optimize for in the future. Without this critical information, it’s impossible for professionals to make an informed decision with regard to whether HD Graphics is good for them, or if they’ll need an add-in board for official validation. For that reason, if your money-making app depends on 3D performance, don’t even chance it—buy the discrete GPU.
Nvidia’s been in the professional graphics space a long time, and its long list of validated drivers is going to be hard for Intel to match. Similarly, AMD makes it very easy to see if your app of choice is certified on its professional graphics products. Intel needs more of this sort of transparency if it hopes to win over customers on the merits of its graphics engine.
How About The CPU?
Now, there are features you explicitly give up when it comes to adopting Xeon over the desktop-class Core i7. Overclocking, for example, is out of the question. While the Core i7-2600K at $317 gives you an unlocked multiplier ratio (easily able to hit 4.5+ GHz), the Xeon E3-1275 at $339 is locked. Additionally, while Intel claims Quick Sync is enabled on the Xeon E3 lineup, none of the apps we used to test Quick Sync on the desktop seem to recognize it on the Xeon. Perhaps that’s just a software support issue.
On the other hand, though, the Xeon enables ECC memory support. It lives on a platform that enables additional PCI Express connectivity. It connects to a chipset that offers RAID support for Linux-based operating systems. In other words, there are real business-specific reasons to spend an extra $20 on a Xeon running at the same speed as a Core i7. And for the folks who need those specific differentiators, but were previously priced out of the workstation market, Xeon E3 does make sense.
And we only looked at the flagship Xeon E3-1275, too. If you slide down the stack to a processor like the Xeon E3-1225, which includes four cores, 6 MB of last-level cache, and the same HD Graphics P3000 engine for $194, you end up comparing to Core i5-2400. The desktop chip has the same basic specs for $184, but it’s limited to HD Graphics 2000. The Xeon’s better graphics and more professional feature set make it the obvious choice for an entry-level workstation without the need for a discrete GPU.
Where Art Thou, Motherboards?
It’s interesting that Intel chose not to build a motherboard on its own C206 chipset, opting instead to let Asus take the reins for its launch. With that said, the P8B WS is a solid workstation-class platform. Of course, it looks a lot like so many of the other P67 boards we’ve seen (despite its onboard graphics support), which is expected given the similarities between both Cougar Point-based chipsets. Given Asus’ thorough job, the lack of C206-based workstation motherboards from any other vendor is understandable. Server-oriented C204 and C202 platforms seem to be much more plentiful.

