Content creation applications generally deliver consistent results that are interesting from several angles. First, they are clearly a standard use case for workstation processors. Second, they tend to be sensitive to changes in processor frequency, IPC, and memory data rate. Third, the benchmarks tend to scale well with core count, even across multiple sockets. Those attributes make these types of benchmarks very attractive.
3ds Max

Our 3ds Max workload demonstrates a theme repeated often in this review: the newest Xeon E3-1275 v3 finishes first, capping consistent performance increases over the years.
Blender

Here we appear to be measuring the effect of a 100 MHz speed bump and faster memory as we glide from the Xeon E3-1275 to the -1275 v2.
Cinebench

Cinebench demonstrates a steady speed-up with each successive architecture. This benchmark also lets us dig into the impact on single-threaded and parallelized performance. We like Cinebench because it's based on a real-world engine, and because it leverages as many cores as we throw at it (even from multi-processor Xeon and Opteron configurations).
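The many-core scaling that makes Cinebench attractive is easy to illustrate with a toy "tile renderer": split CPU-bound work across a process pool and time it at different worker counts. This is only a sketch under my own assumptions; the function names and the workload are mine, not Cinebench's.

```python
import multiprocessing as mp
import time

def render_tile(n):
    # CPU-bound stand-in for rendering one tile: a sum of squares.
    return sum(i * i for i in range(n))

def benchmark(workers, tiles=8, tile_size=200_000):
    """Time `tiles` units of work spread across `workers` processes."""
    start = time.perf_counter()
    with mp.Pool(workers) as pool:
        results = pool.map(render_tile, [tile_size] * tiles)
    elapsed = time.perf_counter() - start
    return elapsed, sum(results)  # checksum guards against skipped work

if __name__ == "__main__":
    t1, _ = benchmark(1)
    tn, _ = benchmark(mp.cpu_count())
    print(f"speedup with {mp.cpu_count()} workers: {t1 / tn:.2f}x")
```

On embarrassingly parallel work like this, a quad-core chip should approach a 4x speedup; real renderers fall short of linear scaling because of memory bandwidth limits and synchronization overhead, which is exactly what benchmarks like this expose.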
- The Intel Xeon E3-1200 Series' Evolution
- Three Generations Of Xeon E3-1275 CPUs
- Supermicro SuperWorkstation 5037A-iL: Our LGA 1155 Test Platform
- Supermicro SuperWorkstation 5038A-iL: Our LGA 1150 Test Platform
- Hardware Setup And Benchmarks
- Results: Synthetics
- Results: Adobe CS6
- Results: Content Creation
- Results: Productivity
- Results: Compression Apps
- Results: Media Encoding
- Power Consumption And Noise
- Xeon E3-1275 v3: A Lot Like Haswell On The Desktop, With Pro Features
I think 'meh' will be the overwhelming majority consensus on this chip
That's kind of the way I see it. I don't think the Xeon is anything to write home about like some people on this board do, and the average user and/or gamer won't notice a lick of difference between an i5, an i7, and a low-end Xeon. I would only recommend them for things like Photoshop and heavy-duty CS5 usage, but even then an i7-4770K or i7-4820K would be a better choice.
The only real threat from ARM is to profit margins: once ARM catches up, it may become more difficult for Intel to maintain the large premiums they currently command across most markets.
In addition, the chipsets and platforms used with Xeons are more stringently held to industry standards, making them known quantities for device makers. Enterprise RAID controllers are frequently unsupported on a standard desktop system with a Core i7-4770 and Z87 chipset, while they would be supported on a Xeon E3-1275 v3 with a C226 chipset, even though the actual silicon design is exactly the same between the two.
There really isn't any difference in the silicon itself between a Haswell Core i7 and a Haswell Xeon E3, so there won't be a performance difference. The difference is in the stability of equipment surrounding each.
the1kingbob - I have AMD Opteron 3000, 4000, and 6000 series chips in the lab and use them daily. The Opteron 3300 series would be the closest platform, but its performance is significantly behind the Haswell Xeon E3-1275 v3. Those Opterons also do not have integrated GPUs like the E3-12x5 v1, v2, and v3 chips do, so they are hard to compare.
And I never said they were - at least for now. But ARM might get there if it manages to sustain its current pace of improvement for a few more years while AMD and Intel remain, for all intents and purposes, stuck.
Yes, Intel released some cut-down x86 chips to compete with ARM for low-power market segments but this is only a temporary fix since Intel will likely add much of that stuff back in to keep up with ARM as ARM performance ramps up. The interesting part in 3-5 years will be where ARM will go once they hit the same steep diminishing return slope AMD and Intel are on.
Also, I don't understand people who actually buy prebuilt Dell/HP/etc servers for small-scale stuff: they use poor quality hardware (Seagate drives and not WD, for instance, some no-name RAM brands, etc.), warranty is short, power supply and case cooling is freaking noisy and inefficient (we have a couple machines here - one is an Intel ATX "server" enclosure and one is a Dell blade server - both run louder and hotter on idle than my gaming rig with 11 fans on full load)... And they cost twice as much as a custom-built rig (even with "server/workstation" grade hardware!) with same or better specs. What's the point? *shrugs*
Another annoyance is how many threads on the Internet simply slap a "server grade" label on Xeons, chipsets like C226, Intel server boards, etc etc and say that they're better for everything "professional" than quality desktop hardware just because "it is server grade hardware". I am really sick of hearing this. Tom's, can you please do a solid article comparing IB/Haswell Core i5/i7 (non overclocked, because you can't OC Xeons, at least they are not meant for it) with E3 v2/v3 and E5/E5 v2 Xeons? (Maybe E7 too, though they will shred i5/i7 in all professional tasks due to sheer amount of cores and threads, despite using an outdated Nehalem architecture)
It depends on what your server does. For mission-critical stuff, a single undetected error at the wrong place over the system's entire lifespan can be several times more expensive than the extra cost of ECC.
I had a DIMM with a single bad bit on it that memtest86 did not catch on the first pass. I wasted a couple of days trying to figure out why my system became unstable after a few days, before I finally let memtest run overnight again and woke up to a dozen errors, all on the same bit at the exact same address. Then I spent two days trying to fix all the OS files that got damaged, gave up, and ended up spending two more days reinstalling the OS and all my programs. Even at an entry-level wage of $15/hour, all the time wasted on that single bad bit would have cost me over $400.
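As an aside on how ECC catches exactly this class of fault: ECC DIMMs use a SECDED code that locates and flips back any single bad bit on every read. A toy sketch of the idea using a Hamming(7,4) code follows (my own illustration at toy scale; real DIMMs protect 64 data bits with 8 check bits):

```python
def hamming74_encode(d):
    """Encode 4 data bits [d1, d2, d3, d4] into a 7-bit Hamming codeword."""
    d1, d2, d3, d4 = d
    p1 = d1 ^ d2 ^ d4  # parity over codeword positions 3, 5, 7
    p2 = d1 ^ d3 ^ d4  # parity over codeword positions 3, 6, 7
    p3 = d2 ^ d3 ^ d4  # parity over codeword positions 5, 6, 7
    return [p1, p2, d1, p3, d2, d3, d4]  # positions 1..7

def hamming74_correct(codeword):
    """Locate and fix a single flipped bit; return (data, error_position)."""
    c = list(codeword)
    s1 = c[0] ^ c[2] ^ c[4] ^ c[6]  # re-check positions 1, 3, 5, 7
    s2 = c[1] ^ c[2] ^ c[5] ^ c[6]  # re-check positions 2, 3, 6, 7
    s3 = c[3] ^ c[4] ^ c[5] ^ c[6]  # re-check positions 4, 5, 6, 7
    syndrome = s1 + 2 * s2 + 4 * s3  # 0 = clean, else 1-based error position
    if syndrome:
        c[syndrome - 1] ^= 1  # the syndrome points straight at the bad bit
    return [c[2], c[4], c[5], c[6]], syndrome
```

Flip any one of the seven bits and the syndrome pinpoints it, so the original data comes back intact - which is why a single stuck bit that cost days of troubleshooting on non-ECC memory would have been silently corrected (and logged) on an ECC system.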
The funny thing is that Intel is now facing the same problem. Today they sell $4,000 CPUs while ARM is fast enough.
And we have the 64-bit issue.
x86 uses EXTENSIONS.
ARM/RISC uses a complete 64-bit instruction set.
That's why 64-bit Windows is 3% slower (and now all the IT "experts" believe 64-bit has no merit except for more memory. They seem to forget that real computers were 64-bit in 1990, and there was no 4 GB of memory back then. RISC went 64-bit for PERFORMANCE.)
Intel cannot and will not compete with ARM. ARM collects its 10-cent licensing fee per CPU; Intel, on the other hand, is used to making 80% margins on $400-4,000 CPUs. That's why Intel can blow billions pumping out a new CPU generation or revision every year. Let's all admit it: every update since Sandy Bridge has given us less than 20% more performance. Imagine any other company spending four designs, plus two node shrinks, to gain only 20%!
"ARM is too far behind."
ARM outsells Intel 100-to-1 today. That's a hint.
Take an A7 SoC and compare it to Intel per MHz and you will find that ARM is actually faster. The only thing making Intel faster today is that they clock higher and pack in more cores, because they can spend 45-150 W.
Who knows how fast an ARM chip would be if it used 150 W.
We should all be happy to see x86 dying. Real 64-bit is huge!
Plus, for customers it's better that a SoC costs $25, while Intel has to burn a huge die area just on decoding CISC; Intel can never compete on price.
Intel will end up where RISC is today:
high-end servers and high-end PCs.
The rest will be RISC, like it was 15 years ago and like it should be. By the way, the cloud existed 15 years ago too, on Unix. It's funny how the dark ages of Windows set the IT industry back 20 years. The question is who will save us now that Steve Jobs is dead.
(And yes, it was Steve who brought us a working Unix desktop, which is why we have smartphones and tablets today. Tablets now outsell PCs, and smartphones have been outselling PCs 3-to-1, all running Unix/Linux apart from Microsoft's 3% holdout.)
For the same price one can get a Xeon with 8 threads and full virtualization support or an overclockable i5 with 4 threads (whose warranty gets voided with overclocking...).
AMD doesn't do this with its unlocked CPUs or APUs, and even their multiplier-locked chips have all the features enabled. That's appreciated, because it lets the consumer decide what price/performance/features THEY want.
Intel doesn't care about anything but money, though. That's why there has never been a quad-core Intel CPU for less than $185. That's why Intel disables useful features to create artificial market segments. I'm all for companies making money, but price fixing and purposely limiting products to entice people to spend more than they really should have to is not OK.
Thankfully anyone who wants to use a lot of VMs can get full hardware support from AMD for as little as $160 (FX-8320) and have quite acceptable performance.
While it's true that ECC would be helpful in such a situation, don't forget that it would not be able to combat a faulty RAM module for long - at some point you would still have to replace the module, the sooner the better.
To be honest, that's not a big deal. Overclockers generally don't run VMs, and professionals who run VMs usually can't be bothered to overclock. There might be stability issues with VT-d under overclocking, which could be why Intel leaves it out (I speculate)... as for VT-x, it IS present in unlocked Intel CPUs, even as far back as the i7-2600K.
I hope I am not wrong.
That's a completely unfounded and unhelpful generalization.
Intel has had VT-x since the Pentium 4. However, VT-x is only half of the virtualization technology available. Unlike Intel, AMD doesn't hamstring its virtualization support by breaking it into two pieces and withholding half of it to create artificial, price-inflated market segments. With AMD it's either there or it's not, and it's been there in its entirety on almost every chip since 2006.
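On Linux, whether a given chip exposes hardware virtualization can be read straight from the CPU flag list: `vmx` marks Intel VT-x and `svm` marks AMD-V. A small sketch (the helper name is my own; VT-d, being a chipset/IOMMU feature, does not appear in these flags):

```python
def virtualization_flags(cpuinfo_text):
    """Scan /proc/cpuinfo-style text for hardware virtualization flags.

    'vmx' = Intel VT-x, 'svm' = AMD-V. VT-d/AMD-Vi are IOMMU features and
    do not show up here; on Linux, check /sys/kernel/iommu_groups instead.
    """
    for line in cpuinfo_text.splitlines():
        if line.lower().startswith("flags"):
            flags = set(line.split(":", 1)[1].split())
            return {"vt-x": "vmx" in flags, "amd-v": "svm" in flags}
    return {"vt-x": False, "amd-v": False}
```

Typical usage would be `virtualization_flags(open("/proc/cpuinfo").read())`; note the flags report what the silicon supports, while the BIOS can still disable the feature.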
Sorry, but your sweeping generalizations and placations do not a justification make.
You clearly have not worked with server-grade hardware. I have for the last three years, and I can tell you there is a major difference.
First off, ECC memory is not overrated. It makes a big difference. You don't want your mail server crashing every week due to memory errors, or files getting corrupted on your storage server.
Second, I have dealt with cheaply built, overpriced servers (Supermicro, HP DL180, generic OEM parts) recently, and I have dealt with inexpensive, good-quality servers (Dell R520, R515, T110 II), all within the last couple of months. I know from experience that a good-quality server makes the difference between a system that just works once it's set up and weeks of troubleshooting only to find out a cheap Realtek NIC chip is causing performance issues that slow down the whole system.
I own two Dell T110 II machines right now. They cost me, with 3 year warranty, less than equal components I would have had to put together to get equal performance. One is a E3-1230 and the other is a E3-1220v2. One was slightly less than $800, and the other was slightly over $600, with dual port iSCSI and TOE offload NICs, one with 8GB and the other with 16GB of memory. Also, these are near silent and extremely reliable. I use one for my router/DHCP/DNS server, among other functions.
Yes, they both used Seagate drives, but they are Constellation ES drives. From my experience with an install base of over 3,000 Seagate drives and over 1,000 WD RE3 drives, along with others, the Seagate Constellation drives are the most reliable, by far, of any drives on the market right now. Hitachi Ultrastar would be slightly behind them, WD RE drives a ways behind those, and Toshiba/Fujitsu drives far in the back.
Third, yes, the performance of Xeon processors is nearly equal to the Core i5 and i7 series. Oddly, Core i3 chips since Ivy Bridge have had ECC support for low-level server work. Intel just segments the chips differently, with lower clock rates and higher core counts in the E5 line. (I'd love to find an E5-2400 v2 chip with four cores and a >3.2 GHz clock rate, but the quad-cores cap out at 2.2 GHz. A single-socket LGA 1356 board would be nice, too, but those don't seem to exist.) I believe it is all about providing tiers for VM hosts rather than high performance for servers. There really wouldn't be a point in comparing the two platforms.
Finally, as a professional systems admin, a DIYer, a gamer, my family's tech support and system builder, and finally an overclocker, all for over 20 years, I do many things with my system that most people don't. I run VMWare Player on my main machine, which has a Core i7 3930k overclocked to 4.5GHz and 32GB of RAM, to give me practice and training on Windows and Linux server builds. I have all my storage on one, separate, machine running Starwind iSCSI. Finally, I have the two Dell boxes, one of which runs VMWare ESXi 5.1 for additional, long term VM servers. I'm all over the place. I need to keep current on software to stay relevant in my career, but I also like to game and play around with various configs.
In short, you really don't know what you're talking about. You need more experience in the rest of the world before you go spouting out things like this.