
The Rules, Contenders, And Test Setup

Three Xeon E5 Server Systems From Intel, Tyan, And Supermicro
We set some ground rules for the vendors that wanted to participate in this story, instructing each manufacturer that its platform had to have:

  • At least two LGA 2011 sockets supporting CPUs of at least 135 W, to accommodate the Xeon E5-2690 processors we planned to use.
  • At least two gigabit Ethernet ports (excluding any dedicated IPMI NIC). Most server motherboards already include at least two gigabit ports, and although some vendors offer solutions with four or more on-board ports, two was all we required.
  • IPMI (or similar remote monitoring) and KVM-over-IP, either integrated or provided through an included add-in module. Nobody publishes industry-wide numbers breaking out how many servers ship with IPMI 2.0 and KVM-over-IP built in versus provided by an external controller, though we've heard that as many as 70% of these platforms include these features.
  • No additional SAS controller (such as a value-added LSI SAS2008); PCH-enabled SAS, however, is OK. One of the major new features of the Patsburg chipset is its ability to enable up to eight SAS 3Gb/s ports using an on-die Storage Controller Unit. This is an aggressive move by Intel, as LSI sells many of its own controllers that augment Intel-based servers. If Intel achieves feature parity with LSI, expect this to become a major industry trend.
  • No built-in InfiniBand or 10 GbE, as typical servers in this segment do not have those more enterprise-oriented extras as integrated features. Vendors do offer solutions with them, but in lower volumes.
  • A price, including any required accessories, below $1800. This figure may seem high, but it is actually very close to average for a barebones chassis, motherboard, and redundant power supplies in the segment we're targeting.
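To make the remote-management requirement above concrete, the sketch below shows how an IPMI 2.0 BMC is typically exercised from an admin workstation with the open-source ipmitool utility. The BMC address and credentials are placeholders, not values from any of the review systems.

```shell
#!/bin/sh
# Hypothetical BMC address and credentials -- substitute your own.
BMC_HOST=192.168.1.50
BMC_USER=admin
BMC_PASS=changeme

# Wrapper around ipmitool's RMCP+ ("lanplus") interface, which IPMI 2.0 BMCs speak.
bmc() {
    ipmitool -I lanplus -H "$BMC_HOST" -U "$BMC_USER" -P "$BMC_PASS" "$@"
}

# Typical checks during burn-in (commented out: they require a live BMC):
# bmc chassis status     # power state, last restart cause
# bmc sdr list           # temperatures, fan speeds, voltages
# bmc sel list           # system event log
```

KVM-over-IP itself is normally reached through the BMC's web console rather than through ipmitool; the commands above only cover out-of-band monitoring.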


Generally, most vendors complied with our list, which made our comparison easier. There were a few exceptions, which we'll note as we go through the story. Notice that we didn't ask the manufacturers to adhere to a size restriction, since we wanted to make sure that vendors had the flexibility to submit different types of servers. As the picture on the previous page reveals, all three vendors submitted 2U chassis. That's a pretty telling indicator of what the industry considers its sweet spot.

The other key piece of this puzzle is the processors we're using. Intel sent two Xeon E5-2690 CPUs, which are very similar to the workstation E5-2687W parts reviewed previously (except that 200 MHz of base clock rate is sacrificed for a 15 W-lower TDP, a trade-off that makes sense in a cramped 2U enclosure). The Sandy Bridge-EP-based Xeon E5 platform's new features were covered in this short video, which is worth watching as an overview.

Tom's Hardware Talks To Intel Architect Ken Creta

Another important consideration was our choice of memory. To stay vendor-agnostic, we used Micron, Kingston, and Samsung memory with each barebones system, and we tested each platform with both unbuffered ECC and registered DIMMs. After verifying that compatibility was uniform across the systems, we settled on registered DIMMs for testing, since they're most common in dual-processor servers.
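For readers who want to verify which DIMM type a Linux-booted system actually recognizes (registered versus unbuffered), the SMBIOS tables report it; the snippet below is a generic check along those lines, not part of our formal test procedure.

```shell
#!/bin/sh
# Report the populated DIMM slots and their module type.
# The "Type Detail" field reads "Registered (Buffered)" for RDIMMs
# and "Unbuffered" for UDIMMs.
# Requires root, since dmidecode reads the SMBIOS/DMI tables.
check_dimms() {
    dmidecode -t memory |
        grep -E 'Locator:|Size:|Type Detail:' |
        grep -v 'Bank Locator'
}
# check_dimms    # commented out: needs root and real hardware
```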

Test Hardware
  • Processors: 2 x Intel Xeon E5-2690 (Sandy Bridge-EP), 2.9 GHz, eight cores, LGA 2011, 8 GT/s QPI, 20 MB shared L3, Hyper-Threading enabled, power savings enabled
  • Barebones: Supermicro 6027R-N3RF4+; Tyan GN70-K7053 (S7053 + KGN70M1); Intel R2208GZ4GC "Grizzly Pass"
  • Memory: Micron 64 GB (16 x 4 GB) DDR3-1333 CL9 Registered ECC; Samsung 64 GB (16 x 4 GB) DDR3-1333 CL9 Registered ECC; Kingston 32 GB (16 x 4 GB) DDR3-1333 CL9 Unbuffered
  • Hard Drive: 2 x OCZ Vertex 3 120 GB in RAID 0
  • Graphics: On-board graphics from each platform
  • Power Supply: Included power supplies from each platform

System Software And Drivers
  • Operating System: Windows 7 Ultimate 64-bit; Ubuntu Linux 10.10 64-bit


We used an abbreviated test suite for this round-up, since the performance differences between platforms turned out to be very slight (unsurprising, given common processors and memory). All tests ran on a clean installation of Microsoft Windows 7 64-bit, except for the Folding@Home test, which used Ubuntu Linux 10.10 64-bit. That older Ubuntu release is probably the most widely used distribution for this application, as it produces consistently high performance.

Benchmarks and Settings
Applications
  • Adobe After Effects CS5.5: Tom's Hardware workload; SD project with three picture-in-picture frames, source video at 720p, Render Multiple Frames Simultaneously
  • Adobe Photoshop CS5: Tom's Hardware workload; Radial Blur, Shape Blur, Median, and Polar Coordinates filters
  • Adobe Premiere Pro CS5.5: Paladin workload; Maximum Render Quality, H.264 Blu-ray profile
  • Euler3D: CFD simulation over the NACA 445.6 aeroelastic test wing at Mach 0.5
  • 3ds Max 2012 (Version: 10 x64): Rendering Space Flyby Mentalray (SPECapc_3dsmax9), Frame: 248, Resolution: 1440 x 1080
  • Visual Studio 2010: Compile Chrome project (1/31/2012) with devenv.com /build Release
  • Folding@Home: 6903 work unit under Ubuntu 10.10 64-bit; -smp -bigadv CLI client parameters
Synthetic Benchmarks and Settings
  • Cinebench 11.5: CPU test, built-in benchmark
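For reference, the Folding@Home entry above boils down to a launch along the following lines. The client binary name and working directory are assumptions based on the era's Linux SMP client, not details specified in this article.

```shell
#!/bin/sh
# Hypothetical launcher for the classic Folding@Home Linux SMP client (fah6).
# -smp    : use the symmetric multiprocessing core, spanning all cores/threads
# -bigadv : request large "bigadv" work units, such as the 6903 project used here
start_folding() {
    cd "$HOME/folding" || return 1   # assumed install directory
    ./fah6 -smp -bigadv "$@"
}
# start_folding    # commented out: requires the fah6 binary and a configured client
```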
  • mayankleoboy1, May 30, 2012 4:53 AM (+6)
    the charts are looking strange. they need to be reduced in size a bit....
  • EzioAs, May 30, 2012 5:55 AM (0)
    Quote: mayankleoboy1: the charts are looking strange. they need to be reduced in size a bit....


    I agree. Just reduce it a little bit but don't make it too hard to see
  • dogman_1234, May 30, 2012 6:47 AM (0)
    Cool. Now, can we compare these to Opteron systems?
  • TheBigTroll, May 30, 2012 11:06 AM (+4)
    no comparison needed. intel usually wins
  • willard, May 30, 2012 2:08 PM (+4)
    Quote: TheBigTroll: no comparison needed. intel usually wins

    Usually? The E5s absolutely crush AMD's best offerings. AMD's top of the line server chips are about equal in performance to Intel's last generation of chips, which are now more than two years old. It's even more lopsided than Sandy Bridge vs. Bulldozer.
  • Malovane, May 30, 2012 2:09 PM (+2)
    Quote: dogman_1234: Cool. Now, can we compare these to Opteron systems?


    As an AMD fan, I wish we could. But while Magny-Cours was competitive with the last gen Xeons, AMD doesn't really have anything that stacks up against the E5. In pretty much every workload, E5 dominates the 62xx or the 61xx series by 30-50%. The E5 is even price competitive at this point.

    We'll just have to see how Piledriver does.

  • jaquith, May 30, 2012 2:51 PM (0)
    Hmm...in comparison my vote is the Dell PowerEdge R720 http://www.dell.com/us/business/p/poweredge-r720/pd?oc=bectj3&model_id=poweredge-r720 it's better across the board i.e. no comparison. None of this 'testing' is applicable to these servers.
  • lilcinw, May 30, 2012 6:04 PM (+3)
    Finally we have some F@H benches!! Thank you!

    Having said that I would suggest you include expected PPD for the given TPF since that is what folders look at when deciding on hardware. Or you could just devote 48 hours from each machine to generate actual results for F@H and donate those points to your F@H team (yes Tom's has a team [40051] and visibility is our biggest problem).
  • dogman_1234, May 30, 2012 6:46 PM (+1)
    Quote: lilcinw: Finally we have some F@H benches!! Thank you! Having said that I would suggest you include expected PPD for the given TPF since that is what folders look at when deciding on hardware. Or you could just devote 48 hours from each machine to generate actual results for F@H and donate those points to your F@H team (yes Tom's has a team [40051] and visibility is our biggest problem).

    The issue is that other tech sites promote their teams. We do not have a promotive site. Even while mentioning F@H, some people do not agree with it or will never want to participate. It is a mentality. However, it is a choice!
  • lilcinw, May 30, 2012 6:58 PM (+2)
    I don't expect promotion at this point, just basic recognition would be appreciated.
  • Luscious, May 30, 2012 9:05 PM (+1)
    I've seen a few 2U 4-node blade servers from Asus and Supermicro running dual-socket E5's - just the thought of populating a single 42U rack with those things makes the mind boggle (168 processors, 1344 cores, 2688 threads, 42TB of total RAM and 1008TB of storage).

    F@H on such a monster? Do the math and you'll see that just after one year of 24/7 operation you would rack up over 3 billion points, putting you in the top 10 for teams and no.1 spot for single user.

    That's assuming, of course, that you've forked out $20k for your monthly power bill to run that fully-stocked 42U rack and paid $240k to your utility company for the entire year. Then there's the cost of the hardware itself - around $26k for each 2U server, or around a cool $600,000.

    SPEND MONEY FAST
  • utomo88, May 31, 2012 12:43 AM (0)
    We need somebody to design new server which is powerful and can handle a lot of memory and harddisk and affordable price.
    all powerful server are expensive now.
    I believe market for cheap but powerful server are big, and no one is working on this area.
    I know the profit is not big, but by big quantity it mean big money too :) 
  • bit_user, May 31, 2012 2:45 AM (0)
    Would be cool to test both a E5 Xeon and a Sandybridge-E of the same clock speed, in the same C602 motherboard (Supermicro's X9SRA), to see if DDIO has any performance impact on 3D and GPGPU benchmarks.
  • bit_user, May 31, 2012 8:11 AM (0)
    Also, how about measuring the scaling from 1x 4-core E5 Xeon -> 1x 8-core E5 Xeon -> 2x 4-core E5 Xeon?

    The point is that memory is directly connected to 1 CPU only. Adding a 2nd CPU doubles aggregate bandwidth, but could actually hurt performance, if the software isn't written to carefully to localize data and manage affinity between threads & CPUs.
  • _zxzxzx_, May 31, 2012 2:44 PM (0)
    Would be nice to see how these compare to the AMD systems.
  • saturn85, June 2, 2012 6:22 PM (0)
    nice folding@home benchmark.
    great work.
  • centosfan, June 4, 2012 10:14 PM (0)
    Why don't you include HP DL series and the Dell R series. Those two companies have about 70% market share on the 2U Inter server market. I don't understand why you would exclude them. Most companies wouldn't buy anything but a Dell, HP or and IBM.
  • pjkenned, June 5, 2012 7:41 PM (+1)
    Quote: centosfan: Why don't you include HP DL series and the Dell R series. Those two companies have about 70% market share on the 2U Inter server market. I don't understand why you would exclude them. Most companies wouldn't buy anything but a Dell, HP or and IBM.


    That is something that we are looking at. This was more of a look at what is out there for barebones kits. I totally agree that these types of comparisons would be great.
  • DVFinn, June 19, 2012 8:15 PM (0)
    I've been using Supermicro bare bones servers for a few years now and I love them. When it comes to performance per dollar there's no way to compare these systems with off the shelf dell or HP offerings. I've specced them out 20 ways and the bottom line is I can build nearly 2x the horsepower at significantly lower cost vs. a fully configured server from one of the big vendors. My newest SQL servers are 8xSSD Raid10 data sets and simple sata mirrors for the OS. 128GB RAM, dual, quad-core Xeons, adaptec 6805 controllers. About $5k each for the full build, and their chassis are so easy to work in the whole build takes less than 30 minutes before I'm loading the OS.
  • razor512, August 24, 2012 11:14 AM (0)
    Quote: utomo88: We need somebody to design new server which is powerful and can handle a lot of memory and harddisk and affordable price. all powerful server are expensive now. I believe market for cheap but powerful server are big, and no one is working on this area. I know the profit is not big, but by big quantity it mean big money too


    That is already done (but as more of a work around) build a standard PC.
    Many high end gaming motherboards work well in a server environment, and can easily handle a high traffic website.
    Most web hosting does not need a super powerful server (which is why virtualization is so popular). If you are running a relatively small business and are not doing anything that is hugely CPU bound (eg, rendering) then you can save a bit of money with a decent desktop PC.