
Intel R2208GZ4GC: Layout And Overview, Continued

Three Xeon E5 Server Systems From Intel, Tyan, And Supermicro

Expansion slots are provided by two riser cards, which together accommodate up to six single-slot add-in devices. Intel also offers its own proprietary network and RAID cards. The spot at the bottom-right corner of the rear panel accepts NICs, such as a dual-SFP+ X540-based Intel 10 GbE upgrade. Intel shipped our machine with a dual-port 10 GbE controller installed, but we pulled it out to maintain parity with the other systems in this round-up.

Like Supermicro's offering, the rear I/O panel does not use a standard shield design. Intel does this to accommodate a pair of what it calls PCIe 3.0 super-slots (24-lane slots, each enabling three x8 connectors) for expansion. Built-in I/O includes four 8P8C jacks driven by Intel's bridgeless i350 controller, three USB ports, a serial-over-LAN port, VGA output, and a dedicated KVM-over-IP port enabled by a small daughter card.

Like Supermicro, Intel uses a shroud to guide airflow around its CPUs and memory. Intel's cooler employs a much lower-profile design, though, one that looks like it would even work in a 1U enclosure. As we'll see in the benchmarks, both solutions are able to keep our Xeon E5 processors at their full multi-core Turbo Boost clock rates for the entire test period. Atop the Lexan air shroud, Intel provides two 2.5" mounting points, probably best used for SSDs.

As we've mentioned, Intel sells a variety of proprietary add-on board options that let you install an upgrade without occupying a PCI Express slot. Options include 10 GbE networking, LSI-based RAID controllers, and specialized management cards. In the picture above, you can see both the dedicated KVM-over-IP board and an LSI-based storage card. Neither Tyan nor Supermicro makes KVM-over-IP something you have to buy separately, and with so many servers shipping with that functionality already, it would have been nice to see Intel make it a standard feature rather than an upsell.

Intel's server system, like Supermicro's, exposes eight 3 Gb/s SAS ports via a pair of SFF-8087 connectors in the middle of the board. Given that this platform has eight 2.5" bays, 6 Gb/s connectivity would have made more sense, which is probably why Intel outfitted our review unit with the LSI SAS 2208-based RAID card.

Comments
  • mayankleoboy1, May 30, 2012 4:53 AM
    the charts are looking strange. they need to be reduced in size a bit....
  • EzioAs, May 30, 2012 5:55 AM
    Quote:
    the charts are looking strange. they need to be reduced in size a bit....


    I agree. Just reduce it a little bit but don't make it too hard to see
  • dogman_1234, May 30, 2012 6:47 AM
    Cool. Now, can we compare these to Opteron systems?
  • TheBigTroll, May 30, 2012 11:06 AM
    no comparison needed. intel usually wins
  • willard, May 30, 2012 2:08 PM
    Quote (TheBigTroll):
    no comparison needed. intel usually wins

    Usually? The E5s absolutely crush AMD's best offerings. AMD's top of the line server chips are about equal in performance to Intel's last generation of chips, which are now more than two years old. It's even more lopsided than Sandy Bridge vs. Bulldozer.
  • Malovane, May 30, 2012 2:09 PM
    Quote (dogman_1234):
    Cool. Now, can we compare these to Opteron systems?


    As an AMD fan, I wish we could. But while Magny-Cours was competitive with the last gen Xeons, AMD doesn't really have anything that stacks up against the E5. In pretty much every workload, E5 dominates the 62xx or the 61xx series by 30-50%. The E5 is even price competitive at this point.

    We'll just have to see how Piledriver does.

  • jaquith, May 30, 2012 2:51 PM
    Hmm... in comparison, my vote is the Dell PowerEdge R720 ( http://www.dell.com/us/business/p/poweredge-r720/pd?oc=bectj3&model_id=poweredge-r720 ); it's better across the board, i.e. no comparison. None of this 'testing' is applicable to these servers.
  • lilcinw, May 30, 2012 6:04 PM
    Finally we have some F@H benches!! Thank you!

    Having said that I would suggest you include expected PPD for the given TPF since that is what folders look at when deciding on hardware. Or you could just devote 48 hours from each machine to generate actual results for F@H and donate those points to your F@H team (yes Tom's has a team [40051] and visibility is our biggest problem).
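For readers unfamiliar with the conversion, folders usually estimate points per day (PPD) from time per frame (TPF) using the quick-return bonus formula. The C sketch below is a minimal example of that calculation; the base credit, k-factor, and deadline values are illustrative assumptions, not real work-unit parameters.

```c
#include <math.h>
#include <stdio.h>

/* Estimate Folding@home points per day (PPD) from time per frame (TPF)
 * using the quick-return bonus (QRB) formula. The base credit, k-factor,
 * and deadline below are illustrative values, not real work-unit data. */
int main(void)
{
    const double tpf_seconds   = 300.0;   /* measured time per frame */
    const double frames_per_wu = 100.0;   /* standard WU length */
    const double base_credit   = 8000.0;  /* assumed base points */
    const double k_factor      = 2.0;     /* assumed QRB k-factor */
    const double deadline_days = 6.0;     /* assumed timeout */

    double wu_days = tpf_seconds * frames_per_wu / 86400.0;
    double bonus   = sqrt(k_factor * deadline_days / wu_days);
    if (bonus < 1.0)
        bonus = 1.0;                      /* QRB never reduces credit */
    double credit  = base_credit * bonus;
    double ppd     = credit / wu_days;    /* WUs/day times credit/WU */

    printf("TPF %.0f s -> %.0f points per WU, %.0f PPD\n",
           tpf_seconds, credit, ppd);
    return 0;
}
```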
  • dogman_1234, May 30, 2012 6:46 PM
    Quote (lilcinw):
    Finally we have some F@H benches!! Thank you! Having said that I would suggest you include expected PPD for the given TPF since that is what folders look at when deciding on hardware. Or you could just devote 48 hours from each machine to generate actual results for F@H and donate those points to your F@H team (yes Tom's has a team [40051] and visibility is our biggest problem).

    The issue is that other tech sites promote their teams; our site does not promote its own. Even when F@H is mentioned, some people do not agree with it or will never want to participate. It is a mentality. However, it is a choice!
  • lilcinw, May 30, 2012 6:58 PM
    I don't expect promotion at this point, just basic recognition would be appreciated.
  • Luscious, May 30, 2012 9:05 PM
    I've seen a few 2U 4-node blade servers from Asus and Supermicro running dual-socket E5s - just the thought of populating a single 42U rack with those things makes the mind boggle (168 processors, 1344 cores, 2688 threads, 42TB of total RAM and 1008TB of storage).

    F@H on such a monster? Do the math and you'll see that just after one year of 24/7 operation you would rack up over 3 billion points, putting you in the top 10 for teams and no.1 spot for single user.

    That's assuming, of course, that you've forked out $20k for your monthly power bill to run that fully-stocked 42U rack and paid $240k to your utility company for the entire year. Then there's the cost of the hardware itself - around $26k for each 2U server, or around a cool $600,000.

    SPEND MONEY FAST
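Those figures do add up. Here is a quick C sanity check, assuming 21 2U chassis per 42U rack, four nodes per chassis, and two eight-core, Hyper-Threaded E5s per node; the per-node RAM and storage values are inferred from the quoted totals, not from any vendor spec.

```c
#include <stdio.h>

/* Sanity-check the fully populated 42U rack figures quoted above.
 * Per-node RAM and storage are inferred from the totals, not spec'd. */
int main(void)
{
    const int chassis_per_rack  = 42 / 2;  /* 2U chassis in a 42U rack */
    const int nodes_per_chassis = 4;
    const int sockets_per_node  = 2;
    const int cores_per_cpu     = 8;       /* eight-core Xeon E5 */
    const int threads_per_core  = 2;       /* Hyper-Threading */
    const int ram_gb_per_node   = 512;     /* inferred from 42 TB total */
    const int tb_per_node       = 12;      /* inferred from 1008 TB total */

    int nodes   = chassis_per_rack * nodes_per_chassis;   /* 84 */
    int cpus    = nodes * sockets_per_node;               /* 168 */
    int cores   = cpus * cores_per_cpu;                   /* 1344 */
    int threads = cores * threads_per_core;               /* 2688 */

    printf("%d nodes, %d CPUs, %d cores, %d threads\n",
           nodes, cpus, cores, threads);
    printf("%d TB RAM, %d TB storage\n",
           nodes * ram_gb_per_node / 1024, nodes * tb_per_node);
    return 0;
}
```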
  • utomo88, May 31, 2012 12:43 AM
    We need somebody to design a new server that is powerful, can handle a lot of memory and hard disks, and comes at an affordable price.
    All powerful servers are expensive now.
    I believe the market for cheap but powerful servers is big, and no one is working in this area.
    I know the profit is not big, but big quantity means big money too :)
  • bit_user, May 31, 2012 2:45 AM
    It would be cool to test both an E5 Xeon and a Sandy Bridge-E at the same clock speed, in the same C602 motherboard (Supermicro's X9SRA), to see whether DDIO has any performance impact on 3D and GPGPU benchmarks.
  • bit_user, May 31, 2012 8:11 AM
    Also, how about measuring the scaling from 1x 4-core E5 Xeon -> 1x 8-core E5 Xeon -> 2x 4-core E5 Xeons?

    The point is that memory is directly connected to only one CPU. Adding a second CPU doubles aggregate bandwidth, but it could actually hurt performance if the software isn't written carefully to localize data and manage affinity between threads and CPUs.
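bit_user's locality point is worth illustrating. The sketch below shows one common way Linux software pins a worker thread to a single socket's cores so that, under the default first-touch policy, its memory stays on that socket's DIMMs. The mapping of cores 0-7 to socket 0 is an assumption for the example and varies by platform.

```c
#define _GNU_SOURCE
#include <pthread.h>
#include <sched.h>
#include <stdio.h>

/* Pin the calling thread to cores 0-7, assumed here to be socket 0.
 * Keeping a thread on one socket avoids slower remote (cross-QPI)
 * memory accesses on a two-socket Xeon E5 board. */
static void pin_to_socket0(void)
{
    cpu_set_t set;
    CPU_ZERO(&set);
    for (int core = 0; core < 8; core++)   /* assumed socket-0 cores */
        CPU_SET(core, &set);

    int rc = pthread_setaffinity_np(pthread_self(), sizeof(set), &set);
    if (rc != 0)
        fprintf(stderr, "pthread_setaffinity_np failed: %d\n", rc);
}

static void *worker(void *arg)
{
    (void)arg;
    pin_to_socket0();
    /* Allocate and initialize working memory from this thread, so the
     * first-touch NUMA policy places the pages on socket 0. */
    return NULL;
}

int main(void)
{
    pthread_t t;
    pthread_create(&t, NULL, worker, NULL);
    pthread_join(t, NULL);
    return 0;
}
```

Build with `gcc -pthread`; tools like numactl or libnuma offer the same control without hard-coding core numbers.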
  • _zxzxzx_, May 31, 2012 2:44 PM
    Would be nice to see how these compare to the AMD systems.
  • saturn85, June 2, 2012 6:22 PM
    nice folding@home benchmark.
    great work.
  • centosfan, June 4, 2012 10:14 PM
    Why don't you include the HP DL series and the Dell R series? Those two companies have about 70% market share in the 2U Intel server market. I don't understand why you would exclude them. Most companies wouldn't buy anything but Dell, HP, or IBM.
  • pjkenned, June 5, 2012 7:41 PM
    Quote (centosfan):
    Why don't you include the HP DL series and the Dell R series? Those two companies have about 70% market share in the 2U Intel server market. I don't understand why you would exclude them. Most companies wouldn't buy anything but Dell, HP, or IBM.


    That is something that we are looking at. This was more of a look at what is out there for barebones kits. I totally agree that these types of comparisons would be great.
  • DVFinn, June 19, 2012 8:15 PM
    I've been using Supermicro barebones servers for a few years now and I love them. When it comes to performance per dollar, there's no way to compare these systems with off-the-shelf Dell or HP offerings. I've specced them out 20 ways, and the bottom line is that I can build nearly 2x the horsepower at significantly lower cost vs. a fully configured server from one of the big vendors. My newest SQL servers are 8x SSD RAID 10 data sets and simple SATA mirrors for the OS: 128GB of RAM, dual quad-core Xeons, and Adaptec 6805 controllers. About $5k each for the full build, and their chassis are so easy to work in that the whole build takes less than 30 minutes before I'm loading the OS.
  • razor512, August 24, 2012 11:14 AM
    Quote (utomo88):
    We need somebody to design a new server that is powerful, can handle a lot of memory and hard disks, and comes at an affordable price. All powerful servers are expensive now. I believe the market for cheap but powerful servers is big, and no one is working in this area. I know the profit is not big, but big quantity means big money too


    That is already done (but as more of a workaround): build a standard PC.
    Many high-end gaming motherboards work well in a server environment, and can easily handle a high-traffic website.
    Most web hosting does not need a super-powerful server (which is why virtualization is so popular). If you are running a relatively small business and are not doing anything that is hugely CPU-bound (e.g., rendering), then you can save a bit of money with a decent desktop PC.