Expansion is handled through two riser cards, which accommodate up to six single-slot add-in devices in total. Intel also offers its own proprietary network and RAID cards. You can see the spot at the bottom-right corner of the rear panel that accepts NICs, such as a dual-port X540-based Intel 10 GbE upgrade module. Intel armed our machine with a dual-port 10 GbE controller, but we pulled it out to maintain parity with the other systems, per the rules of our round-up.
Like Supermicro's offering, the rear I/O panel does not use a standard shield design. Intel does this to accommodate a pair of what it calls PCIe 3.0 super-slots (24-lane slots that each break out into three x8 connectors) for expansion. Built-in I/O includes four 8P8C jacks driven by Intel's bridgeless I350 controller, three USB ports, a serial port, VGA output, and a dedicated KVM-over-IP port enabled by a small daughter card.
Like Supermicro, Intel uses a shroud to guide airflow around its CPUs and memory. Intel's cooler employs a much lower-profile design, though, that looks like it'd even work in a 1U enclosure. As we'll see in the benchmarks, both solutions are able to keep our Xeon E5 processors at their full multi-core Turbo Boost clock rates for our entire test period. Atop the Lexan air shroud, Intel provides two 2.5" mounting points probably best used for SSDs.
As we've mentioned, Intel sells a variety of proprietary add-on board options that let you install an upgrade without tying up a PCI Express slot. The options include 10 Gb Ethernet, LSI-based RAID controllers, and specialized management cards. In the picture above, you can see both the dedicated KVM-over-IP board and the LSI-based storage card. Neither Tyan nor Supermicro makes KVM-over-IP something you have to buy separately, and with so many servers shipping with that functionality already, it would have been nice to see Intel make it a standard feature rather than an upsell.
Intel's server system, like Supermicro's, exposes eight 3 Gb/s SAS ports via a pair of SFF-8087 connectors in the middle of the board. Given that this platform has eight 2.5" bays, 6 Gb/s connectivity would have made more sense, which is probably why Intel outfitted our review unit with the LSI SAS 2208-based RAID card.
- Three 2P Xeon E5-2600 Platforms Compared: Intel, Supermicro, And Tyan
- The Rules, Contenders, And Test Setup
- Supermicro 6027R-N3RF4+: Layout And Overview
- Supermicro 6027R-N3RF4+: Layout And Overview, Continued
- Supermicro 6027R-N3RF4+: Management Features And Serviceability
- Tyan GN70-K7053: Layout And Overview
- Tyan GN70-K7053: Layout And Overview, Continued
- Tyan GN70-K7053: Management Features And Serviceability
- Intel R2208GZ4GC: Layout And Overview
- Intel R2208GZ4GC: Layout And Overview, Continued
- Intel R2208GZ4GC: Management Features And Serviceability
- Pricing, Warranty, And Support Comparison
- Benchmark Results: Adobe CS 5, 3ds Max, And Cinebench
- Benchmark Results: Compiling, Folding, And Euler
- Power Consumption And Noise Comparison
- Whose 2U Server System For Xeon E5 Is Best?


I agree. Just reduce it a little bit but don't make it too hard to see
Usually? The E5s absolutely crush AMD's best offerings. AMD's top of the line server chips are about equal in performance to Intel's last generation of chips, which are now more than two years old. It's even more lopsided than Sandy Bridge vs. Bulldozer.
As an AMD fan, I wish we could. But while Magny-Cours was competitive with the last gen Xeons, AMD doesn't really have anything that stacks up against the E5. In pretty much every workload, E5 dominates the 62xx or the 61xx series by 30-50%. The E5 is even price competitive at this point.
We'll just have to see how Piledriver does.
Having said that, I would suggest you include the expected PPD for a given TPF, since that is what folders look at when deciding on hardware. Or you could just devote 48 hours on each machine to generating actual F@H results and donate those points to your F@H team (yes, Tom's has a team [40051], and visibility is our biggest problem).
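For anyone who wants to translate TPF into PPD, here is a minimal sketch of the commonly cited quick-return-bonus math. The base points, k factor, and deadline below are placeholder values; they vary from project to project.

```python
# Rough PPD estimate from TPF (time per frame) using the quick-return-bonus
# formula: points = base * max(1, sqrt(k * deadline / wu_time)).
# base_points, k_factor, and deadline_days are placeholders -- per-project values differ.
from math import sqrt

def estimate_ppd(tpf_minutes, base_points, k_factor, deadline_days):
    wu_days = tpf_minutes * 100 / 1440        # 100 frames per work unit
    bonus = max(1.0, sqrt(k_factor * deadline_days / wu_days))
    return base_points * bonus / wu_days      # points per WU * WUs per day

# Example with made-up project constants:
print(round(estimate_ppd(tpf_minutes=4.0, base_points=8955,
                         k_factor=2.0, deadline_days=6.0)))
```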
The issue is that other tech sites promote their teams, and we don't promote ours on the site. Even when we mention F@H, some people don't agree with it or will never want to participate. It is a mentality. However, it is a choice!
F@H on such a monster? Do the math and you'll see that after just one year of 24/7 operation you would rack up over 3 billion points, putting you in the top 10 for teams and in the No. 1 spot for a single user.
That's assuming, of course, that you've forked out $20k for your monthly power bill to run that fully-stocked 42U rack and paid $240k to your utility company for the entire year. Then there's the cost of the hardware itself - around $26k for each 2U server, or around a cool $600,000.
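The back-of-the-envelope arithmetic behind those figures, using the round numbers quoted above, works out roughly like this:

```python
# Rack cost sketch using the figures from the comments above:
# a 42U rack of 2U servers, ~$26k per server, ~$20k/month in power.
servers_per_rack = 42 // 2                 # 21 dual-E5 2U systems
hardware_cost = servers_per_rack * 26_000  # ~$546k, "around a cool $600,000"
annual_power = 20_000 * 12                 # ~$240k/year to the utility

print(f"{servers_per_rack} servers, ~${hardware_cost:,} in hardware, "
      f"~${annual_power:,}/year in power")
```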
SPEND MONEY FAST
All powerful servers are expensive now.
I believe the market for cheap but powerful servers is big, and no one is working in this area.
I know the margins are not big, but in big quantities that still means big money.
The point is that memory is directly connected to one CPU only. Adding a second CPU doubles aggregate bandwidth, but it could actually hurt performance if the software isn't written carefully to localize data and manage affinity between threads and CPUs.
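As a rough illustration, here's a minimal, Linux-only sketch of pinning a process to one socket's cores so its memory allocations stay node-local; the core numbering is an assumption about the machine's topology.

```python
# Minimal, Linux-only sketch: pin this process to one socket's cores so the
# OS's first-touch policy keeps its memory on that CPU's local NUMA node.
# Cores 0-7 = socket 0 is an assumption; check `lscpu` on the actual machine.
import os

socket0_cores = set(range(0, 8))
os.sched_setaffinity(0, socket0_cores)   # 0 = current process

print("Running on cores:", sorted(os.sched_getaffinity(0)))
```

From the shell, `numactl --cpunodebind=0 --membind=0 <command>` achieves a similar effect without touching the code.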
Great work.
That is something that we are looking at. This was more of a look at what is out there for barebones kits. I totally agree that these types of comparisons would be great.
That is already done, though more as a workaround: build a standard PC.
Many high-end gaming motherboards work well in a server environment and can easily handle a high-traffic website.
Most web hosting does not need a super powerful server (which is why virtualization is so popular). If you are running a relatively small business and are not doing anything that is hugely CPU-bound (e.g., rendering), then you can save a bit of money with a decent desktop PC.