Third up is Intel's R2208GZ4GC "Grizzly Pass" kit, which the company touts as being highly customizable. To that end, the front of the chassis is partitioned off into three sections to facilitate different drive options. The kit came with both 10 GbE and LSI RAID controller options, which I removed for testing and price comparisons. Our test mule has eight 2.5" hot-swappable bays, but the chassis does allow for a second block of hot-swappable bays if you want to configure it for 16.
Like Supermicro and Tyan, Intel makes room for a slim optical drive. Most 2U enclosures designed with high density in mind sacrifice this space (along with front-panel I/O) to cram as many as 24 drives into the front of the chassis. All three of the samples sent to us include two front-panel USB ports and a slim optical bay, though. Whereas the Supermicro chassis also features a front-accessible DE-9 serial connector, Intel's solution includes an HD-15 VGA connector. Intel's front-panel connections are tailored for KVM.
It's pretty apparent that Intel is using a heavily customized implementation, and its PCB is absolutely huge. Officially, the S2600GZ4 motherboard is a proprietary 16.5" x 16.5" form factor. Just to give you an idea of this board's size, a typical quad-socket AMD G34-based platform with 32 DIMM slots fits in a 16.5" x 13" form factor. The benefit of such a large PCB is evident in the layout; Intel's enclosure is built to exploit the additional space.
There are five 80 mm fans in the middle of the chassis that blow air across the thermally sensitive components. A clear plastic duct guides air through the passive CPU heat sinks. Whereas Supermicro's shroud channeled air all the way through to the back of its enclosure, Intel employs a shorter duct, since its chassis uses much of the space around back for PCIe risers and power supplies.
Two 80 PLUS Platinum-rated 750 W power supplies connect directly to the motherboard. This is a significant difference between Intel's implementation and the competition from Tyan and Supermicro, which utilize an intermediate distribution board for power.
Both redundant power supplies are removed by pushing on the teal lever and pulling the handle back. Intel uses a side-by-side configuration and exhausts air from above the PSUs.
- Three 2P Xeon E5-2600 Platforms Compared: Intel, Supermicro, And Tyan
- The Rules, Contenders, And Test Setup
- Supermicro 6027R-N3RF4+: Layout And Overview
- Supermicro 6027R-N3RF4+: Layout And Overview, Continued
- Supermicro 6027R-N3RF4+: Management Features And Serviceability
- Tyan GN70-K7053: Layout And Overview
- Tyan GN70-K7053: Layout And Overview, Continued
- Tyan GN70-K7053: Management Features And Serviceability
- Intel R2208GZ4GC: Layout And Overview
- Intel R2208GZ4GC: Layout And Overview, Continued
- Intel R2208GZ4GC: Management Features And Serviceability
- Pricing, Warranty, And Support Comparison
- Benchmark Results: Adobe CS 5, 3ds Max, And Cinebench
- Benchmark Results: Compiling, Folding, And Euler
- Power Consumption And Noise Comparison
- Whose 2U Server System For Xeon E5 Is Best?





I agree. Just reduce it a little bit, but don't make it too hard to see.
Usually? The E5s absolutely crush AMD's best offerings. AMD's top of the line server chips are about equal in performance to Intel's last generation of chips, which are now more than two years old. It's even more lopsided than Sandy Bridge vs. Bulldozer.
As an AMD fan, I wish we could. But while Magny-Cours was competitive with the last gen Xeons, AMD doesn't really have anything that stacks up against the E5. In pretty much every workload, E5 dominates the 62xx or the 61xx series by 30-50%. The E5 is even price competitive at this point.
We'll just have to see how Piledriver does.
Having said that, I would suggest you include the expected PPD for a given TPF, since that is what folders look at when deciding on hardware. Or you could just devote 48 hours on each machine to generating actual results for F@H and donate those points to your F@H team (yes, Tom's has a team [40051], and visibility is our biggest problem).
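For reference, PPD is usually estimated from TPF with Folding@home's quick-return-bonus formula, where the final credit is the base credit multiplied by max(1, sqrt(k × deadline / work-unit completion time)). Here is a minimal sketch of that calculation; the project constants below (base credit, k factor, deadline, frames per work unit) are placeholders, since the real values vary per project.

```python
# Rough PPD estimate from a measured TPF, using the quick-return-bonus formula.
# The default project constants are placeholders -- real values vary per project.
import math

def estimate_ppd(tpf_minutes, base_credit=9405.0, k_factor=2.0,
                 deadline_days=6.0, frames_per_wu=100):
    """Estimate points per day for one continuously folding machine."""
    wu_days = tpf_minutes * frames_per_wu / (60.0 * 24.0)      # days per work unit
    bonus = max(1.0, math.sqrt(k_factor * deadline_days / wu_days))
    credit_per_wu = base_credit * bonus
    return credit_per_wu / wu_days                               # WUs/day * credit/WU

if __name__ == "__main__":
    for tpf in (2.0, 4.0, 8.0):                                  # minutes per frame
        print(f"TPF {tpf:>4.1f} min -> ~{estimate_ppd(tpf):,.0f} PPD")
```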
The issue is that other tech sites promote their teams, and we don't have a site that promotes ours. Even when F@H is mentioned, some people don't agree with it or will never want to participate. It's a mindset. However, it's a choice!
F@H on such a monster? Do the math and you'll see that after just one year of 24/7 operation you would rack up over 3 billion points, putting you in the top 10 for teams and the no. 1 spot for a single user.
That's assuming, of course, that you've forked out $20k for your monthly power bill to run that fully-stocked 42U rack and paid $240k to your utility company for the entire year. Then there's the cost of the hardware itself - around $26k for each 2U server, or around a cool $600,000.
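As a quick sanity check on those figures (taking the per-server and per-month numbers quoted above at face value), the arithmetic works out roughly like this:

```python
# Back-of-the-envelope check on the comment's figures: a 42U rack of 2U servers,
# hardware at ~$26k per node, and roughly $20k/month in power.
rack_units      = 42
server_units    = 2
cost_per_server = 26_000          # USD, as quoted in the comment
power_per_month = 20_000          # USD, as quoted in the comment

servers  = rack_units // server_units          # 21 nodes
hardware = servers * cost_per_server           # ~$546,000
power_yr = power_per_month * 12                # $240,000 per year

print(f"{servers} servers, ~${hardware:,} hardware, ${power_yr:,} power/year")
```

Hardware for 21 nodes comes to roughly $546,000, which is in the same ballpark as the "cool $600,000" quoted above.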
SPEND MONEY FAST
All powerful servers are expensive now.
I believe the market for cheap but powerful servers is big, and no one is working in this area.
I know the profit margin is not big, but in big quantities it still means big money.
The point is that memory is directly connected to one CPU only. Adding a second CPU doubles aggregate bandwidth, but it could actually hurt performance if the software isn't written carefully to localize data and manage affinity between threads and CPUs.
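To illustrate the kind of affinity management that comment is describing, here is a minimal Linux-only sketch. It assumes a hypothetical two-socket layout with cores 0-7 on socket 0 and cores 8-15 on socket 1 (check /sys/devices/system/node/ for the real topology), and it relies on Linux's default first-touch page placement to keep each worker's data on its local NUMA node.

```python
# Minimal sketch of keeping a worker's data local to one socket on Linux.
# The core-to-socket mapping below is an assumption; verify it on real hardware.
import os
from multiprocessing import Process

SOCKET_CPUS = {0: set(range(0, 8)), 1: set(range(8, 16))}

def worker(socket_id, n_bytes):
    # Pin this process to one socket's cores *before* touching its data.
    os.sched_setaffinity(0, SOCKET_CPUS[socket_id])
    # Under Linux's default first-touch policy, pages faulted in now are
    # allocated from this socket's local memory, so the worker avoids
    # paying the QPI hop to the remote node for its own working set.
    data = bytearray(n_bytes)
    for i in range(0, n_bytes, 4096):
        data[i] = 1            # touch each page to place it on the local node

if __name__ == "__main__":
    procs = [Process(target=worker, args=(s, 256 * 1024 * 1024)) for s in (0, 1)]
    for p in procs:
        p.start()
    for p in procs:
        p.join()
```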
great work.
That is something that we are looking at. This was more of a look at what is out there for barebones kits. I totally agree that these types of comparisons would be great.
That is already done (but more as a workaround): build a standard PC.
Many high end gaming motherboards work well in a server environment, and can easily handle a high traffic website.
Most web hosting does not need a super powerful server (which is why virtualization is so popular). If you are running a relatively small business and are not doing anything that is hugely CPU-bound (e.g., rendering), then you can save a bit of money with a decent desktop PC.