How We Tested
Today's tests involve typical 1U server platforms. Supermicro sent along a new 1U SuperServer configured with two Intel Xeon E5-2690 v3 processors and 16 x 8 GB DDR4-2133 DIMMs from Samsung. We had a similar 1U Supermicro platform and pairs of Intel Xeon E5-2690 v1 and v2 processors to create a direct comparison. The Xeon E5-2690 sits at the high end of what eventually becomes mainstream. For example, companies like Amazon use the E5-2670 v1 and v2 quite extensively in their AWS EC2 compute platforms. The -2690 generally offers the same core count, just at a higher clock rate.
Intel also sent along a 2U "Wildcat Pass" server platform configured with two Xeon E5-2699 v3 samples, 8 x 16 GB registered DDR4 modules (one DIMM per channel) and two Intel SSD DC S3500 drives. The E5-2699 v3 is a massive processor. It wields a full 18 cores capable of addressing 36 threads through Hyper-Threading. Forty-five megabytes of shared L3 cache work out to 2.5 MB per core, and the whole configuration fits into a 145 W TDP.
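The E5-2699 v3's headline figures are easy to sanity-check. A quick sketch, using only the numbers quoted above:

```python
# Sanity check of the Xeon E5-2699 v3 figures quoted in the text.
cores = 18
l3_per_core_mb = 2.5

threads = cores * 2                 # Hyper-Threading: two threads per core
l3_total_mb = cores * l3_per_core_mb

print(threads)       # → 36
print(l3_total_mb)   # → 45.0
```

Thirty-six threads and 45 MB of shared L3, exactly as Intel's spec sheet lists.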
Naturally, this is going to represent a lower-volume, high-dollar server. But it's going to illustrate the full potential of Haswell-EP, too. We're using the Wildcat Pass server as our control for Intel's newest architecture.
Meanwhile, a Lenovo RD640 2U server operates as our control for Sandy Bridge-EP and Ivy Bridge-EP. It leverages 8 x 16 GB of registered DDR3 memory, totaling 128 GB. We dropped those SSD DC S3500s in there, too.
As we make our comparisons, keep a few points in mind. First, at the time of testing, DDR4 RDIMM pricing is absolutely obscene. Street prices are several times higher per gigabyte than DDR3. This will come down over time as manufacturing ramps up. But prohibitive expense did affect our ability to configure the servers with more than 128 GB.
We are focusing today's review on processor performance and power consumption. As a result, we are using two 240 GB SSD DC S3500s in a RAID 1 array. We did have a stack of trusty SanDisk Lightning 400 GB SLC SSDs available, but neither of our test platforms came with SAS connectivity. Although there are plenty of add-in controllers that would have done the job, there is clearly a market shift happening away from such configurations. Sticking with SATA-based SSDs kept the storage subsystem's power consumption relatively low, while leaning on a fairly common arrangement in servers reliant on shared network storage.
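The article doesn't specify how the mirror was built (onboard controller or software RAID). For reference, a Linux software RAID 1 across two SATA SSDs would look roughly like this; the device names and mount point are assumptions, not details from our setup:

```shell
# Illustrative config sketch only -- device names are assumptions.
# Mirror two SATA SSDs into a single RAID 1 array.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb

# Format and mount the resulting 240 GB mirrored volume.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/data
```

In a two-drive RAID 1, usable capacity equals one drive's capacity (240 GB here); the second drive is a full redundant copy.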
Bear in mind also that we're using 1U and 2U enclosures, each with a single server inside. The Xeon E5 series is often found in high-density configurations with multiple nodes per 1U, 2U, or 4U chassis. For instance, the venerable Dell C6100, based on Nehalem-EP and Westmere-EP, was extremely popular with large Web 2.0 outfits like Facebook and Twitter. Many of those platforms have been replaced by OpenCompute versions, but we expect many non-traditional designs to be popular with the E5-2600 v3 generation, especially given its power characteristics.