Power Consumption
There is no way to overstate the impact of power consumption and efficiency on the modern datacenter. Power use is the killer of all things bottom-line. Datacenters worldwide consumed an estimated 416.2 terawatt-hours of electricity in 2014, more than the individual consumption of 182 of the world's 192 countries. Even more alarming, datacenter power use is quadrupling every two years, and a recent Japanese study concluded that at today's rate, the country's datacenters will consume its entire electricity output by 2030.
The pressure to reduce power consumption is an overbearing burden, but at the same time, the demands for more processing power are increasing every year. Deploying more efficient CPUs simultaneously increases performance per watt, frees up floor space and relaxes cooling requirements. All three of those variables function as multipliers that can either reduce or inflate datacenter expenditures dramatically.
Intel took several important steps to reduce power consumption with its previous-generation Haswell-EP products, including an on-package power delivery system (the Fully Integrated Voltage Regulator, or FIVR). It also added per-core P-state control and moved to DDR4 memory, which helped reduce platform power use, further improving efficiency.
The Xeon E5-2600 v4 family offers these same advantages, but also introduces Hardware Controlled Power Management (HWPM). Under normal circumstances, servers accept hints from the operating system that indicate when it is appropriate to adjust the power state. Unfortunately, this process is slow in relation to how fast a CPU can make decisions internally, and not all operating systems and applications make use of the feature.
HWPM turns power management over to the CPU, offering up to four power profiles that optimize the server for each use-case, which you can specify in the BIOS. The CPU then adjusts power state settings dynamically, depending on the chosen profile. Not only is this process faster, but it's also compatible with all operating systems (the CPU simply ignores the OS hints). Intel is in the early stages of HWPM development, but expects the feature to evolve rapidly.
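On Linux, hardware-controlled P-state support shows up as `hwp`-prefixed flags in /proc/cpuinfo, so you can quickly check whether a CPU advertises the capability. The snippet below is a minimal sketch; the helper name `parse_hwp_flags` is ours, not part of any standard tool:

```python
def parse_hwp_flags(cpuinfo_text):
    """Return the set of hwp-related flags found in a /proc/cpuinfo dump.

    An empty set means the CPU (or kernel) does not advertise
    hardware-controlled P-states.
    """
    for line in cpuinfo_text.splitlines():
        if line.startswith("flags"):
            flags = line.split(":", 1)[1].split()
            return {f for f in flags if f.startswith("hwp")}
    return set()


if __name__ == "__main__":
    # Read the live CPU flags on a Linux host.
    with open("/proc/cpuinfo") as f:
        print(parse_hwp_flags(f.read()))
```

On a Broadwell-EP system with the feature enabled you would expect to see flags such as `hwp` and `hwp_notify` in the output.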
Linux-Bench Power Consumption
We measured power consumption during the Linux-Bench script, which offers a useful comparison point for each system as the test progresses. The first slide shows all of the test servers in one image; the two additional slides paint a clearer picture.
The Supermicro platform is unsurprisingly more efficient than Intel's software development platform. After all, it's a production-class system with 80 PLUS Titanium-rated PSUs.
The Xeon E5-2699 v3 CPUs consume more power than our Broadwell-EP samples in the same Intel development platform, which speaks volumes given their similar core count. The Xeon E5-2643 v3 consumes less power by virtue of its lower core count. This is an important consideration; it is not wise to provision CPU resources that exceed the workload's requirements. It's better to pick the right CPU for your application.
Power Load And Idle
We measured peak power consumption during a Linpack run to characterize each platform's use. Again, Supermicro's configuration is more efficient than Intel's in both maximum and idle consumption. We also recorded the watt-hours consumed during the entire Linux-Bench script, and the Supermicro platform again told a better efficiency story. The second-gen Xeon E5s do use less power, but they're also a lot slower.
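Converting logged power readings into the watt-hour figures used above is straightforward numeric integration. This is a hedged sketch of the arithmetic, not our actual logging tooling; the function name and sampling scheme are assumptions:

```python
def watt_hours(samples, interval_s):
    """Integrate evenly spaced power samples (in watts) into watt-hours.

    Uses the trapezoidal rule: each pair of adjacent samples contributes
    its average power times the sampling interval, in joules, and
    3600 joules equal one watt-hour.
    """
    if len(samples) < 2:
        return 0.0
    joules = sum((a + b) / 2.0 * interval_s
                 for a, b in zip(samples, samples[1:]))
    return joules / 3600.0
```

For example, a steady 250 W draw sampled once per second over a one-hour benchmark run works out to 250 watt-hours, which is the kind of whole-run figure we report here.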