While 1U servers don’t need as much cooling as a blade server, says Gregg McKnight, CTO of IBM’s modular systems group, they still need cooling, and the design is very inefficient. Server manufacturers are cramming more and more components inside the chassis, and because the height and width are fixed, the cases are getting deeper and deeper. That makes cooling them an even bigger problem than before.
The fans sit at the back, pulling air through from the front, but by the time the cool air reaches them it has already absorbed heat from the components at the front.
“The components at the rear of the rack are getting the hottest air and that’s where thermal sensors are for detecting how fast the fans must run to cool the rack,” McKnight said.
The hotter the air, the faster the fans run, and the fans consume up to 80 W of power. “The power used by the fan is proportional to the cube of the fan speed, so if you want to double the fan speed you have to use eight times the power,” McKnight said.
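The cube law McKnight cites is the standard fan affinity law; a quick sketch of the arithmetic (the RPM figures are illustrative, not from the text):

```python
def fan_power(base_power_w, base_rpm, target_rpm):
    """Fan affinity law: power draw scales with the cube of fan speed."""
    return base_power_w * (target_rpm / base_rpm) ** 3

# Doubling the speed costs 2**3 = 8x the power, as McKnight describes.
print(fan_power(10.0, 6000, 12000))  # -> 80.0
```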
Turning the server sideways saves much of that power, because the air only has to travel 15” to reach the fans rather than 25”. The fans in the new iDataPlex rack servers draw just 6 W of power, and the fans in the entire rack average 100 W, while the fans in a competing rack of 1U servers typically consume an average of 1.2 kW. IBM has measured up to 40% less airflow impedance in the new system, which means less energy is wasted pulling air over the components at the front.
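Taking the rack-level fan figures quoted above at face value, the saving works out as follows (simple illustrative arithmetic):

```python
idataplex_fan_w = 100       # average fan power quoted for the iDataPlex rack
conventional_fan_w = 1200   # average quoted for a competing rack of 1U servers

saving = 1 - idataplex_fan_w / conventional_fan_w
print(f"fan power saving: {saving:.0%}")  # -> fan power saving: 92%
```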
The approach is also more efficient than drawing the air up or down the racks, McKnight claims, even though warm air rises naturally.
“The bottom servers preheat the air for the middle servers, and both heat it even more for the top servers,” McKnight said. “You wind up having to move a disproportionately larger amount of air through the rack because you’re moving it through nearly a seven-foot chamber in which the air is contained and preheated, so the server on top gets roasted and has a much shorter shelf life than the ones at the bottom.”
McKnight claims iDataPlex is anywhere from 25% to 40% more efficient than an equivalent rack of 1U servers and needs up to 40% less air conditioning. Part of the improvement comes from more efficient cooling, but IBM is also restricting configuration options to more power-efficient components. The power supply is rated at a lower wattage than most servers’ high-capacity power supplies, which McKnight views as excessive.
“1U systems are designed so the power supply can be used for the most egregious configurations: the hottest processors, the most memory, the most high-power PCI options. But in the Web 2.0 space our customers are more interested in energy efficiency,” McKnight said. “They choose a 50 W CPU instead of 80 or 130 W; they choose x8 DRAM instead of x4 so there are half as many components to draw power; [they choose] advanced memory buffers that halve power consumption.” A 350 W power supply would thus be a waste of money; even a power supply rated for 200 W might only be loaded to 40 or 50 W, which means it isn’t operating efficiently: as much as 30 or 40 W is lost just converting from one voltage to another.
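McKnight’s point about oversized supplies can be made concrete using the figures in the text (a load of roughly 45 W with 30 to 40 W lost in conversion); the efficiency formula is the standard output-over-input ratio:

```python
def psu_efficiency(load_w, conversion_loss_w):
    """Efficiency = useful output power / total input power."""
    return load_w / (load_w + conversion_loss_w)

# ~45 W delivered with ~35 W lost in conversion: barely 56% efficient.
print(f"{psu_efficiency(45, 35):.0%}")  # -> 56%
```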
Instead, IBM uses a 93% efficient power supply shared between two motherboards to balance operating capacity against efficiency, a design that fits up to 84U of servers into the 42U-footprint iDataPlex rack.
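At the quoted 93% efficiency, the conversion loss for one supply feeding two boards stays small; the per-board draw below is an assumed illustrative figure, not from the text:

```python
EFFICIENCY = 0.93      # quoted efficiency of the shared iDataPlex supply
board_draw_w = 150     # assumed draw per motherboard (illustrative)

output_w = 2 * board_draw_w       # one supply feeds two motherboards
input_w = output_w / EFFICIENCY   # power drawn from the line
loss_w = input_w - output_w       # dissipated in conversion

print(f"input {input_w:.0f} W, conversion loss {loss_w:.1f} W")
# -> input 323 W, conversion loss 22.6 W
```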
“We give up being able to support 230 W processors and the hottest configurations. We very willingly said that instead of optimizing for the tip of the iceberg we were going to optimize for the masses,” McKnight said. “Very few users need it, but everyone else pays the price in power consumption and in power supply cost.” Sharing the power supply cuts down on the number of cables and power distribution blocks needed. There are fewer components to fail, and although one failed power supply will bring down two motherboards at once, Web 2.0 applications are designed to cope with failures like this anyway.