High-Powered Web Servers from HP and IBM

IBM: Turning Servers Sideways

While 1U servers don’t need as much cooling as a blade server, says Gregg McKnight, CTO of IBM’s modular systems group, they still need cooling, and the traditional design is very inefficient. Server manufacturers are cramming more and more components inside the chassis, and because a 1U server’s height and width are fixed, the cases are getting deeper and deeper. That makes cooling them an even bigger problem than it was before.

The fans sit at the back of the chassis, pulling air through from the front, but by the time the cool air reaches them it has already absorbed heat from the components in front of it.

“The components at the rear of the rack are getting the hottest air and that’s where thermal sensors are for detecting how fast the fans must run to cool the rack,” McKnight said.

The hotter the air, the faster the fans run, and the fans can consume up to 80 W of power. "The power used by the fan is proportional to the cube of the fan speed, so if you want to double the fan speed you have to use eight times the power," McKnight said.
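To put numbers on that cube law, here is a minimal sketch (our arithmetic, not IBM’s), assuming the 80 W figure above is a fan’s draw at full speed:

```python
# Fan power rises with the cube of fan speed (P is proportional to n^3),
# per McKnight. The 80 W full-speed draw is the figure from the article;
# everything else is illustrative.

FULL_SPEED_POWER_W = 80.0  # assumed full-speed draw of one server fan

def fan_power_w(speed_fraction: float) -> float:
    """Estimated fan power at a given fraction of full speed."""
    return FULL_SPEED_POWER_W * speed_fraction ** 3

for pct in (50, 80, 100):
    print(f"{pct:3d}% speed -> {fan_power_w(pct / 100):5.1f} W")
# 50% speed ->  10.0 W
# 80% speed ->  41.0 W
# 100% speed ->  80.0 W
```

Halving fan speed cuts fan power by a factor of eight, which is why cooler intake air, and therefore slower fans, saves so much energy.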

Turning the server sideways saves a lot of that power, because the air only has to travel 15” to reach the fans rather than 25”. The fans in the new iDataPlex rack servers draw just 6 W of power, and IBM puts the rack’s average fan power consumption at 100 W, where the fans in a comparable rack of 1U servers typically draw an average of 1.2 kW. IBM has also measured up to 40% less airflow impedance in the new system, which means less energy is wasted pulling air over the components at the front.

The approach is also more efficient than drawing the air up or down the racks, McKnight claims, even though warm air rises naturally.

“The bottom servers preheat the air for the middle servers and both heat it even more for the top servers,” McKnight said. “You wind up having to move a disproportionately larger amount of air through the rack because you’re moving it through nearly a seven-foot chamber in which the air is contained and preheated, so the server on top gets roasted and has a much shorter shelf life than the ones at the bottom.”

McKnight claims iDataPlex is anywhere from 25% to 40% more efficient than an equivalent rack of 1U servers, and needs up to 40% less air conditioning. Part of the improvement comes from more efficient cooling, but IBM is also restricting configuration options to more power-efficient components. The power supply is rated at a lower wattage than the high-capacity supplies in most servers, which McKnight views as excessive.

"1U systems are designed so the power supply can be used for the most egregious configurations; the hottest processors, the most memory, most high power PCI options. But in the Web 2.0 space our customers are more interested in energy efficiency," McKnight said. "They choose a 50W CPU instead of 80 or 130 Watts; they choose eight-chip DRAM instead of four so there are half as many components to draw power; [they choose] advanced memory buffers halve power consumption." A 350-W power supply would thus be a waste of money; even a power supply rated for 200 W could be used for as little as 40 or 50 W, which means it isn’t operating efficiently. As much as 30 or 40 W is lost just converting from one voltage to another.

Instead, IBM uses a single 93%-efficient power supply shared between two motherboards, balancing operating capacity against efficiency, an arrangement that helps IBM fit the equivalent of 84U of servers into the 42U iDataPlex rack.
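A back-of-the-envelope comparison of what the shared supply means at the wall; note that the 100 W per-board load and the 62% single-supply efficiency are our illustrative assumptions, not IBM figures:

```python
# Wall power for two boards on one shared 93%-efficient supply versus
# each board on its own lightly loaded supply. Only the 93% figure
# comes from the article; the rest are assumptions for illustration.

BOARD_LOAD_W = 100.0   # assumed DC draw per motherboard
SHARED_EFF = 0.93      # shared-supply efficiency, from the article
SEPARATE_EFF = 0.62    # assumed efficiency of a lightly loaded supply

shared_w = 2 * BOARD_LOAD_W / SHARED_EFF
separate_w = 2 * BOARD_LOAD_W / SEPARATE_EFF

print(f"shared supply:     {shared_w:6.1f} W at the wall")
print(f"separate supplies: {separate_w:6.1f} W at the wall")
print(f"difference:        {separate_w - shared_w:6.1f} W per pair of boards")
```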

“We give up being able to support 230 W processors and the hottest configurations. We very willingly said that instead of optimizing for the tip of the iceberg we were going to optimize for the masses,” McKnight said. “Very few users need it, but everyone else pays the price in power consumption and in power supply cost.” Sharing the power supply also cuts down on the number of cables and power distribution blocks needed. There are fewer components to fail, and although one failed power supply will bring down two motherboards at once, Web 2.0 applications are designed to cope with failures like this anyway.

  • pelagius
    No offense, but this article is VERY poorly edited. It's almost to the point where it's hard to read.

    "then IBM can off them." I assume you ment to say the IBM can offer them. There are a lot more mistakes as well...
  • pelagius
    “lots of interesting I things can’t talk about now."????????
  • This article is full of factual errors. The IBM servers aren't "turned sideways". They're simply shallow-depth servers in a side-by-side configuration. They're still cooled front to back like a traditional server. The entire rack's power consumption isn't 100 watts. It's based on configuration and could easily run 25-30 kW. And comparable servers don't necessarily draw more power. IBM has simply cloned what Rackable Systems has been doing for the past 8 years. Dell and HP also caught on to the single power supply, non-redundantly configured servers over the past few years. IBM certainly has a nice product, but it's not revolutionary.
  • You might want to remind your readers that Salesforce.com made their switch right after Michael Dell joined their board... their IT folks think Dell's quality is horrible, but they were forced to use them.
  • It seems that Salesforce.com needs to do some research on blade systems other than Dell's. HP and IBM both have very good blade solutions:

    A. IBM and HP blades take less power and cooling than 14-16 1U servers.
    B. Most data centers can only get 10 1U servers per rack right now because power is constrained.

    It's Salesforce that just blindly buys crappy gear and then justifies it by saying "well, blades don't have the best technology, so I'll go and waste millions on Dell's servers." (Way to help out your biz.)

    If they would just say "I'm too lazy to do any real research, and Dell buys me lunch, so I buy hardware from them," it would be truthful, and then we would not get to blog about how incorrect his statements are.