Water cooling for Web 2.0
Using less power to start with means the iDataPlex needs less cooling – again, up to 40% less. Add the optional water-cooled rear-door heat exchanger and you can do away with air conditioning altogether. The exchanger fits on the back of the rack because using the rack’s longest dimension (47”) gives more surface for heat exchange, and McKnight says it’s far more efficient than cooling the air.
“In a data center the air conditioning is 50 feet away so you blow cool air at great expense of energy under the floor past all the cables and floor tiles,” McKnight said. “It’s like painting by taking a bucket of paint and throwing it into the air.”
The heat exchanger takes the cooled water or refrigerant that’s already in the air conditioning system directly to the rack through a hose, saving 20% to 60% in cooling costs. McKnight set up an iDataPlex rack in his lab with each server drawing around 390W, for a total rack dissipation of 32kW. The temperature was 72 degrees Fahrenheit at the front of the rack and 130 degrees behind it.
“We turned on the liquid to cool the exchanger and within 20 seconds the temperature at the rear of the rack dropped to about 60 degrees, which means the exchanger is cooling the environment around it,” McKnight said. “We had 10,752 servers being cooled entirely by the heat exchanger, and as long as you keep the total rack power at or below 32kW, you can remove all your air conditioning. Compare that to a 1U rack system with the same number of servers at 52kW per rack: you could have 14 air conditioning units that cost $100,000 each and still be unable to cool those servers.”
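For a sense of scale, here’s a rough back-of-the-envelope check of those lab figures, sketched in Python. The use of plain water rather than refrigerant and the 10-degree-Celsius coolant temperature rise are illustrative assumptions, not numbers from IBM.

    # Back-of-the-envelope check of the lab figures above.
    # Assumptions (not from the article): the coolant is plain water
    # and it warms by 10 C on its way through the rear-door exchanger.

    SERVER_POWER_W = 390      # per-server draw McKnight quotes
    RACK_POWER_W = 32_000     # total rack dissipation at the 32kW limit

    servers = RACK_POWER_W / SERVER_POWER_W
    print(f"Implied servers per rack: {servers:.0f}")  # ~82 at 390W each

    # Q = m_dot * c_p * delta_T  =>  m_dot = Q / (c_p * delta_T)
    C_P_WATER = 4186          # J/(kg*K), specific heat of water
    DELTA_T = 10              # K, assumed inlet-to-outlet temperature rise

    m_dot = RACK_POWER_W / (C_P_WATER * DELTA_T)  # required flow, kg/s
    print(f"Water flow to absorb 32kW: {m_dot:.2f} kg/s "
          f"(about {m_dot * 60:.0f} litres per minute)")

Under those assumptions, roughly 46 litres a minute through one hose carries the whole rack’s heat away, which is McKnight’s point: moving coolant to the rack is far cheaper than moving chilled air across the room.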
This is different from water-cooling a desktop or gaming PC, where a closed loop of liquid acts as a heat sink that draws heat out by contact, or a spray-cooling device does the same job; those only cool the CPU, usually to allow overclocking. You still need fans to cool components like memory and drives, and you still need to cool the air those fans draw over the components; the water loop just compensates for the disproportionate heat the overclocked processor produces – more than the fans are designed to cope with. Running the cooling into the individual iDataPlex servers wouldn’t make sense: plumbing every single server and CPU would be more expensive and not significantly more efficient.
If water cooling is so efficient, why isn’t it used more often in servers? “The biggest issue is an emotional one,” McKnight said. “People say ‘I don’t want water in my data center’ but you already have water near that data center for the air conditioning and you may have water on the floor above for the fire extinguishers. Time and time again I’ve spoken to people who have had water damage in their data center – when they say they don’t have water in the data center.”
Servers Like Cookie Sheets
With the back of the iDataPlex reserved for the heat exchanger, everything in the rack has to be serviced from the front. To make that easier, the power supplies and fans attach directly to the chassis of the rack and the servers slide in and out like baking trays in an oven, held in place by two thumb levers. 1U servers are often known as pizza boxes, and IBM has nicknamed these “cookie sheets.” The fans are grouped into packs of four, with a single power connector and latch; in a typical 1U server, fans are replaced individually, so each one needs its own power connector, latch mechanism and rail.
“If you’re buying 10,000 servers, with up to eight fans in a 1U server, you’re spending a lot of money up front for the opportunity to replace a single fan,” McKnight said. “Leaving out the extra power connectors makes the servers cheaper and makes up for having to replace all four fans if one fails. After enough cookie sheets and fans have failed to make it worthwhile, you can return them to IBM for servicing and replacement.”
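The economics McKnight sketches are simple to model: per-fan connectors, latches and rails are paid for up front on every one of those 10,000 servers, while four-fan packs only cost extra when a pack with three good fans gets swapped. A minimal sketch of that trade-off follows; every dollar figure and the failure rate are hypothetical, chosen only to show the shape of the calculation, not IBM’s numbers.

    # Hypothetical cost model for the fan-pack trade-off above.
    # All prices and the failure rate are illustrative assumptions.

    SERVERS = 10_000
    FANS_PER_SERVER = 8          # "up to eight fans in a 1U server"

    PER_FAN_HARDWARE = 2.00      # assumed connector + latch + rail per fan
    PER_PACK_HARDWARE = 2.50     # assumed single connector + latch per 4-fan pack
    FAN_COST = 5.00              # assumed price of one fan
    ANNUAL_FAILURE_RATE = 0.02   # assumed fraction of fans failing per year

    total_fans = SERVERS * FANS_PER_SERVER
    packs = total_fans // 4

    # Up-front hardware: individually serviceable fans vs. four-fan packs.
    individual_upfront = total_fans * PER_FAN_HARDWARE
    pack_upfront = packs * PER_PACK_HARDWARE

    # Ongoing cost: replacing a pack discards three working fans.
    failures_per_year = total_fans * ANNUAL_FAILURE_RATE
    individual_yearly = failures_per_year * FAN_COST
    pack_yearly = failures_per_year * 4 * FAN_COST

    print(f"Up-front: individual ${individual_upfront:,.0f} vs packs ${pack_upfront:,.0f}")
    print(f"Per year: individual ${individual_yearly:,.0f} vs packs ${pack_yearly:,.0f}")

With these made-up figures the packs save $110,000 up front and cost an extra $24,000 a year in discarded fans, so the simpler design stays ahead for several years, which is exactly the up-front-versus-service trade McKnight describes.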
The trays that the cookie sheets fit into can hold a motherboard, disks or PCI connectors for plug-in cards. The motherboards are industry-standard SSI form factors, so although IBM only offers Intel quad-core CPUs today, if demand for AMD chips like Barcelona returns, IBM can offer them. Disks can be 3.5” SAS, 3.5” SATA or twin 2.5” SAS drives for hot swapping, with software or hardware RAID. The drives attach directly to the servers, with up to eight drives connected to each server and Ethernet, InfiniBand or iSCSI connections from the rack. The iDataPlex rack has vertical space for 16U of switches fitted between the columns of servers, so you’re not sacrificing server space.
You can use these as building blocks to choose between 22 different configurations for the iDataPlex. IBM will pre-assemble the systems, which McKnight claims saves significantly on the cost of configuring and installing commodity servers in racks, on top of a price that’s 25% lower than an equivalent system built from 1U servers. “The system comes configured and pre-built,” McKnight said. “You’re not spending time unboxing servers and fixing them into the racks with screw guns and cabling them up and then spending money to debug any cabling faults.”