Want to get the most processing power possible in your data center to run cloud computing and Web 2.0 apps? HP is introducing the ProLiant BL2x220c G5 server blade today, which doubles the processing density by putting two servers into each half-height blade. Using Intel Xeon 5400 quad-core processors, you can put up to 1024 cores and 2 terabytes of RAM in 128 servers in a 42U c-Class rack – that’s 12.3 teraflops in eight square feet.
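The arithmetic behind those headline numbers is easy to check. The totals (128 servers, 1024 cores, 12.3 teraflops) come from HP; the 3.0 GHz clock and the figure of four double-precision FLOPs per core per cycle are assumptions used here to show the totals are self-consistent, not figures from the announcement.

```python
# Sanity-check the quoted rack totals: 128 servers, two sockets each,
# quad-core Xeon 5400s. Clock speed and FLOPs/cycle are assumptions.
servers_per_rack = 128
sockets_per_server = 2
cores_per_socket = 4

total_cores = servers_per_rack * sockets_per_server * cores_per_socket
print(total_cores)  # 1024

clock_ghz = 3.0          # assumed top-bin Xeon 5400 clock
flops_per_cycle = 4      # assumed double-precision FLOPs per core per cycle
teraflops = total_cores * clock_ghz * flops_per_cycle / 1000
print(round(teraflops, 1))  # 12.3
```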
Fitting two servers into a single blade means leaving some things out, Iain Stephen, HP’s vice president for industry standard servers, said.
“The memory, the drives, the processors and the heatsinks are the things that take up the space in a server,” Stephen said. “We stripped off the things customers tell us they don’t value: we stripped off the hot pluggable storage; we stripped off the storage redundancy; we reduced the memory footprint and we have a smaller number of DIMM sockets. When a customer looks at the 220c, they may think it’s underspecified. But on the connectivity side we’ve enhanced things; we can add InfiniBand or high-speed Ethernet. It’s a balance. If I already boot from network-attached storage, if I’m running an app where four DIMMs are sufficient, if I can compromise on local storage and the memory footprint – then I get the processor density.”
HP expects customers to use the dual Gigabit Ethernet network interface and the optional x8 PCI Express mezzanine socket, which supports 4x double-data-rate InfiniBand, to connect to storage arrays like HP’s petabyte-scale ExDS9100 rather than putting storage inside the rack. Making room for more processors creates a much more efficient system, Stephen said. “You have to balance the amount of processing per square foot and the power requirements. You can either drive towards ultimate density or the ultimate in efficiency, but with the 220c you get three times the density of a 1U rack,” Stephen said. “We use the same power supplies, the same fans and the same chassis, but we double up the density and hopefully get 60% better performance per watt.”
Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers. In HP’s own tests using the SPECjbb2005 benchmark to measure business operations per second, the BL2x220c delivers 1,582.73 bops/watt compared with 958.86 bops/watt for the PowerEdge.
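Taken at face value, HP's published figures actually work out to a little more than the claimed 60%; a quick check:

```python
# Ratio of HP's published SPECjbb2005 results (bops per watt).
hp_bops_per_watt = 1582.73
dell_bops_per_watt = 958.86

advantage_pct = (hp_bops_per_watt / dell_bops_per_watt - 1) * 100
print(f"{advantage_pct:.1f}%")  # roughly 65%, a bit above the claimed 60%
```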
The approach does share similarities with IBM’s iDataPlex system for Web 2.0 computing, Stephen said. “There are only so many things you can flex in the x86 architecture,” Stephen said. “You can flex the I/O, the processing, the number of sockets and cores, the memory – these are the core technologies. We’re flexing the same number of things but the way we deliver the balance is slightly different to the way IBM delivers iDataPlex.”
HP isn’t adding water cooling or other extreme measures to the new blades. Instead, it relies on the c-Class chassis’ features, such as the ability to turn off four or five of its six power supplies so the remaining supplies deliver power at 90% efficiency. The c-Class chassis also uses 10 Active Cool fans, based on the design of jet engines for radio-controlled model aircraft; they move air at up to 166 miles per hour and are more efficient than putting fans in each individual server. Initially, HP is offering only Intel Xeon 5400 quad-core or Xeon 5200 dual-core processors. HP will put AMD’s quad-core Barcelona processors in other ProLiant servers, and Stephen said there could be a dual-server Barcelona blade if there’s demand. “Intel has had a performance advantage since May last year, so the majority demand is for Intel processors,” Stephen said. “If Barcelona delivers - and I think it probably will - I expect customer demand to split between the two again.”
Most customers for the BL2x220c blades will be businesses running cloud computing and Web 2.0 applications, or high-performance computing systems where fitting in thousands of servers at a time is critical. But with prices starting at $6,349, Stephen predicted the blades will appeal to some smaller companies as well. “There will be small business customers who look at this and say ‘we’re already using a small storage network, and these are ideal to use as file and print servers, Web servers and application servers,’” Stephen said. One of the first systems, though, will go to special effects company WETA Digital for use on films like James Cameron’s “Avatar,” “Neon Genesis Evangelion” and the “Halo” adaptation.
Two servers in each blade, four cores in each server: the HP ProLiant BL2x220c G5 fits in twice as much processing for Web 2.0.

Right, compare a blade to a classical 1U or 2U server. Don't compare it to Dell's blade servers, because they'll smoke the HPs eight ways to Sunday.
Seems good, but very expensive.
The 1955 is not a rack-mount server; that would be the 1950. Ten 1955s fit in a 7U chassis. It's not Dell's latest product, but it is in fact a blade.
It would be a more equal comparison if they had chosen the Dell M600 or M605, which are the new blade models. However, there are numerous other reasons to go with HP.
Yeah, compare it against Dell's previous generation of blades to get big numbers. I bet HP doesn't do so well against Dell's current generation of blades (the M600).
When I see actual blade installs, I always have to laugh, because it's usually some easily impressed PHB buying a penis substitute, which winds up with one chassis alone in a rack because the machine room can't handle the power density.
Blades: just say no to boutique packaging of commodity parts.
Just like when they claimed you could fit 42 x 1U servers in a 42U rack...
If you've ever tried to cable up one of those babies, you will soon realise that:
A) the cabling doesn't fit;
B) the BTU output is way too high and would cause all of the servers to overheat;
C) if you're using a UPS, there's no way you can deliver enough power to that many servers in a single rack;
D) a rack loaded that heavily is nigh on impossible to move and will put holes in most computer room floors.
Sure, it looks nice, but ask them to show you a free-standing, fully loaded rack that is turned ON.
hahahaha
While there will be a few more fires from wiring like this, you will be able to show off your new server to your friends.
PS: smaller servers = bad, because someone can easily put on long, loose clothes, steal a server and walk out with it, then use it to host thousands of lolcat pictures, which will then be sent to your company.
One correction needs to be made to Tom's article: the server has four cores per socket, or up to eight cores per server. That's intense.
Another shot at Dell while I'm at it: Dell has two blade server models; HP has nine. That alone is killer, but then again, HP has been at it for two years longer than Dell.
In a raised-floor datacenter of yesteryear, that's true. If you have side CRACs or a water exchanger, the heat's not a problem. And if you actually use Velcro cable ties, or something smart like FC, 10GbE uplinks or InfiniBand, the cabling isn't a problem either. Then, if you use a scalable UPS that can push between 36 and 60 kW into a rack, you can fill a short aisle with these blades. You just have to realize that the customers for this solution have those capabilities. Those who are not willing to update their facilities but think they can use increased computing density are not being realistic with themselves, because even modern 1U rack servers will likely pull more power and produce more BTUs than they can handle.
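A back-of-envelope power budget makes the commenter's point concrete. The per-enclosure draw below is a hypothetical placeholder, not a vendor spec; the four-enclosure count follows from the article's 128 half-height servers per 42U rack, and the UPS figure is the low end of the 36-60 kW range cited above.

```python
# Rough rack power budget. watts_per_enclosure is a hypothetical figure.
enclosures_per_rack = 4        # 4 x 10U c-Class enclosures in a 42U rack
watts_per_enclosure = 6000     # hypothetical fully loaded draw per enclosure

rack_kw = enclosures_per_rack * watts_per_enclosure / 1000
ups_capacity_kw = 36           # low end of the scalable-UPS range cited above

print(rack_kw)                     # 24.0
print(rack_kw <= ups_capacity_kw)  # True: fits, with headroom to spare
```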
On another topic, since they are talking about power efficiency: when are they going to combine power supplies and battery backup units? UPSes have to convert power to DC to store it in the battery, then back to AC to feed your computer, which then converts it back down to DC. HP/Dell should have hardware that not only provides battery backup but supplies DC power directly to the servers, so you don't need all that converting.
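The conversion chain the comment describes can be sketched with round, illustrative efficiency figures; none of these are measured values, just plausible per-stage numbers to show why skipping a stage helps.

```python
# Each AC<->DC conversion stage loses a few percent. All figures illustrative.
ups_rectifier = 0.95   # AC -> DC into the battery
ups_inverter = 0.95    # DC -> AC back out of the UPS
server_psu = 0.90      # final conversion to DC inside the server

double_conversion = ups_rectifier * ups_inverter * server_psu
dc_distribution = ups_rectifier * server_psu  # skip the inverter, feed DC

print(round(double_conversion, 3))  # ~0.812 end-to-end
print(round(dc_distribution, 3))    # ~0.855 end-to-end
```

Under these assumed numbers, dropping the inverter stage recovers a few points of end-to-end efficiency, which is the comment's argument for integrated DC backup.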