HP Puts 1000 Cores in a Single Rack

Want to get the most processing power possible in your data center to run cloud computing and Web 2.0 apps? HP is introducing the ProLiant BL2x220c G5 server blade today, which doubles the processing density by putting two servers into each half-height blade. Using Intel Xeon 5400 quad-core processors, you can put up to 1024 cores and 2 terabytes of RAM in 128 servers in a 42U c-Class rack – that’s 12.3 teraflops in eight square feet.
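The headline arithmetic checks out. Here is a minimal sanity check, assuming four 16-bay c-Class enclosures per 42U rack, 3.0 GHz parts, and four double-precision FLOPs per core per clock (none of which the article states explicitly):

```python
# Back-of-the-envelope check on HP's density claims. Assumptions not in
# the article: four c7000 enclosures of 16 half-height bays per 42U rack,
# 3.0 GHz clocks, and 4 double-precision FLOPs per core per cycle (SSE).
blades_per_rack = 4 * 16      # assumed: four 16-bay c-Class enclosures
servers_per_blade = 2         # the BL2x220c's defining feature
sockets_per_server = 2
cores_per_socket = 4          # quad-core Xeon 5400

servers = blades_per_rack * servers_per_blade             # 128
cores = servers * sockets_per_server * cores_per_socket   # 1024

peak_tflops = cores * 3.0e9 * 4 / 1e12
print(f"{servers} servers, {cores} cores, ~{peak_tflops:.1f} peak teraflops")
# -> 128 servers, 1024 cores, ~12.3 peak teraflops
```

Under those assumptions the 1,024-core and 12.3-teraflop figures fall straight out.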

Fitting two servers into a single blade means leaving some things out, said Iain Stephen, HP’s vice president for industry-standard servers.

“The memory, the drives, the processors and the heatsinks are the things that take up the space in a server,” Stephen said. “We stripped off the things customers tell us they don’t value: we stripped off the hot-pluggable storage; we stripped off the storage redundancy; we reduced the memory footprint and we have a smaller number of DIMM sockets. When a customer looks at the 220c, they may think it’s underspecified. But on the connectivity side we’ve enhanced things; we can add InfiniBand or high-speed Ethernet. It’s a balance. If I already boot from network-attached storage, if I’m running an app where four DIMMs are sufficient, if I can compromise on local storage and the memory footprint – then I get the processor density.”

HP expects customers to use the blades’ dual Gigabit Ethernet network interfaces and an optional x8 PCI Express mezzanine socket, which supports 4x double data rate (DDR) InfiniBand, to connect to storage arrays like HP’s petabyte-scale ExDS9100 instead of using storage inside the rack. Making room for more processors creates a much more efficient system, Stephen said. “You have to balance the amount of processing per square foot and the power requirements. You can either drive towards ultimate density or the ultimate in efficiency, but with the 220c you get three times the density of a 1U rack,” Stephen said. “We use the same power supplies, the same fans and the same chassis, but we double up the density and hopefully get 60% better performance per watt.”
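As a side note on why that mezzanine pairing makes sense: an x8 first-generation PCI Express link and a 4x DDR InfiniBand link carry roughly the same payload bandwidth, assuming 8b/10b encoding on both (the article doesn’t state the PCIe generation, so treat this as a sketch):

```python
# Payload bandwidth match between the x8 PCIe mezzanine and 4x DDR
# InfiniBand. Assumes PCIe 1.x (2.5 GT/s per lane) and 8b/10b encoding
# on both links; the article does not spell either out.
ENCODING = 8 / 10  # 8b/10b: 8 data bits carried per 10 line bits

ib_gbps = 4 * 5.0 * ENCODING     # 4 lanes at 5 GT/s (DDR) -> 16 Gbit/s
pcie_gbps = 8 * 2.5 * ENCODING   # 8 lanes at 2.5 GT/s     -> 16 Gbit/s

print(f"4x DDR InfiniBand: {ib_gbps:.0f} Gbit/s payload")
print(f"x8 PCIe 1.x:       {pcie_gbps:.0f} Gbit/s payload")
```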

Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers. In HP’s own tests using the SPECjbb2005 benchmark to measure business operations per second (bops), the BL2x220c delivers 1,582.73 bops/watt, compared with 958.86 bops/watt for the PowerEdge.
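Taken at face value, those two figures actually imply a slightly larger gap than the headline 60%:

```python
# Ratio implied by the published SPECjbb2005 results (figures from the
# article); it comes out a shade above the quoted 60%.
hp = 1582.73    # BL2x220c, bops/watt
dell = 958.86   # PowerEdge 1955, bops/watt

advantage = hp / dell - 1
print(f"Performance-per-watt advantage: {advantage:.1%}")  # -> 65.1%
```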

The approach does share similarities with IBM’s iDataPlex system for Web 2.0 computing, Stephen said. “There are only so many things you can flex in the x86 architecture,” Stephen said. “You can flex the I/O, the processing, the number of sockets and cores, the memory – these are the core technologies. We’re flexing the same number of things but the way we deliver the balance is slightly different to the way IBM delivers iDataPlex.”

HP isn’t adding water cooling or other extreme measures to the new blades. Instead, it relies on the c-Class chassis’ features, such as the ability to turn off four or five of its six power supplies so the remaining supplies deliver power at 90% efficiency. The c-Class chassis also uses 10 Active Cool fans, based on the design of jet engines for radio-controlled model aircraft; they move air at up to 166 miles per hour and are more efficient than the fans in individual servers.

Initially, HP is offering only Intel Xeon 5400 quad-core or Xeon 5200 dual-core processors. HP will offer AMD’s quad-core Barcelona processors in other ProLiant servers, and Stephen said there could be a dual-server Barcelona blade if there’s demand. “Intel has had a performance advantage since May last year, so the majority of demand is for Intel processors,” Stephen said. “If Barcelona delivers – and I think it probably will – I expect customer demand to split between the two again.”
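The 90% power-supply figure comes from load consolidation: fewer supplies running closer to full load sit higher on their efficiency curve, so the enclosure parks the spares. A hypothetical sketch of that policy follows; the per-supply capacity and the N+1 redundancy choice are illustrative assumptions, not HP’s published behavior:

```python
# Illustrative sketch of PSU load consolidation: power the fewest
# supplies that can carry the load (plus a spare), so each active
# supply runs high on its efficiency curve. The 2,250 W capacity and
# N+1 policy are assumptions; only the consolidation idea is HP's.
import math

PSU_CAPACITY_W = 2250
TOTAL_PSUS = 6

def active_psus(load_w: float, spares: int = 1) -> int:
    """Fewest supplies needed for the load, plus redundant spares."""
    needed = math.ceil(load_w / PSU_CAPACITY_W)
    return min(needed + spares, TOTAL_PSUS)

for load in (1500, 4000, 8000):
    n = active_psus(load)
    print(f"{load:>5} W -> {n} active PSUs at "
          f"{load / (n * PSU_CAPACITY_W):.0%} load each")
```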

Although most customers for the BL2x220c blades will be businesses running cloud computing and Web 2.0 applications or high-performance computing systems, where fitting in thousands of servers at a time is critical, with prices starting at $6,349, Stephen predicted the blades will appeal to some smaller companies as well. “There will be small business customers who look at this and say ‘we’re already using a small storage network, and these are ideal to use as file and print servers, Web servers and application servers,’” Stephen said. But one of the first systems will go to special effects company WETA Digital for use on films like James Cameron’s “Avatar,” “Neon Genesis Evangelion” and the “Halo” adaptation.


Two servers in each blade, four cores in each server: the HP ProLiant BL2x220c G5 fits in twice as much processing for Web 2.0.

Comments
  • Shadow703793, June 11, 2008 8:45 PM
    Not bad. Good idea.
  • anonymous x, June 12, 2008 3:19 AM
    that's nice, but can it run Crysis?
  • crockdaddy, June 12, 2008 3:06 PM
    Is this what Peter Jackson needed to get Vista to run as fast as XP? :p 
  • Anonymous, June 12, 2008 5:31 PM
    "Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers"

    Right, compare a blade to a classic 1U or 2U server. Don't compare it to Dell's blade servers, because they'd smoke the HPs eight ways to Sunday.
  • Anonymous, June 12, 2008 7:15 PM
    Due to the efficiency of the power supplies and the solid-state disk drives, I would still think it would beat the Dell blade offerings in terms of performance per watt. But this is really a density play, not necessarily an efficiency play, though it does beat just about any rack mount in that too. With Dell's engineering not pushing the envelope in blade hardware, it may be a while before they can answer this. Just $6,400 for two quad-core Xeon servers? That is also a price point other vendors (IBM, Sun) are going to be loath to compete with. If I was Google or Yahoo, I'd buy rooms full of these and retire all of that desktop hardware they are using for servers.
  • razor512, June 12, 2008 7:33 PM
    But will it blend?


    Seems good, but very expensive.
  • koreberg, June 13, 2008 3:12 PM
    @Thranx

    The 1955 is not a rack-mount server; that would be the 1950. Ten 1955s fit in a 7U chassis. It's not Dell's latest product, but it is in fact a blade.

    It would be a more equal comparison if they had chosen the Dell M600 or M605, which is the new blade system. However, there are numerous other reasons to go with HP.
  • recones90, June 13, 2008 4:41 PM
    "Specifically, HP claims a 60% performance-per-watt advantage over a cluster of Dell PowerEdge 1955 servers"

    Yeah, compare it against Dell's previous generation of blades to get big numbers. I bet that HP doesn't do so well against Dell's current generation of blades (M600).
  • aznguy0028, June 13, 2008 9:49 PM
    Why does every technology page/thread always have a Crysis joke in it? It's been said and repeated so many times it's annoying.
  • markhahn, June 14, 2008 1:22 AM
    why do vendors get off on this kind of engineering masturbation? people who are in the market for significant compute farms are simply not interested in paying more for this kind of absurd density. density, after all, does not improve price/performance, or power efficiency, or manageability, or peak performance. it's just a number to brag about, and it's not all that impressive anyway (commodity parts can easily put 4 sockets in 1U, and thus 672 cores per rack. such systems are cheap, commoditized without vendor lock-in, and yes, have more DIMMs per socket and 90%-efficient PSUs.)

    when I see actual blade installs, I always have to laugh, because they're usually some easily impressed PHB buying a penis substitute, which winds up with one chassis alone in a rack because the machine room can't handle the power density.

    blades: just say no to boutique packaging of commodity parts.
  • Anonymous, June 15, 2008 8:00 PM
    It's all marketing hype...

    just like when they claimed you could fit 42 x 1U servers in a 42U rack...

    if you've ever tried to cable up one of those babies you will soon realise that

    A) the cabling doesn't fit.
    B) the BTU output is way too high and would cause all of the servers to overheat.
    C) if you're using a UPS there's no way you can deliver enough power to that many servers in a single rack.
    D) the weight of a rack loaded that much is near on impossible to move and will put holes in most computer room floors.

    Sure, it looks nice, but ask them to show you a free-standing, fully loaded rack that is turned ON.

    hahahaha


  • pogsnet, June 15, 2008 11:37 PM
    Compare that to Roadrunner, how about that?
  • pogsnet, June 16, 2008 12:56 AM
    Compare it to Roadrunner
  • razor512, June 16, 2008 3:59 PM
    If needed, the floors can be reinforced, and you can use 32-gauge wire if there is not enough space to fit the standard wires.

    While there will be a few more fires from using wires like this, you will be able to show off your new server to your friends.

    PS: smaller servers = bad, because someone can easily put on long, loose clothes, steal a server and walk out with it, then use that server to host thousands of lolcat pictures, which will then be sent to your company.
  • Anonymous, June 16, 2008 6:52 PM
    It's sad that HP can only compare to 1U rack servers, because Dell isn't willing to use the standard power measurement benchmark on their blade servers. So HP played Dell's own game, measuring performance per watt in a different way than using a standard power benchmark, and came out with this: ftp://ftp.compaq.com/pub/products/servers/benchmarks/hp_proliant_bl260_specjbb2005_032808a.pdf

    One correction needs to be made to Tom's article: the server has four cores per socket, or up to 8 cores per cut-through server. That's intense.

    Another shot at Dell while I'm on it: Dell has two blade server models; HP has nine. That alone is killer, but then again, HP has been at it for two years longer than Dell.
  • Anonymous, June 16, 2008 6:58 PM
    Quoting alphi: "It's all marketing hype... ask them to show you a free-standing, fully loaded rack that is turned ON."


    In a raised-floor datacenter of yesteryear, that's true. If you have side CRACs or a water exchanger, the heat's not a problem. Also, if you actually use Velcro cable ties or something smart like FC, 10GbE uplinks, or InfiniBand, the cabling isn't a problem either. Then, if you use a scalable UPS that can push 36-60 kW into a rack, you can fill a short aisle with these blades. You just have to realize that the customers for this solution have those capabilities. Those who are not willing to update their facilities but think they can use increased computing density are not being realistic with themselves, because even modern 1U rack servers will likely pull more power and produce more BTUs than they can handle.
  • jjt3hii, June 20, 2008 12:23 AM
    HP is lush. IBM does it with petaflops. WTF is Dell?
  • NuclearShadow, June 21, 2008 7:10 AM
    I don't care if it's the best or not; I wouldn't mind owning one. A shame HP isn't having a contest to win one of these babies.
  • Anonymous, June 25, 2008 10:14 PM
    I'm thinking that virtualization is the way to go: have fewer, more powerful servers and use VMware to host your multitude of low-demand servers. This eliminates the cabling/power problem and is oodles cheaper.

    On another topic, since they are talking about power efficiency: when are they going to combine power supplies and battery backup units? UPSes have to convert power to DC to store it in the battery, and then back to AC to feed to your computer, which then converts it back down to DC. HP/Dell should have hardware that not only has battery backup, but supplies power directly to the servers so you don't need all that converting.
  • jjt3hii, June 29, 2008 8:54 PM
    Every major hardware vendor has made DC-powered servers, storage, and switches for many years. DC blade servers as well.