High-Powered Web Servers from HP and IBM

What It Takes to Build Web 2.0

If you’re an IT administrator for a bank and want to build a server farm for your ATM network, you make it fault tolerant and redundant, duplicating everything from power supplies to network cards. If you’re running a Web 2.0 service, you use the cheapest motherboards you can get, and if something fails, you throw it away and plug in a new one. It’s not that a website can afford to be offline any more than an ATM network can; it’s that the software running sites like Google is distributed across so many machines in the data center that losing one or two doesn’t make any difference. As more companies and services adopt distributed applications, HP and IBM are betting there’s a better approach than a custom setup of commodity servers.
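
To make that design concrete, here is a minimal sketch, assuming a hypothetical pool of interchangeable replica servers, of the kind of client-side logic such distributed software leans on: if one commodity box is dead, the request simply moves on to the next, so a failure costs a little capacity rather than any uptime. The host names, path and retry policy are illustrative, not any particular site’s implementation.

    import random
    import urllib.request

    # Hypothetical pool of interchangeable commodity servers holding the same data.
    # Losing one or two just shrinks the pool; the service stays up.
    REPLICAS = [
        "http://web-01.example.internal:8080",
        "http://web-02.example.internal:8080",
        "http://web-03.example.internal:8080",
    ]

    def fetch(path, replicas=REPLICAS, timeout=2.0):
        """Try replicas in random order and return the first successful response."""
        candidates = list(replicas)
        random.shuffle(candidates)        # spread load across the pool
        last_error = None
        for base in candidates:
            try:
                with urllib.request.urlopen(base + path, timeout=timeout) as resp:
                    return resp.read()    # any one healthy replica will do
            except OSError as err:        # dead box, timeout, refused connection
                last_error = err          # note the failure and try the next box
        raise RuntimeError(f"all replicas failed, last error: {last_error}")

    if __name__ == "__main__":
        print(fetch("/status"))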

Commodity Computing – Designed to Fail

In the early days, Google literally built its servers by hand, making cabinets out of plywood and mounting the Pentium II motherboards on sheets of cork. Google still buys commodity x86 servers because they’re cheap, although it now equips them with custom power supplies that are 90% efficient. Google has built 10 data centers around the world in the last 18 months, at a cost of half a billion dollars each, and according to analyst firm WinterGreen Research, the servers Google has assembled to fill them account for 45% of all the Web 2.0 servers ever built.

Plenty of startups have gone down the same route on a smaller scale, because as utility prices have risen, hosting providers have shifted from charging for the rack space you use to charging for the power you consume. Blade servers are far more space efficient, but they also have a much higher power density and require more cooling than rack-mounted 1U and 2U servers.

Steve Fisher, senior vice president at Salesforce.com, believes the company’s racks would be half empty if it used blades to run the service, because the power demand would be so high. “My feeling is that blades are usually a generation of technology behind, as well,” Fisher said. “I don’t think the latest and greatest stuff is going into blades.”
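
Fisher’s arithmetic is easy to sketch. The figures below are illustrative assumptions (a 10 kW power budget per rack, roughly 300 W for a 1U server, a fully loaded blade chassis drawing about 4.5 kW in 10U of space), not any vendor’s published specifications, but they show how power rather than space becomes the limit:

    # Back-of-the-envelope sketch of why power, not rack space, is the constraint.
    # All figures are illustrative assumptions, not vendor specifications.
    RACK_POWER_BUDGET_W = 10_000   # power a hosting provider might allot per rack
    RACK_SPACE_U = 42              # standard full-height rack

    SERVER_1U_POWER_W = 300        # assumed draw of a typical 1U server
    CHASSIS_POWER_W = 4_500        # assumed draw of a fully loaded blade chassis
    CHASSIS_SPACE_U = 10           # assumed height of one blade chassis

    # 1U servers: does space or power run out first?
    by_space = RACK_SPACE_U
    by_power = RACK_POWER_BUDGET_W // SERVER_1U_POWER_W
    print(f"1U servers per rack: {min(by_space, by_power)} "
          f"(space allows {by_space}, power allows {by_power})")

    # Blade chassis: same question.
    chassis_by_space = RACK_SPACE_U // CHASSIS_SPACE_U
    chassis_by_power = RACK_POWER_BUDGET_W // CHASSIS_POWER_W
    print(f"Blade chassis per rack: {min(chassis_by_space, chassis_by_power)} "
          f"(space allows {chassis_by_space}, power allows {chassis_by_power})")

With those assumed numbers, the rack runs out of power after two blade chassis and sits roughly half empty, which is exactly Fisher’s complaint; the same power budget accommodates 33 of the 42 possible 1U servers.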

Salesforce.com uses Sun servers running Solaris, because that was the only choice when the company started in 1999, but Fisher has just turned on the first Dell Linux cluster in his data center and expects to buy a lot more Dell servers. Commodity servers are popular because of the low price, but in exchange you have to spend time configuring them. Specialist suppliers like Rackable Systems, Verari and Silicon Mechanics have been offering servers designed for Web 2.0-style distributed workloads for some time, and they can deliver them in the rack, pre-configured and pre-cabled if required. Second Life runs on Silicon Mechanics servers, as do LiveJournal, WikiMedia and many other Web 2.0 services.

IBM’s Web 2.0 approach involves turning servers sideways and water-cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both say that when you have applications that expect hardware to fail, it’s worth choosing systems that make it easier and cheaper to deal with those failures.

Comments

  • pelagius
    No offense, but this article is VERY poorly edited. It's almost to the point where it's hard to read.

    "then IBM can off them." I assume you meant to say that IBM can offer them. There are a lot more mistakes as well...
  • pelagius
    “lots of interesting I things can’t talk about now."????????
  • This article is full of factual errors. The IBM servers aren't "turned sideways". They're simply shallow-depth servers in a side-by-side configuration. They're still cooled front to back like a traditional server. The entire rack's power consumption isn't 100 watts; it's based on configuration and could easily run 25-30 kW. And comparable servers don't necessarily draw more power. IBM has simply cloned what Rackable Systems has been doing for the past 8 years. Dell and HP also caught on to single power supply, non-redundantly configured servers over the past few years. IBM certainly has a nice product, but it's not revolutionary.
  • You might want to remind your readers that Salesforce.com made their switch right after Michael Dell joined their board... their IT folks think Dell's quality is horrible, but they were forced to use them.
  • It seems that Salesforce.com needs to do some research on blade systems other than Dell. HP and IBM both have very good blade solutions:

    A. IBM and HP blades take less power and cooling than 14-16 1U servers.
    B. Most data centers can only get 10 1U servers per rack right now because power is constrained.

    It's Salesforce that just blindly buys crappy gear and then justifies it by saying blades don't have the best technology, so they'll go and waste millions on Dell's servers (way to help out your biz).

    If they would just say "I'm too lazy to do any real research and Dell buys me lunch, so we buy hardware from them," it would be truthful, and then we would not get to blog about how incorrect his statements are.