What It Takes to Build Web 2.0
If you’re an IT administrator building a server farm for a bank’s ATM network, you make it fault-tolerant and redundant, duplicating everything from power supplies to network cards. If you’re running a Web 2.0 service, you buy the cheapest motherboards you can get, and when something fails you throw it away and plug in a new one. It’s not that a website can afford to be offline any more than an ATM network can; it’s that the software running sites like Google is distributed across so many machines in the data center that losing one or two makes no difference. As more companies and services adopt distributed applications, HP and IBM are betting there’s a better approach than a do-it-yourself setup of commodity servers.
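The design-for-failure principle is easy to sketch. The snippet below is a minimal illustration of the idea, not code that Google or any company in this article actually runs: a client keeps a pool of interchangeable replicas and, if one box fails to answer, simply moves on to the next, so losing a machine never takes the service down. The hostnames and the fetch_page helper are hypothetical.

    import random
    import urllib.request
    from urllib.error import URLError

    # Hypothetical pool of interchangeable replicas; any one can serve the request.
    REPLICAS = ["node01.example.net", "node02.example.net", "node03.example.net"]

    def fetch_page(path, replicas=REPLICAS, timeout=2.0):
        """Try replicas in random order; a dead machine is skipped, not repaired."""
        for host in random.sample(replicas, len(replicas)):
            try:
                with urllib.request.urlopen("http://" + host + path, timeout=timeout) as resp:
                    return resp.read()
            except (URLError, OSError):
                continue  # this box has failed: write it off and try the next one
        raise RuntimeError("all replicas are unavailable")

The point is that recovery happens in software, at request time, rather than in duplicated hardware.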
Commodity Computing – Designed to Fail
In the early days, Google literally built its servers by hand, making cabinets out of plywood and mounting the Pentium II motherboards on sheets of cork. Nowadays, Google still buys commodity x86 servers because they’re cheap, although it equips them with custom power supplies that are 90% efficient. Google has built 10 data centers around the world in the last 18 months, at a cost of half a billion dollars each, and according to analyst firm WinterGreen Research it has assembled 45% of all the Web 2.0 servers ever built to fill them.
Plenty of startups have gone down the same route on a smaller scale, because as utility prices have risen, hosting providers have shifted from charging for the rack space you use to charging for the power you consume. Blade servers are far more space-efficient, but they also have a much higher power density and need more cooling than rack-mounted 1U and 2U servers.
Steve Fisher, senior vice president at Salesforce.com, believes the company’s racks would be half empty if it used blades to run the service, because the power demand would be so high. “My feeling is that blades are usually a generation of technology behind, as well,” Fisher said. “I don’t think the latest and greatest stuff is going into blades.”
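Fisher’s half-empty-rack point is simple arithmetic. The figures in the sketch below are round-number assumptions chosen purely for illustration, not numbers from Salesforce.com or any vendor quoted here; they compare the power draw of a full rack of 1U servers with a full rack of blades.

    # Illustrative power-density comparison; all wattages and densities are assumed.
    RACK_UNITS = 42

    servers_1u = RACK_UNITS                  # one 1U server per rack unit
    watts_per_1u = 350                       # assumed draw per 1U server

    chassis_per_rack = RACK_UNITS // 7       # assumed 7U blade chassis
    blades_per_chassis = 14                  # assumed blades per chassis
    watts_per_blade = 400                    # assumed draw per blade

    rack_1u_kw = servers_1u * watts_per_1u / 1000
    rack_blade_kw = chassis_per_rack * blades_per_chassis * watts_per_blade / 1000

    print(f"Full rack of 1U servers: ~{rack_1u_kw:.1f} kW")   # ~14.7 kW
    print(f"Full rack of blades:     ~{rack_blade_kw:.1f} kW") # ~33.6 kW

A hosting provider billing by the kilowatt cares about the second number, which is why a blade rack may have to sit half empty to stay within its power budget.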
Salesforce.com uses Sun Solaris servers because that was the only choice when the company started in 1999, but Fisher has just turned on the first Dell Linux cluster in his data center and expects to buy a lot more Dell servers. Commodity servers are popular because of their low price, but in exchange you have to spend time configuring them. Specialist suppliers such as Rackable Systems, Verari and Silicon Mechanics have been offering servers designed for Web 2.0-style distributed workloads for some time, and can supply them in the rack, pre-configured and pre-cabled if required. Second Life runs on Silicon Mechanics servers, as do LiveJournal, WikiMedia and many other Web 2.0 services.
IBM’s Web 2.0 approach involves turning servers sideways and water-cooling the rack so you can do away with air conditioning entirely. HP offers petabytes of storage at a fraction of the usual cost. Both argue that when your applications expect hardware to fail, it’s worth choosing systems that make those failures easier and cheaper to deal with.