High-Powered Web Servers from HP and IBM

Redundant and RAID 6

The drives are connected by dual SAS controllers with RAID 6 data protection. With RAID 5, you can recover your data by rebuilding the RAID array after one drive fails. But if you lose a second drive while you are rebuilding the array – or if you accidentally remove the wrong drive before you start the rebuild – you could lose data. RAID 6 adds a second, independent parity block to every stripe of the array, giving you a second level of protection. One drive in the RAID set may fail, but it’s far less likely that two drives will; and when a drive fails, the data is still protected even while you’re rebuilding the array.
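To put rough numbers on that trade-off, here is a minimal Python sketch (illustrative only; it is not an HP sizing tool, and the 12-drive, 1TB example configuration is an assumption, not the ExDS9100's actual layout) showing usable capacity and survivable drive failures for RAID 5 versus RAID 6:

```python
# Illustrative RAID capacity / fault-tolerance arithmetic (not an HP sizing tool).
def raid_summary(level: int, drives: int, drive_tb: float) -> dict:
    """Return usable capacity and survivable failures for a RAID 5 or RAID 6 set."""
    parity = {5: 1, 6: 2}[level]           # RAID 5: one parity block per stripe; RAID 6: two
    return {
        "raid": level,
        "usable_tb": (drives - parity) * drive_tb,
        "survivable_failures": parity,      # drives that can fail without data loss
    }

if __name__ == "__main__":
    for level in (5, 6):
        # Assumed example: a 12-drive set of 1 TB disks
        print(raid_summary(level, drives=12, drive_tb=1.0))
```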

This adds up to a fully dual redundant switched fabric between the blades and the storage, although Callahan won’t yet go into details about the technology. “There are dual paths to each drive, dual controllers fronting each drive, dual paths from the controller to the blades, dual switches in the blade chassis and dual connections from the switches to the blade,” Callahan said.

The SAS switch in the ExDS9100 means you don’t need external switches, interconnects or extra cabling within the data center. Avoiding Fibre Channel helps keep the overall cost down, and HP says it uses its buying power to keep the disk prices low. Current enterprise storage costs are around $15 per gigabyte, while HP promises the ExDS9100 will be “dramatically cheaper” than the other storage it sells, costing less than $2/GB. That still adds up to about $500,000 for the standard 246TB or $1.64 million for 820TB.
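As a sanity check on those figures, the arithmetic is simple enough to script; the sketch below just multiplies the quoted capacities by the quoted $2/GB price (treating 1TB as 1,000GB):

```python
# Rough check of the quoted $/GB arithmetic (1 TB treated as 1,000 GB here).
PRICE_PER_GB = 2.00                 # HP's "less than $2/GB" claim, taken at face value

for capacity_tb in (246, 820):      # standard and expanded ExDS9100 capacities
    cost = capacity_tb * 1000 * PRICE_PER_GB
    print(f"{capacity_tb} TB at ${PRICE_PER_GB:.2f}/GB is about ${cost:,.0f}")
# Prints roughly $492,000 and $1,640,000, matching the ~$500,000 and $1.64 million figures above.
```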

Callahan predicts costs will continue to drop, reaching 11 cents per GB for spinning disks by 2011 and under a dollar per GB for solid state drive media, while adding that HP is working on “lots of interesting things I can’t talk about now.”

But when you’re pricing a petabyte system, it’s not just the purchase price that matters; it’s how much it will cost to keep it running. Multiple petabytes of storage mean many thousands of spinning disks, some of which will fail. Distributed software and RAID protect data in case of disk failure, but you’ll still have to replace failed parts. Designing a system that makes it quicker and simpler to replace failed disks thus saves costs, because you don’t need as large a staff to run the system.

When drives or the fans in the disk enclosures fail, the PolyServe software tells you which one has failed and where – and gives you the part number for ordering a replacement. Add a new blade or replace one that’s failed and you don’t need to install software manually. When the system detects the new blade, it configures it automatically: imaging it with the Linux OS, the PolyServe storage software and any apps you have chosen to run on the ExDS; booting the new blade; and adding it to the cluster. Automatically scaling the system down when you don’t need peak performance, or marking data that doesn’t need to be accessed as often, also keeps costs down.
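As a rough illustration of the workflow just described (this is not PolyServe code; every class and method name here is invented for the example), an auto-provisioning step might look something like this:

```python
# Hypothetical sketch of blade auto-provisioning. None of these names come from
# PolyServe; they simply mirror the steps described above: image, boot, join the cluster.
from dataclasses import dataclass, field

@dataclass
class Blade:
    slot: int
    configured: bool = False
    software: list = field(default_factory=list)

@dataclass
class Cluster:
    blades: list = field(default_factory=list)
    members: list = field(default_factory=list)

    def detect_unconfigured(self):
        # Find blades that were added or swapped in but not yet set up
        return [b for b in self.blades if not b.configured]

    def provision(self, blade, os_image, storage_sw, apps):
        blade.software = [os_image, storage_sw, *apps]   # image OS, storage software, apps
        blade.configured = True                          # "boot" the freshly imaged blade
        self.members.append(blade)                       # new blade joins the cluster
        print(f"slot {blade.slot}: imaged with {blade.software}, joined cluster")

if __name__ == "__main__":
    cluster = Cluster(blades=[Blade(slot=1), Blade(slot=2)])
    for blade in cluster.detect_unconfigured():
        cluster.provision(blade, "linux-os", "polyserve", apps=["web-app"])
```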

HP packs drives together more densely than most storage arrays, manages them with PolyServe to cut the costs of running the array, and uses its buying power to push down the cost of the individual drives as well. Power usage and cooling needs are well within the range of what modern data centers can deliver, says Callahan, but he admits the draw is high, joking that “it also makes a great space heater.”

One option for reducing power and cooling costs is MAID – a massive array of idle disks that powers down most of the drives most of the time. The first generation of ExDS does not use MAID, but Callahan says HP is looking at the options. “Obviously it would be a nice idea, if you have a lot of drives in an environment, not to have to spin all of them all the time,” Callahan said. “In the first release, though, we do spin all of them all the time.”
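For readers unfamiliar with the approach, here is a toy sketch of a MAID-style spin-down policy (purely illustrative; the threshold and data structures are assumptions, and as noted above the first ExDS release keeps every drive spinning):

```python
# Toy MAID-style policy: flag drives idle longer than a threshold as spin-down candidates.
# Illustrative only -- the first ExDS release keeps all drives spinning.
import time

IDLE_SPINDOWN_SECONDS = 15 * 60   # assumed threshold; real systems tune this carefully

def update_drive_states(drives, now=None):
    """drives: dict of drive_id -> last_access_timestamp. Returns (spinning, spun_down)."""
    now = now or time.time()
    spinning, spun_down = set(), set()
    for drive_id, last_access in drives.items():
        if now - last_access > IDLE_SPINDOWN_SECONDS:
            spun_down.add(drive_id)      # candidate to power down and save energy
        else:
            spinning.add(drive_id)       # recently used; keep spinning for low latency
    return spinning, spun_down

if __name__ == "__main__":
    now = time.time()
    drives = {"d1": now - 5, "d2": now - 3600}     # d2 has been idle for an hour
    print(update_drive_states(drives, now))        # d1 spinning, d2 spun down
```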

  • pelagius
    No offense, but this article is VERY poorly edited. It's almost to the point where it's hard to read.

    "then IBM can off them." I assume you meant to say that IBM can offer them. There are a lot more mistakes as well...
  • pelagius
    “lots of interesting I things can’t talk about now."????????
  • This article is full of factual errors. The IBM servers aren't "turned sideways". They're simply shallow depth servers in a side by side configuration. They're still cooled front to back like a traditional server. The entire rack's power consumption isn't 100 watts. It's based on configuration and could easily run 25-30kw. And comparable servers don't necessarily draw more power. IBM has simply cloned what Rackable Systems has been doing for the past 8 years. Dell and HP also caught on to the single power supply, non-redundantly configured servers over the past few years. IBM certainly has a nice product but it's not revolutionary.
  • You might want to remind your readers that Salesforce.com made their switch right after Michael Dell joined their board... their IT folks think Dell's quality is horrible, but they were forced to use them.
  • It seems that Salesforce.com needs to do some research on blade systems other than Dell. HP and IBM both have very good blade solutions:

    A. IBM and HP blades take less power and cooling than 14-16 1U servers.
    B. Most data centers can only get 10 1U servers per rack right now because power is constrained.

    It's Salesforce that just blindly buys crappy gear and then justifies it by saying blades don't have the best technology, so they'll go and waste millions on Dell's servers. (Way to help out your biz.)

    If they would just say "we're too lazy to do any real research and Dell buys us lunch, so we buy hardware from them," it would be truthful, and then we would not get to blog about how incorrect his statements are.