Custom External Storage Array with Cluster Support

I've been in the server admin role for a long time and of course there are prepackaged arrays that can do this, however I'd like to do a home project on the cheap (relatively). My goal is to build my own external storage array with the following requirements:

Hotswappable SATA II (at least 12 bays)
Redundant power supply
Supports at least a 2-node direct-attached cluster, preferably active/active

I have multiple servers for the nodes, but I am unsure of all the pieces I would need for the external array. I want it to be a dumb box (JBOD) that just connects directly to each of the nodes over eSATA (InfiniBand was brought up, but I'm a little fuzzy on how that works and was told it was going the way of the dodo). I didn't want a full motherboard and processor in it. I was iffy on whether the RAID card would sit in the external array or in each node. If it's in the external array, I would need a special one that runs on its own (not sure if these even exist). If the RAID card(s) go in the nodes (I think this is probably right), then it may go something like this.

I was looking at the RM23212 from Chenbro as the enclosure. That takes care of the hot-swap bays, and I can get a redundant power supply with it, but that's where I start to get fuzzy.

From my research it seems like I would need a SATA multiplexer on each drive so that it can talk to multiple hosts, like these:

Then I would need SATA port multipliers to combine that many SATA connections into "trunked" eSATA connections to each node, like these:

I guess my question is: is it even possible for me to do this without developing my own firmware? Which way is it, RAID card in the external array or in each node, or can you do either? What are the pros and cons of each? Am I missing any components, or am I overlooking problems? If I'm on the right track, what are some good, compatible components that I may not have found? Any advice other than "just buy a prebuilt OEM"?

I was looking at perhaps using OpenFiler (I've read it can do clustering) on the nodes. Remember, I want to go SATA II for the direct-attached storage (JBOD) to the nodes, and connectivity from the nodes will be iSCSI/NFS. Thanks for any and all constructive input.
  1. SAS would be a lot easier, as each SAS drive natively has two ports. The JBOD would need two SAS expanders. Then you need two RAID controllers, one in each server.

    Another option is to buy an SSB-compliant drive box and two SBB RAID controllers. Then both servers could connect to the drive box.
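    If you go the dual-ported SAS route, each node will see the same disk once per path, and Linux dm-multipath can merge the paths into a single active/active device. A minimal sketch of `/etc/multipath.conf`, assuming the HBA works with the defaults; the WWID below is a placeholder you would replace with your disk's real ID:

    ```
    defaults {
        user_friendly_names yes
    }
    multipaths {
        multipath {
            # placeholder WWID; query the real one with scsi_id on the device
            wwid  36000000000000000000000000000cafe
            alias jbod_disk0
            # multibus spreads I/O across both SAS paths (active/active)
            path_grouping_policy multibus
        }
    }
    ```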
  2. No doubt SAS would be easier, however from my searches it is much more expensive (though I'll take links if I am wrong :) ). I may be showing my ignorance here but what do you mean by SSB?
  3. Sorry... It is SBB: Storage Bridge Bay standards:
    This allows any vendor to create a card that complies with the SBB specification, and it can be used in any backplane that also complies. This increased standardization opens up competition, which should lower costs.
  4. Do you know of any SBB compliant components? I've googled some but I only find information talking about the standards and the group in general.
  5. ATA over Ethernet (AoE) may be a potential solution for this use case. It uses raw Ethernet frames instead of TCP, so you don't need a specialized network card to do the heavy lifting as you would in a comparable iSCSI setup. You would then just format the exposed storage with a clustered file system. I've used AoE for single-point use, but there are several Google hits on "ata over ethernet cluster".
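    The AoE export/discover flow above might look like this on Linux (a sketch, assuming the vblade and aoetools packages; the interface and device names are examples):

    ```
    # On the storage box: export /dev/sdb as AoE shelf 0, slot 0 on eth0
    vbladed 0 0 eth0 /dev/sdb

    # On each cluster node: load the AoE initiator and discover targets
    modprobe aoe
    aoe-discover
    ls /dev/etherd/   # the exported disk shows up as e0.0

    # Then put a clustered file system on it, e.g. OCFS2 with two node slots
    mkfs.ocfs2 -N 2 /dev/etherd/e0.0
    ```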
    Good luck,
  6. Ben - I would like to build one and am starting fresh; storage drives have decreased in price. Can you share any updates and/or links? Thanks.