I've been in the server admin role for a long time and of course there are prepackaged arrays that can do this, however I'd like to do a home project on the cheap (relatively). My goal is to build my own external storage array with the following requirements:
Hotswappable SATA II (at least 12 bays)
Redundant power supply
Supports at least a 2-node direct attached cluster, preferably active/active
I have multiple servers for the nodes, but I am unsure of all the pieces I would need for the external array. I want it to be a dumb box (JBOD) that just directly connects to each of the nodes over eSATA (InfiniBand was brought up, but I'm a little fuzzy on how that works and was told it was going the way of the dodo). I didn't want a full motherboard and processor in it. I was iffy on whether the RAID card should sit in the external array or in each node. If in the external array, I would need a special one that runs on its own (not sure if these even exist). If the RAID card(s) go in the nodes (I think this is probably right), then it may go something like this.
I was looking at the RM23212 from Chenbro as the enclosure. That takes care of the hot-swap bays, and I can get it with a redundant power supply, but that's where I start to get fuzzy.
From my research, it seems like I would need a SATA multiplexer for each drive so that it could talk to multiple hosts, like these:
I guess my question is: is it even possible for me to do this without developing my own firmware? Which way is it, RAID card in the external array or in each node, or can you do either? What are the pros/cons of each approach? Am I missing any components or overlooking problems? If I'm on the right track, what are some well-supported components I may not have found? Any other advice, besides "just buy a prebuilt OEM"?
I was looking at perhaps using OpenFiler (I've read it can do clustering) on the nodes. Remember, I want to go SATA II for the direct attached storage (JBOD) to the nodes, and connectivity from the nodes will be iSCSI/NFS. Thanks for any and all constructive input.
Sorry... it's SBB: the Storage Bridge Bay standard: http://www.sbbwg.org/home
This allows any vendor to create a card that complies with the SBB specification, and that card can then be used in any backplane that complies with the specification. This increases standardization and opens up competition, which should lower costs.
ATA over Ethernet (AoE) may be a potential solution for this use case. It uses raw Ethernet frames instead of TCP, so you don't need a specialized network card to do the heavy lifting as you would in a comparable iSCSI setup. You would then just need to format the exposed storage with a clustered file system. I've used AoE for single-host use, but there are several Google hits for "ata over ethernet cluster".
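To give a rough idea of how simple the AoE path is, here's a minimal sketch using the common Linux tooling (vblade on the box exporting the disk, aoetools on the nodes). The interface name (eth0), disk device (/dev/sdb), and shelf/slot numbers are just example values for your setup:

```shell
# --- On the storage box (exporter) ---
# Export /dev/sdb as AoE shelf 0, slot 1 on eth0.
# vblade runs in the foreground; vbladed is the daemonized version.
vbladed 0 1 eth0 /dev/sdb

# --- On each cluster node (initiator) ---
# Load the AoE driver and scan the wire for exported targets.
modprobe aoe
aoe-discover

# The exported disk shows up as a regular block device:
ls /dev/etherd/
# e.g. e0.1  (shelf 0, slot 1)

# From here you'd put a clustered filesystem (OCFS2, GFS2, ...)
# on /dev/etherd/e0.1 so both nodes can mount it safely.
```

Note that because AoE rides on raw Ethernet frames, it isn't routable; both nodes and the storage box need to share the same layer-2 segment, which fits the direct-attached topology you described anyway.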