Twin-tailed RAID?

March 31, 2012 1:28:17 AM

Hi,

So how does one create a twin-tailed RAID (two adapters connected to one set of drives)? What RAID adapters are compatible?


WyomingKnott
April 2, 2012 4:12:05 PM

??

Can you be more specific about your setup? My best guess is that you need SAS drives - these can be dual-hosted. They are expensive.

Are you going to have both RAID adapters in one box, just protecting against adapter failure? Or are you going to attach the drives to two machines, so that if one fails the other takes over the drives?

Take a look at this: http://www.serialstoragewire.net/Articles/2007_07/devel... and see if I'm talking about the same thing that you are.
April 2, 2012 4:44:53 PM

I was wondering how SAN/NAS arrays work and how to create my own. Generally they have two storage processors (two servers) that are connected to shared storage. They can do both load-balancing and high-availability at the same time.

So let's say I have two servers, each with SATA/SAS (RAID?) adapters, and a bunch of SATA/SAS drives... what would be the proper adapters, and would I need SAS switches/expanders?

I do recall IBM and others were able to accomplish this a long time ago, and it was called twin-tailed SCSI. I think I saw some RAID adapters in the past that were capable of addressing shared storage too.

Here are some references:

http://www.intel.com/content/www/us/en/modular-server/m...

http://www.lsi.com/downloads/Public/Direct%20Assets/LSI...

What I'm trying to do is build two NAS servers connected to a shared set of drives.
April 2, 2012 4:46:45 PM

PS. I see what you mean about SAS being dual-port and SATA not. Sounds like I would need an interposer if using SATA.
WyomingKnott
April 2, 2012 7:41:45 PM

If you look at those, they are not implemented in hardware. They have a single device that manages the storage and software that allows multiple systems to see that storage as their own volumes.

And you have to pay to unlock the feature. You are talking some serious money here. Generally, having storage live on two systems _at the same time_ requires some sort of layer to virtualize the access and a separate storage accessory.

Everyone else: is anyone out there using something like this and willing to share your experience?
April 3, 2012 1:11:03 AM

WyomingKnott said:
If you look at those, they are not implemented in hardware. They have a single device that manages the storage and software that allows multiple systems to see that storage as their own volumes.

And you have to pay to unlock the feature. You are talking some serious money here. Generally, having storage live on two systems _at the same time_ requires some sort of layer to virtualize the access and a separate storage accessory.

Everyone else: is anyone out there using something like this and willing to share your experience?

Where does it say it's software? Maybe for the Intel one, but look at LSI:


Quote:
With generation over generation of direct attach storage (DAS) interconnect, the MegaRAID controllers offer the highest level of performance and RAID data protection available. These feature-rich products use x4 SAS connectors to attach to backplanes and enclosures (see Figure 1). With the shift to the mini-SAS connector going forward, the high-availability environment will benefit from a denser interconnect, pairing MegaRAID controllers on different servers together with external storage.


LSI sells keys that unlock extra functionality of the card but it's all hardware. I think I'll contact LSI to clarify this.
WyomingKnott
April 3, 2012 12:59:46 PM

It seems that we read the documents differently. What I see in the LSI flyer is two scenarios. The first has a "Common Node Storage" device which is a separate device to which the machines attach with a special adapter card. In this case, the Common Node Storage device is the storage server; the machines that attach to it are not acting as storage servers.

In the cluster-in-a-box scenario, the key feature that grabbed my eye is the arrow labeled "Failover." Based on what I have seen before (which may be misleading me), I interpreted that to mean that one system controls the disks and, if it goes down, the other takes them over. Actually, with both up they would split the disks, but that's a detail.

By all means ask LSI, and then let me know what they say. Inquiring minds want to know.
April 3, 2012 2:03:17 PM

WyomingKnott said:
It seems that we read the documents differently. What I see in the LSI flyer is two scenarios. The first has a "Common Node Storage" device which is a separate device to which the machines attach with a special adapter card. In this case, the Common Node Storage device is the storage server; the machines that attach to it are not acting as storage servers.

In the cluster-in-a-box scenario, the key feature that grabbed my eye is the arrow labeled "Failover." Based on what I have seen before (which may be misleading me), I interpreted that to mean that one system controls the disks and, if it goes down, the other takes them over. Actually, with both up they would split the disks, but that's a detail.

By all means ask LSI, and then let me know what they say. Inquiring minds want to know.


I asked LSI and here is what I got:

Quote:
As noted in the technical brief, "LSI is working with Microsoft to develop a low-cost high-availability solution for the SMB and Cloud datacenters." As of today, no solution, for a 'cluster in the box', is available to the public. I would suspect that any such DAS device would be available along side the availability of a Windows Server 8 release candidate or public release. The device listed in the technical brief is the LSI MegaRAID 9265, dual core RAID on Chip (RoC) controller. For more information about availability or pricing, please contact LSI Sales.


Seems like it's a work in progress... I wonder if it would be possible to implement high-availability (HA) software RAID in Linux. It would most likely require some kind of lock manager, which could be implemented over the network. Assuming both servers could see the same storage, I think it would be possible.
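For what it's worth, the network lock manager idea could be sketched roughly like this: a minimal Python toy (nothing like a real distributed lock manager such as the ones GFS2/OCFS2 use), where a small TCP server grants an exclusive write lock to one node at a time. The class names and the one-word wire protocol here are made up purely for illustration.

```python
import socket
import threading

class LockServer:
    """Toy lock manager: grants an exclusive write lock to at most one
    node at a time.  Hypothetical protocol: a client sends
    "ACQUIRE <node>" or "RELEASE <node>", the server answers
    GRANTED, DENIED, RELEASED, or ERROR."""

    def __init__(self, host="127.0.0.1", port=0):
        self.sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        self.sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        self.sock.bind((host, port))
        self.sock.listen()
        self.port = self.sock.getsockname()[1]
        self.holder = None                # node currently holding the lock
        self.guard = threading.Lock()     # protects self.holder
        threading.Thread(target=self._serve, daemon=True).start()

    def _serve(self):
        while True:
            conn, _ = self.sock.accept()
            threading.Thread(target=self._handle, args=(conn,),
                             daemon=True).start()

    def _handle(self, conn):
        with conn:
            cmd, _, node = conn.recv(1024).decode().partition(" ")
            with self.guard:
                if cmd == "ACQUIRE":
                    if self.holder in (None, node):
                        self.holder = node
                        conn.sendall(b"GRANTED")
                    else:
                        conn.sendall(b"DENIED")
                elif cmd == "RELEASE" and self.holder == node:
                    self.holder = None
                    conn.sendall(b"RELEASED")
                else:
                    conn.sendall(b"ERROR")

def request(port, cmd, node):
    """Send one lock command on behalf of a node; return the reply."""
    with socket.create_connection(("127.0.0.1", port)) as s:
        s.sendall(f"{cmd} {node}".encode())
        return s.recv(1024).decode()
```

So node A would ACQUIRE before writing to the shared array, node B would get DENIED until A releases. A real implementation would also need lease timeouts and fencing, or a crashed holder would block writes forever.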
WyomingKnott
April 3, 2012 4:10:20 PM

The second-most expensive device in my company's datacenter is the high-availability disk storage system that slices and dices hundreds of disks into large volumes, RAID arrays, and a partridge in a pear tree, and presents them to the various servers as storage space. The servers have their OS and some other local data on HDDs; the rest is served by this monster device. It's not an easy task.
April 10, 2012 1:18:25 AM

WyomingKnott said:
The second-most expensive device in my company's datacenter is the high-availability disk storage system that slices and dices hundreds of disks into large volumes, RAID arrays, and a partridge in a pear tree, and presents them to the various servers as storage space. The servers have their OS and some other local data on HDDs; the rest is served by this monster device. It's not an easy task.

I don't think this is that hard to make. The only problem is getting the two controllers to talk to each other. I was thinking of even doing software RAID/multipath and just having one of the nodes hold a write lock that could be managed via high-availability tools in Linux.
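The failover half of that idea could look something like this toy sketch: the primary refreshes a heartbeat the standby can observe (real Linux HA stacks such as Pacemaker/Heartbeat do this properly, with quorum and fencing); when the heartbeat goes stale, the standby promotes itself and takes the write role. The class name and the 3-second timeout are illustrative assumptions.

```python
import time

class FailoverMonitor:
    """Standby-side failover logic.  The primary is assumed to refresh a
    heartbeat (e.g. a timestamp written to a small shared-disk region or
    sent over the network); if it goes stale, the standby claims the
    write role.  The 3-second default timeout is an arbitrary choice."""

    def __init__(self, timeout=3.0, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock            # injectable clock for testing
        self.last_heartbeat = clock()
        self.role = "standby"

    def heartbeat(self):
        """Record that a heartbeat from the primary was just observed."""
        self.last_heartbeat = self.clock()

    def check(self):
        """Promote to primary if the heartbeat is stale; return the role."""
        if (self.role == "standby"
                and self.clock() - self.last_heartbeat > self.timeout):
            # A real HA tool would fence (STONITH) the old primary here
            # before touching the shared array, to avoid split-brain.
            self.role = "primary"
        return self.role
```

The hard part this glosses over is exactly the split-brain case: both nodes thinking they hold the write lock at once, which is why production setups insist on fencing.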