Raid Expansion

sirv

Distinguished
Feb 16, 2008
Hello all,

I'm running out of space on my data raid, so naturally I'm looking at several ways to expand the capacity. Unfortunately my funds are rather low right now, so I can't really afford to try things and see, hence this topic. I'd like you to look over what I've researched so far, point out any problems or misconceptions, and let me know where you agree or disagree.

My current setup uses a "Promise SuperTrak EX16350" (see details below), with 4x 320GB in 0+1 as my system drive and 6x 1TB in raid 6 as my storage drive. My 640GB system drive is nearly full, my 3.6TB storage drive has a mere 300GB left unused, and I was silly enough to put the system drive on the raid card rather than the motherboard raid, so simply adding a raid 6 of 2TB drives, copying everything over and then removing one or both of the old raids is not an option.
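For anyone checking my numbers, here's a quick sketch (my own, purely illustrative) of where the 640GB and 3.6TB figures come from - 0+1 gives half the raw capacity, raid 6 gives all members minus two, and Windows reports in binary TiB:

```python
# Quick sanity check of the reported array sizes (illustrative only).
TB = 1000**4     # drive makers use decimal terabytes
TIB = 1024**4    # Windows reports binary TiB (but labels it "TB")

# 4x 320GB in RAID 0+1: usable space is half the raw total
system_usable = (4 * 320 * 1000**3) / 2
print(f"System array: {system_usable / 1000**3:.0f} GB usable")   # ~640 GB

# 6x 1TB in RAID 6: usable space is (members - 2) drives' worth
storage_usable = (6 - 2) * TB
print(f"Storage array: {storage_usable / TIB:.2f} TiB usable")    # ~3.64, shown as "3.6TB"
```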

Promise SuperTrak EX16350 (16x SATA300 RAID, 128MB PCI-e x8)
http://it.promise.com/product/card_detail_ita.asp?pid=190
Storage capacity up to 16.0 terabytes (with sixteen 500GB drives x 2 controllers/ system)
Online capacity expansion and RAID level migration to add capacity --on the fly--as needed


As I see it, there are 3 options available to me:
1) Expand the raid 6 with additional 1TB drives. I'm still confused as to how the card could reliably redistribute the striped and parity data over one or more additional drives, but Promise apparently claims it can be done "--on the fly--", so this is one option.
This would be the cheapest, but also the smallest improvement and poor long-term value.
2) Expand the raid 6 with additional 2TB drives. Similar to the first option, but with the added benefit that in the future I could use option 3 and only have to replace the 6 original 1TB drives. Since the array would effectively be using only half (or so) of each new drive's platters, I'm assuming there would be a performance hit - but at 90 revolutions per second (5,400 RPM), and with it being a storage drive, this would probably be negligible.
This would be a little less than twice as expensive as the first option, with the same small improvement, but decent long-term value potential.
3) Replace all 6 of the 1TB drives with 2TB drives. Once that's done and the raid has been fully rebuilt, it seems entirely plausible to simply go into Windows' disk management and resize the partition to the new (roughly double) capacity.
This would be quite expensive - up to 6 times option 2, and roughly 11 times option 1 - but it would approximately double the available space in one go and provide great long-term value, assuming option 2 works as well. (Rough capacity math for all three options is in the sketch below.)
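To put rough numbers on the three options (my own back-of-the-envelope sketch, assuming the controller sizes every RAID 6 member to the smallest disk in the set, and taking two added drives as the example for options 1 and 2):

```python
# Rough usable-capacity comparison of the three options. RAID 6 usable
# space = (number of members - 2) x size of the smallest member.
def raid6_usable(member_sizes_tb):
    """Usable TB of a RAID 6 set; every member only contributes as much
    as the smallest disk in the set."""
    return (len(member_sizes_tb) - 2) * min(member_sizes_tb)

current = raid6_usable([1] * 6)             # today: 4 TB
option1 = raid6_usable([1] * 6 + [1] * 2)   # add two 1TB drives: 6 TB
option2 = raid6_usable([1] * 6 + [2] * 2)   # add two 2TB drives: still 6 TB
option3 = raid6_usable([2] * 6)             # replace all six with 2TB: 8 TB

for name, tb in [("current", current), ("option 1", option1),
                 ("option 2", option2), ("option 3", option3)]:
    print(f"{name}: {tb} TB usable")
```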

Of course all this is based on the assumptions that:
- It really is that simple to just add another drive and have the raid card safely rebuild the array around it.
- It really is that simple to just replace all the drives with bigger ones and then resize the partition in disk management.
- Mixing different drives (perhaps even of vastly different sizes) isn't all that bad.

I think I'd prefer option 3 - it's a bigger investment, at least up front, but it keeps the drive count low (great failure-wise), doubles the capacity in one go (good for another year, perhaps two), and I'm not 'stuck' with even more 1TB drives. Other facts that factor into my decision:
- I'll probably be splitting my rig into a gaming PC and a storage server 'soon' (TM). To that end, the gaming PC would have a 120GB SSD and 2-4 1TB drives in mirror raid or 0+1. The storage server could then use 2 (or 4) 1TB drives for system/cache disks, thereby keeping the storage raid 6 as sequential as possible.
- The 320GB drives are getting pretty old. Not that I'm terribly worried about failure (they're in 0+1, plus I even have a spare), but these are 4 and 5 platter drives at 7200 RPM, and the 1TB drives are faster, quieter, cooler, bigger and more energy efficient.
- My case is already one 3.5" slot short (one of the 320GB drives is sitting loose - but the case is very stationary, at least). A new case may therefore be on the horizon, but I haven't bought or ordered one yet.

More info:
* HDD 320GB 7200RPM S-ATA300 Seagate 7200.10 16MB cache --- 5x (4 in use; raid 0+1). Some of these broke within warranty (was it 3 or 4? two within a week, hence raid 6 instead of raid 5), so some are 5 platter and some 4 platter. At least I think so.
* Western Digital Caviar Green WD10EACS, 1TB --- 6x (6 in use; raid 6).
* Samsung EcoGreen F3EG, 2TB --- 0x. Looking at these to buy as replacements (or additions). Cheap, green (it's for storage anyway), and I've heard good things about Samsung's F series.

Finally, "Storage capacity up to 16.0 terabytes (with sixteen 500GB drives x 2 controllers/ system)". What does this mean? Is this simply a re-mentioning that the/each raid controller card can support up to 16 drives? Or is this a statement that it can only handle 16.0 terabytes of storage (and would that be total, or per logical drive)? Clearly it's not a statement that 500GB drives are the biggest it can handle; I've already connected 6x 1TB drives. I've also read certain controllers may have problems with 2TB drives. Is this something I should look out for?

I hope my effort to be thorough hasn't created a wall of text - I've tried to be in-depth yet structured, mentioning all the applicable details and leaving out the unimportant stuff. Thanks in advance for your time and your opinions.
 

sirv

Distinguished
Feb 16, 2008
Apparently I unwittingly made this a discussion rather than a question. Anyway, after receiving no replies I made an inquiry over at Promise, who quickly answered most of my questions, so I thought I'd post the answers here.

He wasn't entirely clear, but from what I gathered you could in fact simply add more 1TB (or bigger) drives and the raid array would become bigger. However, I've read in another topic (concerning an Adaptec card, though) that rebuilding an array of this size would take weeks or months and is not recommended.

Furthermore, replacing the 1TB drives with 2TB drives would still leave the logical disk on the raid card at its original size, and to resize it you would basically need to clear the entire array (losing any data you didn't back up). In their words:
Changing the geometry of the disk will not increase the size as the size is written to disks. You can replace the 1 TB disks with 2 TB Disks but that would mean you need to recreate the ld <editor's note: logical disk>. So this means you need to backup and restore the data. Unfortunately you cannot rebuild the disk with the new capacity as the capacity will be the size of the original disk not the new one.

It would seem my only long-term option (barring another raid card, assuming 10x 2TB drives) is to remove the 4x 320GB drives (and back that data up somewhere, somehow), at which point I'll have the 6x 1TB raid array active with 10 empty slots. Using those slots, I can create a 10x 2TB raid 6, copy all the data over, and then do what I want with the 1TB drives.
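For reference, the rough math on that plan (my own sketch; it assumes the card really will take a 10x 2TB raid 6 in one go):

```python
# Back-of-the-envelope check of the migration plan on the 16-port card.
ports_total = 16
ports_in_use = 6                          # the existing 6x 1TB RAID 6 stays online
free_ports = ports_total - ports_in_use   # 10 free once the 4x 320GB are pulled

new_members = 10                          # proposed 10x 2TB RAID 6
assert new_members <= free_ports, "not enough free ports for the new array"

new_usable_tb = (new_members - 2) * 2     # RAID 6 loses two members to parity
print(f"New array: {new_usable_tb} TB usable (vs. ~4 TB today)")   # 16 TB
```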
 

sub mesa

Distinguished
I feel sorry you got no replies. I have to admit I kind of get discouraged when I see a long first post. It's good to give a lot of information, but perhaps putting a central question in bold text would help.

Either way, I can tell you that expanding RAID5s is not necessarily safe. It's a complex process, and if your drives misbehave during the expansion, very bad things can happen and you might just lose all your data.

I'm not sure if this is what you were hoping to hear, but isn't something like ZFS an option for you? It would solve the difference in disk sizes, as you can use both 1TB and 2TB disks and get the full capacity of both. In essence you would have:

6x 1TB ZFS RAID-Z (RAID5) or RAID-Z2 (RAID6)
6x 2TB ZFS RAID-Z (RAID5) or RAID-Z2 (RAID6)

combined in one pool, meaning all the free space of both arrays is available as one unit. Any filesystems you create will share that free space, and striping (RAID0) is performed between the arrays, so you could say this would be a RAID50 or RAID60.
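To put rough numbers on it (just a sketch, ignoring metadata and filesystem overhead):

```python
# Rough usable capacity of a pool striping two raidz2 vdevs together.
# Each raidz2 vdev keeps two disks' worth of parity, like RAID6.
def raidz2_usable_tb(disks, disk_tb):
    return (disks - 2) * disk_tb

vdev_1tb = raidz2_usable_tb(6, 1)   # 6x 1TB -> 4 TB
vdev_2tb = raidz2_usable_tb(6, 2)   # 6x 2TB -> 8 TB
print(f"Pool: {vdev_1tb + vdev_2tb} TB usable in one unit")   # 12 TB
```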

If you'd like more info on ZFS I can write a lot, but I'm not sure that's the way you want to go. You already have that SuperTrak controller, and this approach would mean you need a controller that serves up plain disks. Using your existing controller is certainly possible, but it wouldn't be doing RAID, only handling single disks; its main capability as a RAID XOR/IOP processor wouldn't be utilized.

The benefit of the ZFS approach is that you can use any controller, as long as it works, so you don't need all your disks on the same controller. You can use the onboard ports supplied by the motherboard plus any other controller that is supported.

This approach would mean you build a NAS rather than local storage, as I assume you would want to keep using Windows on your regular system. Anyway, let me know if you're interested.
 

sirv

Distinguished
Feb 16, 2008
From what I understand, ZFS is software-based raid (or rather, a file system with software raid support) that is currently natively available in (Open)Solaris, has been fully implemented in FreeBSD, and may be available in working order on Linux through FUSE.

The problem is, of course, that not only do I need the current 6x 1TB raid on my Promise card, I also need the future 6-10x 2TB raid on the same card, and I need to be able to transfer data between them. It's probably possible, but between finding working drivers and such - possibly having to sort out NTFS support too - it seems like a lot of work when I could just add the 2TB drives as an NTFS volume, copy everything over, and stick with an OS I know inside and out.

Now to be clear - I see many possibilities and benefits in ZFS; it would (I think) even save the data in the case of a raid controller failure - but I don't feel it's a fit for my case.

Anyway, the techie over at Promise answered most of my questions; once I know 10x 2TB will work on that card, I'll probably move ahead with that plan. Thanks for letting me know about ZFS - it was an interesting read.