RAID 1 Rebuild Slow, Rebuild Changes Only?

mmseng

Distinguished
Apr 22, 2008
Ok, I might get the hairy eyeball for this one, but here goes...

I recently purchased and installed a new Silicon Image SATA 3.0 Gb/s / eSATA RAID controller (model 3124) and two identical Western Digital 1 TB My Book external eSATA-capable drives (model WDH1CS10000). After a bit of fuss and confusion I got everything working. Both drives are in a RAID 1 setup. Apparently this RAID disk didn't like being converted to a dynamic disk and then formatted, because the format would always fail. So I formatted it as a basic disk, and it worked fine. I'm kind of scared to try converting it to dynamic now, but I don't really need to anyway, and that's a whole 'nother unimportant story.

Anyway, just for some testing I made a quick .txt file on the disk and unplugged one of the drives (which Windows was fine with); the RAID management software from Silicon Image's site (SATARAID5 Array Manager) detected the change. I then added a second file to the disk and plugged the disconnected drive back in. The array then started rebuilding itself automatically... but it was estimated to take about 12 hours to complete the rebuild... yikes!

Now I know I should have seen this coming (and this is where I brace for the hairy eyeball), but damn, I don't want to wait 12 hours for my RAID 1 array to rebuild only to synchronize a single 1 KB text file. I've had an internal 300 GB RAID 1 on my motherboard's SATA controller for a long time now, and it always seemed to rebuild very quickly, on the order of minutes, or at least only as long as it took Windows to boot. I guess I can't be sure it hasn't just been rebuilding in the background unbeknownst to me, and I haven't actually done it that often, but it sure seemed like plugging one of the array's missing drives back in after it had been disconnected for a while, having missed all the updated and new files added to the array without it, didn't take anywhere near as much time as what I'm trying to do now.

At any rate, I was just wondering if maybe there's another way to do this, or if there's a way to force a RAID 1 array to rebuild only the changed data and save a lot of rebuild time. I realize that the array itself operates at a whole different level that probably doesn't even recognize what "files" are, and I'm guessing all the RAID controller does is synchronize every bit on the drives. But I was wondering and hoping that there might be a way to avoid synchronizing all of the unused zeroes. At the moment literally all of the 1 TB RAID 1 array is unused, and it seems awfully inefficient to rebuild comparatively minuscule changes (like the 1 KB changes in my testing) by synchronizing the entire capacity of the drive.
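To illustrate what I'm picturing: I gather some software RAID implementations (Linux's md driver, with its "write-intent bitmap") keep track of which regions were written while a mirror member was missing, and then resync only those regions when the drive comes back. Here's a toy Python sketch of that idea; it's purely illustrative, not anything the 3124 actually supports as far as I can tell.

```python
# Toy sketch of a "write-intent bitmap" for a two-drive mirror:
# while one member is offline, remember which chunks were written,
# then resync only those chunks instead of every bit on the drive.
CHUNK_SIZE = 64 * 1024  # resync granularity: one bitmap bit per 64 KB

class Mirror:
    def __init__(self, size_bytes):
        self.primary = bytearray(size_bytes)
        self.secondary = bytearray(size_bytes)
        self.degraded = False   # True while one drive is unplugged
        self.dirty = set()      # chunk indexes written while degraded

    def write(self, offset, data):
        self.primary[offset:offset + len(data)] = data
        if self.degraded:
            # Secondary is offline: just mark the touched chunks dirty.
            first = offset // CHUNK_SIZE
            last = (offset + len(data) - 1) // CHUNK_SIZE
            self.dirty.update(range(first, last + 1))
        else:
            self.secondary[offset:offset + len(data)] = data

    def resync(self):
        # Copy only the dirty chunks -- seconds instead of 12 hours.
        for chunk in sorted(self.dirty):
            start = chunk * CHUNK_SIZE
            self.secondary[start:start + CHUNK_SIZE] = \
                self.primary[start:start + CHUNK_SIZE]
        self.dirty.clear()
        self.degraded = False

mirror = Mirror(16 * 1024 * 1024)         # pretend 16 MB "array"
mirror.degraded = True                    # unplug one drive
mirror.write(0, b"a tiny new text file")  # small change while degraded
mirror.resync()                           # replug: one chunk copied, not 16 MB
```

The catch, of course, is that the bitmap has to be maintained by whatever is doing the RAID, which is exactly what a plain block-level controller like this one doesn't do.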

My guess is that I'll have to wait for another generation of RAID controllers that can actually recognize NTFS formatting and use it to be more efficient in this area, but I wanted to appeal to the knowledgeable users of Tom's Hardware for any possible inklings of a solution or efficiency booster.

Yeah, yeah, I know, just use them as separate drives and copy over the changes manually or something. I might just end up doing that, but what do you people think, eh?

Thanks for listening,
== Matt

P.S. It seems like this might start becoming a more common issue, what with the availability of such huge drives nowadays.
 

rozar

Distinguished
Jun 7, 2007
Twelve hours for a Silicon Image controller on a 1 TB RAID 1 array is not hard to believe, so in this case I would say you're getting what you paid for. If you want better performance, buy a hardware RAID controller with a processor on it; of course those cost more money. Also, all controllers will treat a drive added back into an array as though it were blank (at least when it's done the way you did it, where the two drives end up with different timestamps) and do a complete rebuild.
 

mmseng

Distinguished
Apr 22, 2008
Yeah, that's what I figured. Guess I'll stick to manual (or scripted) copying and pasting between two separate drives, then. At least until the drives fill up enough that that operation starts to take close to 12 hours :|
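In case anyone else ends up going the scripted route, here's roughly what I have in mind: a minimal Python sketch that walks the source drive and copies only files that are missing from the destination or whose size/mtime have changed. The drive letters are made up; on Windows, robocopy with /MIR does the same job natively.

```python
# Minimal one-way mirror: copy only new or changed files from SRC to DST.
# Drive letters are placeholders -- adjust for your own two drives.
import os
import shutil

SRC = "E:\\"  # hypothetical "live" drive
DST = "F:\\"  # hypothetical mirror drive

def mirror(src, dst):
    for root, dirs, files in os.walk(src):
        rel = os.path.relpath(root, src)
        target_dir = os.path.join(dst, rel)
        os.makedirs(target_dir, exist_ok=True)
        for name in files:
            s = os.path.join(root, name)
            d = os.path.join(target_dir, name)
            s_stat = os.stat(s)
            # Copy only if the file is missing or its size/mtime differ
            # (2-second mtime slack covers coarse filesystem timestamps).
            if (not os.path.exists(d)
                    or os.stat(d).st_size != s_stat.st_size
                    or abs(os.stat(d).st_mtime - s_stat.st_mtime) > 2):
                shutil.copy2(s, d)  # copy2 preserves timestamps

if __name__ == "__main__":
    mirror(SRC, DST)
```

Running that after each batch of changes only touches the handful of changed files, which is the behavior I was hoping the controller could give me at the block level.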