Reasonable RAID 5 init times for a 5x2TB ICH10R volume?

Anonymous
April 1, 2010 4:46:35 PM

Hello,
I have only used dedicated RAID controllers before, mostly from HighPoint. Now I have a new motherboard with an Intel ICH10R controller on it.
I created a new volume using five 2 TB SATA 2 disks (3 Gb/s). What is a reasonable expectation for the time it will take to initialize the volume?
(Running on a quad-core i7 @ 2.8 GHz)

I've been running the init for about 16 hours now, and it is only 16% done. The machine hasn't been doing anything else...
April 1, 2010 6:23:47 PM

I'm not familiar with that RAID controller, but if it's not doing the initializations in parallel then it sounds like you're getting transfer rates of a little under 30 MByte/sec. That's probably pretty close to the mark if the RAID controller is doing a readback of each block to verify it.
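
Rough arithmetic behind that estimate, as a sketch only (it assumes the controller walks the full 5 x 2 TB one drive at a time rather than all drives in parallel):

    # Throughput implied by "16% done after 16 hours", assuming a sequential pass
    # over all 5 x 2 TB rather than parallel initialization of the drives.
    total_bytes = 5 * 2e12                              # 10 TB across the whole set
    done_fraction = 0.16
    elapsed_s = 16 * 3600                               # 16 hours

    throughput_mb_s = total_bytes * done_fraction / elapsed_s / 1e6   # ~27.8 MB/s
    total_time_h = elapsed_s / done_fraction / 3600                   # ~100 hours
    print(f"{throughput_mb_s:.1f} MB/s implied, ~{total_time_h:.0f} h to finish")

At that rate the whole pass works out to roughly four days.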

You should be aware that your data isn't going to be particularly safe on that RAID array. If you lose one drive, the only way the controller can recover from the failure is to successfully read EVERY sector from EVERY one of the 4 other drives. Any hiccup with ANY of the sectors will render the array unrecoverable.

Unrecoverable error rates for most hard drives are around one per 10^14 bits read. That means there's as much as a 10% chance of getting an unrecoverable read error for each TB's worth of data you read.

When your RAID controller is trying to recover by reading 8 TB worth of data from those other four drives, that means you have as much as an 80% chance that the data will be unrecoverable. Not good odds at all, particularly considering that with 5 drives you have 5X as much chance of a drive failure to start the whole recovery process off in the first place.
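
The arithmetic behind those odds, as a rough sketch (it assumes the 1-per-10^14 spec and independent errors; the ~80% figure is the rounded linear worst case of 8 TB x ~10%/TB):

    import math

    # Odds of an unrecoverable read error (URE) spoiling a RAID 5 rebuild,
    # assuming 1 URE per 1e14 bits read and independent errors.
    ure_per_bit = 1e-14
    bits_per_tb = 8e12

    p_per_tb = 1 - math.exp(-ure_per_bit * bits_per_tb)        # ~8% per TB read
    rebuild_bits = 4 * 2 * bits_per_tb                         # all 4 surviving 2 TB drives = 8 TB

    linear_worst_case = 8 * 0.10                               # ~80%, the rounded figure above
    p_independent = 1 - math.exp(-ure_per_bit * rebuild_bits)  # ~47% with the independent-error model
    print(f"per TB: {p_per_tb:.0%}, rebuild: up to {linear_worst_case:.0%}, model: {p_independent:.0%}")

Either way, those aren't odds you'd want to bet a clean rebuild on.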

You really gotta ask yourself what the point of such a large RAID-5 array is...
Anonymous
April 1, 2010 11:24:00 PM

First of all: thanks for a well thought out reply.

I've actually read a bit about the debate on raid 5 and large disks/volumes.

While my disks still have 512-byte blocks, disk manufacturers seem to want to increase block sizes to 4K so that the error correction bits have "more to work on" and thus be more efficient. Of course, this could all be used for even larger densities, but perhaps some of it will go toward increasing the number of error correction bits... :-)

My disks are rated at one error per 10^15 bits, so an order of magnitude better than 10^14. I know, it doesn't really make the problem you describe go away, but it at least brightens the numbers somewhat.
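
Redoing the same back-of-the-envelope with the 10^15 rating (same independent-error assumption as above, just the better spec):

    import math

    # Same rebuild-failure estimate, but with drives rated at 1 URE per 1e15 bits.
    ure_per_bit = 1e-15
    rebuild_bits = 4 * 2 * 8e12                          # 4 surviving 2 TB drives = 6.4e13 bits

    p_fail = 1 - math.exp(-ure_per_bit * rebuild_bits)   # ~6% chance of at least one URE per rebuild
    print(f"rebuild failure odds at 1e-15: {p_fail:.1%}")

So the odds of a failed rebuild go from roughly a coin flip to something like 1 in 16.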

Many new controllers in the "home enthusiast affordable" range (from Adaptec and HighPoint) seem to come with a measure of fault handling capabilities these days, i.e. they can re-map bad blocks reported by RAID members instead of just refusing to rebuild at all. Naturally, this will leave a hole in the filesystem, but filesystems are usually pretty good at saving the rest of the data.

Unfortunately, I don't know if the ICH10R solution I'm trying now (some kind of hardware/software mix, I'm led to believe) has any such features. I might, for that reason and for other features like online reconfiguration, go with a separate RAID controller anyway.

And as for the point of having a large array... why not? :-)
I had a 1 TB array in 2003 and thought it was *huge*. But as they say, "nature abhors a vacuum", and that volume filled up pretty quickly.

The real reason I want a single volume is to avoid all the hassle I have on the old machine of shuffling data back and forth between several separate 1 TB disks, just because they seem to fill up all the time.

A single JBOD or RAID 0 volume would of course work, but the life expectancy of a JBOD or RAID 0 volume is even worse than that of a RAID 5.
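
As a rough illustration (the 3% annual failure rate per drive is a made-up figure, just to compare the shapes):

    # Why striping 5 drives with no redundancy is worse: any single drive failure
    # kills a JBOD/RAID 0 volume, while RAID 5 only loses data outright on a second failure.
    # The 3% annual failure rate is purely hypothetical.
    afr = 0.03
    n = 5

    p_raid0_loss = 1 - (1 - afr) ** n                                    # ~14% per year
    p_two_or_more = 1 - (1 - afr) ** n - n * afr * (1 - afr) ** (n - 1)  # ~0.9% per year
    print(f"RAID 0/JBOD loss: {p_raid0_loss:.1%}, RAID 5 double failure: {p_two_or_more:.2%}")
    # (ignoring the URE-during-rebuild problem discussed above)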

I wish I could get my hands on Sun's ZFS for Windows... oh happy thought!
I hear it may be available on FreeBSD/NetBSD, so my old machine will probably get a makeover towards that solution instead.

Finally: do you have any thoughts on whether having an extra parity disk (RAID 6) would help any? I mean, it is *another* disk that may have to be checked/read/written if there is a failure in one member...

April 1, 2010 11:37:35 PM

My point about "what's the point in having such a large RAID-5 array" was that if you have such a high likelihood of not being able to recover from a drive failure, then what's the point of trying for redundancy in the first place?

With drives spec'd at 1 error in 10^15 bits read you're a lot better off. I've started using WD Green drives specifically for that reason (I can't swear to this for all of their Green drives but the ones I've purchased are rated that way).

RAID 6 is definitely a lot safer. With RAID 6 you have to get unrecoverable errors in exactly the same sector on two different drives before you're hooped. That's very, very unlikely.
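
A rough sense of how unlikely, using the same independent-error model (assuming 512-byte sectors and the 1-per-10^14 spec):

    from math import comb

    # RAID 6 rebuild on a 5-drive set: after one drive dies, data is only lost if
    # two of the 4 remaining drives hit a URE at the same sector/stripe position.
    # Assumes 512-byte sectors, 1 URE per 1e14 bits, independent errors.
    ure_per_bit = 1e-14
    p_sector = 512 * 8 * ure_per_bit            # ~4e-11 chance of a URE in one sector
    sectors = 2e12 / 512                        # ~3.9e9 sector positions per 2 TB drive
    remaining = 4

    p_per_stripe = comb(remaining, 2) * p_sector ** 2     # two drives failing the same position
    p_rebuild_fail = sectors * p_per_stripe               # first-order approx: ~4e-11
    print(f"{p_rebuild_fail:.1e}")

Which backs up the "very, very unlikely": with double parity the URE problem essentially disappears.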
April 3, 2010 1:41:51 PM

As an answer to your first question: I would say it is reasonable to expect a little more than three days for the initialization of the drives. My six-disk 1 TB array took two days to initialize, if I remember correctly.

I have had a couple of times where Intel flagged one of my drives as bad, but I dismissed it and the array verified itself. During one of these instances, a second drive also got kicked out of the array; I dismissed it and everything was still alright. Yet another time, several drives were marked bad for some reason and my array was unrecoverable; however, I followed some recovery steps and was able to recreate the array and recover all the data.

Overall, having the array is nice, but my confidence level in it is not extremely high. I have taken some precautions and have put my "if I lose this stuff I'm going to throw up" stuff on an external drive. Best of luck to you. Be careful.