Raid 1 Going Critical

January 5, 2012 5:05:03 PM

Hello All,

I have an HP e9220y as seen in the link below:

http://h10025.www1.hp.com/ewfrf/wc/document?docname=c01...


I used the integrated AMD RAID controller to set up a simple RAID 1. I am using the following 2 hard drives:

Seagate ST31000340AS, 7200 rpm, 3 Gb/s, 32 MB cache, on SATA port 1
Hitachi HDS721010CLA332, 7200 rpm, 3 Gb/s, 32 MB cache, on SATA port 3

I have successfully set up the RAID a couple of times now, and I even removed and formatted a drive so I could add it back to the array and make sure it would rebuild. Everything rebuilt fine. However, both times the RAID went critical within two weeks. The AMD RAID software can see both physical drives, but it reports the logical drive as critical. SMART tests show both drives to be fine.

I am wondering if the problem is caused by using two different brands of drives, but as you can see, the drives' specifications are the same.

Any thoughts, input, or suggestions would be helpful.


January 5, 2012 5:22:14 PM

You shouldn't be mirroring different makes/models of drives; lots of controllers run into issues with that. If both drives pass their SMART checks but fail to maintain a mirrored volume, then I'm going to bet the reason is that they're mismatched, or that one of the drives is exceeding the controller's acceptable timeout while recovering from a read/write error. A drive can usually handle its own internal read/write errors, but when recovery runs past the timeout the controller allows, the drive gets dropped from the array.
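To make the timeout mismatch concrete, here is a minimal sketch with assumed, illustrative numbers (none of them come from this thread): a controller that drops an unresponsive member after 8 seconds, a desktop drive whose deep recovery retries can run 15 seconds, and a RAID-rated drive capped at 7 seconds.

```shell
# Illustrative numbers only (assumptions, not measurements from this thread):
controller_timeout=8     # assumed controller drop threshold, in seconds
desktop_recovery=15      # assumed worst-case desktop-drive retry, in seconds
raid_rated_recovery=7    # typical ERC/TLER cap on a RAID-rated drive

for recovery in $desktop_recovery $raid_rated_recovery; do
  if [ "$recovery" -lt "$controller_timeout" ]; then
    echo "${recovery}s recovery: drive stays in the array"
  else
    echo "${recovery}s recovery: controller drops the drive"
  fi
done
```

The mirror only stays healthy when every member answers before the controller's threshold, which is exactly what the desktop drive fails to do here.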
January 5, 2012 6:45:08 PM

Different drives in an array is a really bad idea. It's like putting two similarly performing engines from different makers into the same car: both might have the same output, but they don't work the same internally. You could probably get them synced up to a point, but eventually they'd destroy each other, or something attached to them, since they just aren't made to work together.
January 5, 2012 7:00:02 PM

Different makes/models generally don't affect it that badly. If you pair a slow drive with a fast drive, the array will just run at the speed of the slower drive.

If the array keeps going critical, you have a drive that either has some dodgy sectors, or whose response/access time falls outside the window the RAID controller will accept.

And I'm going to say it's the Seagate. I have had two of that exact model (both of which pass every test I throw at them, too) that are completely useless in RAID arrays: they repeatedly take too long to access some parts of the disk and the array goes critical. I replaced them both with Hitachis and never had a problem again.
January 5, 2012 10:35:14 PM

kitsunestarwind said:
Different makes/models generally don't affect it that badly. If you pair a slow drive with a fast drive, the array will just run at the speed of the slower drive.

If the array keeps going critical, you have a drive that either has some dodgy sectors, or whose response/access time falls outside the window the RAID controller will accept.

And I'm going to say it's the Seagate. I have had two of that exact model (both of which pass every test I throw at them, too) that are completely useless in RAID arrays: they repeatedly take too long to access some parts of the disk and the array goes critical. I replaced them both with Hitachis and never had a problem again.

I'll agree with you, but I've also noticed Hitachi drives do this. When a drive attempts a sector-recovery operation, it can sometimes take 5-15 seconds to do it. Most RAID controllers will freak out when a drive fails to respond for that long and drop it from the array. In a consumer-grade system that isn't running RAID, it's not a big deal: you might notice a little hesitation if you're trying to do something, but it's generally acceptable. WD Green series drives seem to be the worst about this. That's one of the things you gain by getting RAID-rated drives (like the RE3 series from Western Digital, or any enterprise SATA/SAS/SCSI drive): they're designed for fast recovery during read errors and won't cause this issue. Consumer-grade drives don't adhere to that standard. While consumer-grade drives will generally work in RAID when they're in matched sets, there are no guarantees (especially with WD Greens, from what I've seen). Mismatching drives is just asking for problems, though, and I'm willing to bet that's a lot of your problem. You can sometimes get away with mismatched drives in JBOD, but I'd still recommend avoiding mixed drives.
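The recovery cap described above is a real, queryable drive setting: SCT Error Recovery Control (ERC, Western Digital's TLER), which smartmontools can read with `smartctl -l scterc /dev/sdX`. The sketch below parses a captured sample of that report rather than touching real hardware, so it is self-contained; the 7.0-second caps in the sample and the 8-second controller threshold are assumed, illustrative values.

```shell
# On a real system you would run:  smartctl -l scterc /dev/sdX
# Here we parse a hypothetical captured report instead, so the logic
# is self-contained and touches no hardware.
sample='SCT Error Recovery Control:
           Read:     70 (7.0 seconds)
          Write:     70 (7.0 seconds)'

# ERC values are reported in tenths of a second; grab the read timeout.
read_erc=$(printf '%s\n' "$sample" | awk '/Read:/ {print $2}')

# Assume the controller drops a member after ~8 seconds (80 tenths).
if [ "$read_erc" -le 80 ]; then
  echo "recovery capped at ${read_erc} tenths of a second: RAID-friendly"
else
  echo "recovery may outlast the controller's timeout"
fi
```

RAID-rated drives ship with ERC enabled out of the box; many desktop drives either reject the command or forget the setting at power-off, which is another reason matched, RAID-rated pairs behave better in mirrors.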
January 5, 2012 10:45:23 PM

Wow, good to know. I had heard that it is always best to use the same model of drive, but I didn't realize it was THIS important. Damn... I can't afford another terabyte drive now with the high prices. Looks like I'll just have to wait and set it up down the road.

Additionally, I apologize for all of the redundant postings. The website kept giving me errors every time I went to post, so I kept trying, and apparently I posted this thread multiple times. I can't figure out how to delete the others, either, as they don't show up in my profile.
January 6, 2012 3:55:13 PM

So it sounds like standard hard drives, same model or not, just aren't good for RAID applications? My main goal is backing up my data so I have a failsafe should my hard drive fail, which has happened to me before. Am I better off manually cloning my drive once a week? Is setting up a RAID 1 even worth it, or is it just going to cause me more problems?
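If you do go the manual-clone route, the weekly copy can be as simple as a one-line script. This is a minimal sketch using hypothetical temporary directories so it runs anywhere; in practice you would point `src` at your data, `dst` at the second drive, and schedule the copy weekly (Task Scheduler on Windows, cron elsewhere).

```shell
# Hypothetical stand-ins for your data directory and backup drive:
src=$(mktemp -d)
dst=$(mktemp -d)
echo "important data" > "$src/report.txt"

# Mirror the tree; cp -a preserves timestamps and permissions.
cp -a "$src/." "$dst/"

cat "$dst/report.txt"
```

Unlike a live mirror, a scheduled copy only runs once a week, so a controller timeout quirk on one drive can't take the whole "array" critical in between.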
January 6, 2012 5:37:59 PM

Lots of people use consumer-grade drives in RAID arrays, and 99% of the time it works fine. I just know certain types of drives tend to cause issues; WD Greens are one that comes to mind. Just make sure they're matched, and you're usually fine.