Hi Everyone,
I hope this forum is still pretty active. I have an issue that I've never run into before.
I have five 2TB drives in RAID 5, and the array has worked flawlessly over the 3-4 years I've had it. Drives have failed in the past, and I simply swapped them out with the same size and model number as before. The array resynced with no problem.
On 8/31/15 the drive in bay 5 failed. No big deal! I had to order a new 2TB drive because the replacement I had on standby was dead.
I have Seagates, model ST2000DM001, but I got a slightly updated model when I set up an RMA for the failed drive, most likely because they didn't have the exact model on hand. The replacement has the same specs from what I can tell.
While waiting for the replacement from Seagate to come in, I ordered and installed a WD20EFRX to get me over the hump. The plan was to keep the RMA'd Seagate as a spare for when another drive fails down the road.
The RAID started to rebuild normally. After a few days it hit about 70% and the array completely died. Screenshot here.
I received an email notification: "The RAID array is inactive."
All shares were offline and the drives showed as inactive. Checking the logs, I didn't see that any other drive had failed, so it didn't seem possible that the entire data set had been blown away. I thought, "It must be a glitch in the OS, right?"
I rebooted the NAS and it came back up normally. All the drives were online, the shares were back, and all the data was there. The resync restarted on drive 5. All good, right?
After waiting another few days, the exact same thing happened!
Thinking the replacement WD drive I had installed was bad, I took it out, replaced it with the Seagate RMA drive, and let it go.
Waited another few days and the same thing happened again. It seems to happen right at 69.3% complete. I was watching it this morning when it happened.
So it appears the RAID fails to rebuild at or around the same point with two different drives of the same capacity and speed. What could possibly be the issue?
I could buy another ST2000DM001, which will probably fail after a year and be out of warranty by then, and see if that makes a difference, but shouldn't I be able to mix and match different model numbers and brands as long as the drive size is the same? Any ideas/suggestions would be awesome and greatly appreciated.