RAID 5 Storage Using 5 ST8000DM004 Drives (8TB), DQ35JOE Motherboard (RAID Support from BIOS)

Apr 10, 2018
Hi,
I have 5 spare ST8000DM004 drives and an old DQ35JOE motherboard that works perfectly fine and supports RAID configuration.
If a hard drive fails, is the probability of losing all the data after a rebuild high? Could someone please give an estimate? Can these models be trusted for RAID 5?

PC Specs:
Intel Core 2 Quad Q9450 2.66GHz
8GB DDR2 800MHz (4 x 2GB)
120GB SSD (set on its own PCI controller to avoid any issues with the RAID configuration)
DQ35JOE Motherboard
EVGA 450 BT, 80+ Bronze 450W

The drives will be used to copy and keep media files (Blu-ray rips, music, family pictures), so essentially a media server.

Thank you for your time.
 

USAFRet (Moderator)
RAID 5 is specifically for surviving the failure of a single drive.
I've tested this specifically with my NAS box: 4 drives in RAID 5. The data survives a single drive removal or failure.

However... you still need a backup of the actual data. RAID of any type does not protect the data, only against drive failure.
The user and the OS see a single volume. Any data corruption, accidental deletion, virus, malware, or ransomware... and that data is gone.
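
To make the "survives one drive" claim concrete, here's a toy XOR-parity sketch in Python. It's not tied to any real controller (real RAID 5 distributes parity blocks across the drives), but the reconstruction math is the same:

```python
# RAID 5 parity is the XOR of the data blocks in each stripe, so any one
# missing block can be rebuilt by XOR-ing the surviving blocks together.
# Toy stripe across 4 "drives": 3 data blocks + 1 parity block.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = bytes(a ^ b ^ c for a, b, c in zip(*data))

lost = data.pop(1)  # the "drive" holding block 1 fails
rebuilt = bytes(x ^ y ^ p for x, y, p in zip(data[0], data[1], parity))
assert rebuilt == lost  # the missing block is recovered
print(rebuilt)  # b'BBBB'
```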
 
Apr 10, 2018


Thank you, sir, for the clarification. I'm not worried about data corruption, viruses, or deletion; the PC is on a different switch, powered from a UPS, with no internet connection. I'm just worried whether the rebuild procedure (when one drive fails) has a small or high probability of corrupting the data or causing another drive to fail, which would lose all the data. The total is 40TB raw minus 8TB for parity (5 x 8TB), so 32TB usable. Is it OK to use this model, the ST8000DM004?
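
To spell out that capacity arithmetic, a quick Python sketch (nothing drive-specific, just the numbers from this thread):

```python
# RAID 5 usable capacity: one drive's worth of space goes to parity.
drives = 5
drive_tb = 8  # ST8000DM004 capacity in TB

raw_tb = drives * drive_tb           # 40 TB raw
usable_tb = (drives - 1) * drive_tb  # 32 TB usable; 8 TB equivalent holds parity

print(f"raw: {raw_tb} TB, usable: {usable_tb} TB")
# raw: 40 TB, usable: 32 TB
```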
 

USAFRet (Moderator)


When I went from 4 x 3TB drives to 4 x 4TB drives, with approximately 6TB of data, the rebuild process took about 6 hours for each drive swap.
I swapped them one by one, just to test the functionality of doing that.

And as for "another error and all is lost"...
That's what the above-mentioned backup is for. Just in case.
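
For a rough sense of what a rebuild of a full 8TB member might take, here's a back-of-the-envelope sketch in Python; the 150 MB/s sustained rate is an assumption for illustration, not a measured figure for these drives:

```python
# Back-of-the-envelope rebuild time: data to reconstruct / sustained rate.
# The 150 MB/s rate is an assumption; an SMR drive can run much slower
# once its CMR cache fills during sustained writes.
data_tb = 8       # one full 8 TB member drive
rate_mb_s = 150   # assumed sustained sequential rate

seconds = data_tb * 1e12 / (rate_mb_s * 1e6)
print(f"~{seconds / 3600:.1f} hours")  # ~14.8 hours
```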
 
I wouldn't recommend using shingled magnetic recording (SMR) drives in any application that relies on heavy random writes. If you can guarantee that your RAID controller will only ever perform sequential writes, or low-volume random writes, the drives may perform adequately; otherwise the performance may be disastrous, and the misuse may cause premature wear on the drives.
 
Apr 10, 2018


Thank you for your answer. I won't edit on it or run any virtual machines on it; the drives will be used to copy and keep media files (Blu-ray rips, music, family pictures), so essentially a media server.

I'll watch them from the network using Debian.
 
That model is rated at one unrecoverable read error per 10^15 bits read.

https://www.seagate.com/www-content/product-content/barracuda-fam/barracuda-new/en-us/docs/100805918d.pdf

That works out to an error every 113.7 TB read, on average. If your 5x8TB array (32 TB usable) is half full (16 TB), then the odds of hitting an error during a rebuild, which would cause the rebuild to fail, are 16/113.7 = 1 in 7, or about 14%.

It should be noted that this rating is an order of magnitude. If you take the error rate as ranging anywhere from one error per 5x10^14 bits to one per 5x10^15 bits, that's one error every 56.85 to 568.5 TB, or 2.8% to 28% in the above hypothetical rebuild. (The actual failure rate will be somewhat lower due to a quirk of probability: there's a chance of multiple errors, which increases the chance of there being no errors at all. If the average rate is 1 error per 10^15 bits and you read 10^15 bits, the chance of zero errors is 1/e, about 36.8%, so the chance of one or more errors is about 63.2%. Not 100%.)
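
To put that paragraph into a formula: with an error rate of one per N bits and B bits read during the rebuild, the chance of at least one error is 1 - e^(-B/N), treating errors as independent (a Poisson model). A small Python sketch using the numbers above:

```python
import math

TIB = 2**40  # the 113.7 TB figure above is in binary terabytes

def p_at_least_one_ure(bytes_read, bits_per_error):
    """P(>=1 unrecoverable read error) under a Poisson model: 1 - e^(-B/N)."""
    expected_errors = bytes_read * 8 / bits_per_error
    return 1 - math.exp(-expected_errors)

bytes_read = 16 * TIB  # half-full array: 16 TB read during the rebuild
for bits_per_error in (5e14, 1e15, 5e15):  # spec sheet says 1 per 10^15 bits
    p = p_at_least_one_ure(bytes_read, bits_per_error)
    print(f"1 error per {bits_per_error:.0e} bits -> {p:.1%} rebuild failure")
# 1 error per 5e+14 bits -> 24.5% rebuild failure
# 1 error per 1e+15 bits -> 13.1% rebuild failure
# 1 error per 5e+15 bits -> 2.8% rebuild failure
```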

So yeah, you definitely need to keep a backup. Or, if downtime is not acceptable (the whole point of RAID), switch to RAID 10 or RAID 6. Personally, I switched to ZFS. When it encounters an unrecoverable error, it marks just that single block as unrecoverable, not the entire drive. That is, I lose a single file, not the entire array as with RAID.
 
The problem isn't just how you use the RAID array, but also how the RAID controller uses the underlying drives. Seagate's SMR drives have a buffer of undisclosed size so the drive can accept writes at full speed and mask the write-speed deficiencies of the SMR section; once this buffer is exceeded, performance can drop off and generally just varies wildly. There's no way to guarantee that different revisions of the SMR drives don't use different-sized buffers. Depending on the RAID controller, this might even raise red flags about the health of the drives, which could lead to strange or unwanted behavior from the array. If a single drive drops out of the array, the array will certainly continue to function, but with severely degraded performance, which is not exactly what you want with RAID 5, where you're already sacrificing performance.
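
If you want to see that buffer behavior for yourself before trusting the array, a crude write-throughput probe like the sketch below will show the sustained rate falling off once the drive's CMR cache region fills. It's Python; the mount point and sizes are placeholders you'd adjust for your own setup:

```python
# Crude probe for SMR cache exhaustion: write large chunks to a file on the
# drive under test and watch per-chunk throughput. Expect speed to drop
# sharply once the drive's CMR cache region fills.
import os
import time

PATH = "/mnt/testdrive/smr_test.bin"  # hypothetical mount point on the drive
CHUNK = 256 * 1024 * 1024             # 256 MiB per write
CHUNKS = 256                          # 64 GiB total; size it past the cache

buf = os.urandom(CHUNK)
with open(PATH, "wb") as f:
    for i in range(CHUNKS):
        t0 = time.monotonic()
        f.write(buf)
        f.flush()
        os.fsync(f.fileno())          # force the data out to the drive
        mb_s = CHUNK / (time.monotonic() - t0) / 1e6
        print(f"chunk {i:3d}: {mb_s:7.1f} MB/s")
```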

I think there would be zero issue in creating the array you hope to use, running it short-term with data you otherwise have backed up, and seeing whether it performs to your needs. If so, I would say it's probably fine. However, I would also make it a point to do as USAFRet has done and test the rebuild ability of the array: write some Blu-ray images to it, then force a rebuild and see what happens. It'll either go well or it won't.

I'm not against SMR drives; in fact, I use plenty of them personally for the cost-to-capacity ratio and haven't had any issues with them, but I make it a point to respect their pretty straightforward limitations.

It's been years since I used a RAID 5 array. Initially I liked the low storage-space overhead for the data safety, but the performance overhead eventually made me rethink my original decision. I was using the best Adaptec RAID 5 card I could get at the time, so it wasn't an issue of building the array on cheap hardware. Since then, nothing has really beaten the ease of use of single drives and simply duplicating the data you need backed up. I would certainly consider RAID 1 or 10 if I needed higher uptime or performance, but I fail to see the need for media storage in my own household. There's nothing I stand to lose that I can't get back from the original sources.
 
After looking at the point Solandri brings up concerning the URE rate of spinning-platter hard disks, I'm going to revise my opinion: even if a RAID 5 array of your 5x 8TB drives works fine, I wouldn't do it. I would use RAID 1, RAID 10, or just single drives that get backed up.

The 1-per-10^15 URE rate that Solandri points out is actually low-grade enterprise reliability, so a pretty reasonable rating for a consumer drive. But he also makes a good point that, even at that reliability level, the chances of a failed rebuild are high enough with RAID 5 that there is no good reason to ever choose RAID 5 over a different RAID method. RAID 5 ends up being both slower and less reliable than something like RAID 1 or 10. It may give a small amount more usable space, but the trade-offs are hardly worth it.
 
Solution