Which drive offers the best reliability and value for 4TB of RAID1 storage?

Eggz

Distinguished
Hi Tom's People,

Here is the filter of drives I've narrowed my choices down to (click for link to PC Part Picker list).

I have a build almost where I want it, and now I'm wrapping up some details to get my storage solution where it should be. My primary use is photography, but I definitely enjoy games, too; that's not relevant here, though. Losing my photo library would completely suck, and I want to set up a RAID1 for good fault tolerance. I also back up off-site, but I don't want to rely completely on that either, hence the RAID1.

Currently, I use a 2TB Barracuda for media storage. It's been great, but it's not in RAID1. If you could help me decide which pair of 4TB drives to get, I'd appreciate it. Refer to the link above for the drives.

Keep in mind that I've looked through the specs on each of the drives, so what will really help is highlighting why certain specs are better suited to my particular use, rather than just saying that drive A has feature x whereas drive B has feature y.

I'm not brand loyal when it comes to HDDs, but every WD drive I've ever owned has failed on me in a relatively short time. I've also never had a Seagate drive fail on me, and I've owned a lot of drives. As for Hitachi and Toshiba, I've never owned either.

One question I have is whether getting a non-desktop drive (e.g. NAS or enterprise) is worth it. The main difference seems to be the amount of cache (64 MB vs. 128 MB), but I'm not sure exactly how that affects media storage in RAID1, if at all.

Thanks in advance!

-Eggz

 
I would get WD Black drives if reliability is a primary consideration. I believe they may be the only consumer hard drives to have retained a five year warranty since the Thailand floods. I have three or four sets of these (in RAID1 pairs) in various systems I've built over the last few years, and none of them have failed or even hiccuped. I did get one DOA a year or so ago, but that can happen with anything and could have been the result of shipping damage.
 

TyrOd

Honorable
Aug 16, 2013
527
0
11,160
Well, the difference in reliability from brand to brand is going to be insignificant compared to making a backup copy.
RAID1 is not really a backup; it's there to maximize uptime. So if your primary concern is data protection and not uptime, then I would strongly suggest using the second drive as a local backup.

If you want to set up a RAID1 system with 4TB drives, NAS/enterprise drives are required, because the base chance of failing a rebuild with a 4TB RAID1 is ~25%. With NAS/enterprise drives it is 100x lower.

The difference between NAS and Enterprise drives in terms of failure rates is going to be far outweighed by the extra cost of enterprise drives.

The difference in cache, etc. is only going to affect performance.


Summary: RAID1 is "fault tolerance" to maximize uptime, not for data recovery. You'll need a local backup regardless of which drive you choose. If you do use RAID1, you'll want at least a NAS-class drive to avoid problems with rebuilds.
Any inexpensive NAS drive will do.
 
Solution

TyrOd

Honorable


This is true. The warranty is going to most closely reflect the failure rates. It's still not a substitute for a proper backup, so you've got to decide whether the extra cost of a Black drive is going to make up for the cost of replacing a Green drive.

I still wouldn't use a Black drive in RAID1, because the mechanical stability doesn't change the fact that it has a consumer-level UBER. A WD Red would be better for the RAID1 setup, with another drive as a dedicated local backup.
 
RAID0 and RAID5 pretty much require "enterprise-class" drives with Time Limited Error Recovery (TLER); RAID1, however, is a lot more forgiving, since data from multiple drives is not required to satisfy I/O requests.
It is also worth noting that I've got a pair of Seagate "LP" drives running in a RAID1 as well that have never hiccuped.
It is also true that RAID1 is not a substitute for backups. While it protects against [some] hardware failures, it does nothing for accidental deletions or overwrites, viruses, or other causes of data loss.
 


Per WD, you never want to use WD Greens or Blues in RAID, especially RAID 1/5. I have actually encountered this with a system with 8 WD Greens and they would randomly disappear from the RAID array. Even in RAID 1 I had them do that.

I personally stick with Seagate Barracudas (not ES or LP) and have never had one fail on me. I had two 120GB SATA gen 1 Seagates that I sold for $20 each when they were 8 years old, still good, as I tested them with 3 or 4 HDD utilities.

When I worked in a PC repair shop I saw plenty from every company but more from Toshiba and Hitachi than WD or Seagate.

As for the cache size, it will help keep reads/writes a bit faster, but not enough to warrant the increase in cost. I say if you've had luck with Seagates, keep using their 4TB HDDs. I have two of the 2TB drives in RAID0 right now; they've been running for about two years and are doing fine.
 

TyrOd

Honorable


It's not really TLER that's the issue; it's that the UBER for consumer drives is one unrecoverable error per 10^14 bits read, while for NAS/enterprise drives it's one per 10^16. That works out to roughly one error per 12TB read, which means the chance of failing a rebuild of a 4TB drive in RAID1 is ~25% with consumer drives.
That totally defeats the purpose of RAID1 if you have a 1-in-4 chance that the rebuild will fail and you'll need to restore from backup and set the array up again from scratch anyway.
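That ~25% figure can be sanity-checked with a quick sketch, assuming the consumer-class rating of one unrecoverable error per 10^14 bits and treating bit errors as independent (a simplification; real errors tend to cluster, which shifts the number somewhat):

```python
import math

UBER = 1e-14              # consumer-class spec: ~1 unrecoverable error per 1e14 bits
capacity_bits = 4e12 * 8  # a rebuild reads the surviving 4TB drive end to end

# Probability of at least one unrecoverable read error during the rebuild.
# log1p/expm1 keep the arithmetic accurate for such a tiny per-bit rate.
p_fail = -math.expm1(capacity_bits * math.log1p(-UBER))
print(f"rebuild failure chance: {p_fail:.1%}")  # roughly 27%
```

The exact value comes out near 27%, consistent with the "~25%" quoted in the thread; plugging in a 10^16 rating instead drops it to well under 1%.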
 
Hmmm, I've never had to rebuild a RAID1, ever. I do still recall weekends wasted rebuilding RAID5 arrays that used drives selected by the boss' boss. They were NOT TLER, and tended to lose a member almost as soon as they were stressed.
 

Eggz

Distinguished
Hey all. I appreciate the feedback so far. To start, though, I understand the difference between fault tolerance and backup. My RAID1 plan is only to up the fault tolerance, but thanks to those who pointed it out. I'm sure it will help other people who might come across this thread during research in the future.

As for the drive selection, up to now, I've been getting information that's at odds with some other information. First is the WD Black drives being used in RAID. I've heard of success with this, and Linus from NCIX has featured builds with Black RAID1s. He's given great advice in my time watching him - knows his stuff. But I've also seen places that Black drives aren't for RAID. That same advice is even in WD's materials on their drive lineup. But, as some here have pointed out, there are many types of RAID, so perhaps the warning against RAID with a Black drive is mainly for configurations other than RAID1 (e.g. 5, 6, or 0). This is still an open question for me.

Another thing is going too strictly by the classification of a drive. I mean, I'm sure there are some cheap NAS drives that are less reliable than very nice consumer drives. And I'm also pretty sure there's a diminishing return on paying for a server-class drive. Isn't the whole point of RAID1 to increase fault tolerance? That makes it hard for me to justify paying a large premium for an enterprise drive. The whole reason behind enterprise drives is to be more certain your drive will work. But mathematically, running a two-drive configuration like RAID1 should square the chance of fault, right? For example, if drive A has a 2% fault chance, there's only a 0.04% chance that two of those drives will both fail. That particular example increases reliability 50 times, which is more than the difference between a nice normal drive and an enterprise drive. For non-RAID, I'd totally think server-class is worth the reliability boost, but RAID1 seems to offset that.
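The arithmetic in that example checks out as far as it goes; here it is as a one-line sketch (the 2% figure is hypothetical, and note the key assumption that the two failures are independent, which a shared rebuild window or power event can violate):

```python
# Hypothetical per-period failure rate, just to illustrate the argument.
p_single = 0.02            # 2% chance one drive fails
p_both = p_single ** 2     # both drives failing, assuming independence
print(f"{p_both:.2%}")                           # 0.04%
print(f"improvement: {p_single / p_both:.0f}x")  # 50x
```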

As for cache, I don't really care that much about speed. My photo library runs on a database format with previews on a very fast SSD, so the speed of fetching originals from media storage won't matter. That's good, because going with a lower-cache drive will decrease cost.

One thing I totally haven't read up on is the chance that a RAID1 rebuild will fail. What variables go into figuring that out?

While I don't yet have enough to arrive at a choice, I am able to cross certain things off the list. Particularly, I will avoid WD's green drives, as well as all "desktop" or "general use" drives.

Thanks again, and I hope to see the feedback coming. It's been very helpful.
 

TyrOd

Honorable


A Black drive is in general less likely to fail than a Red drive with its 3-year warranty, but the Red drive is less likely to cause downtime when one drive fails and you swap in a replacement.

The reason for this is that the Red series has a UBER rating 100x better than the Black series.

UBER stands for Unrecoverable Bit Error Rate: it tells you how many bits can be read, on average, before there is a read error that stops the rebuild process.
The UBER for consumer drives is one error per 10^14 bits, which is about one error per 12TB read. That means rebuilding a 4TB volume onto a WD Black drive has a ~25% probability of encountering an error, halting the rebuild, and creating more downtime.

The Red series and above have a UBER of one error per 10^16 bits, making a rebuild failure 100x less likely.
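Plugging both ratings into the same rebuild-failure estimate makes the gap concrete (a sketch under the same independent-bit-error assumption; the 10^14 and 10^16 figures are the spec-sheet ratings discussed in this thread):

```python
import math

def p_rebuild_fail(bits_per_error, capacity_tb=4):
    """Chance of >=1 unrecoverable read error while reading the
    surviving drive end to end during a RAID1 rebuild."""
    bits = capacity_tb * 1e12 * 8
    return -math.expm1(bits * math.log1p(-1.0 / bits_per_error))

print(f"consumer (10^14): {p_rebuild_fail(1e14):.1%}")   # ~27%
print(f"NAS/ent. (10^16): {p_rebuild_fail(1e16):.2%}")   # ~0.32%
```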

Enterprise drives give you the best of both worlds at a premium cost.

IMO it is better to have to replace the Red drives somewhat more often than to run a higher chance of downtime when a Black drive fails.

 

popatim

Titan
Moderator


Actually, the NAS drives are spec'd identically to the desktop drives; with WD, the non-recoverable bit error rating is less than 1 in 10^14 for both. You don't see 10^16 until you go to the enterprise-class drives. This spec may be reflective of the platters themselves and not the quality of the hardware in the drive. That's the only thing I can think of that would make a Black that much more expensive than a Blue, Green, or Red.

Having two drives fail in two years is rather odd. I have many drives from many manufacturers, and my last failure was two WD Greens that got submerged when my basement flooded about 3 years ago. Oddly, the other four drives survived fine, along with the rest of the server. It's all still up and running to this day.
<Here's hoping I didn't just jinx myself>
LoL

 

Eggz

Distinguished
Hmm. So what happens when a RAID1 rebuild fails, other than more downtime while I screw around with the thing? Does that mean I just have to start it over, or do I actually lose all of the data? I don't mind starting the process over. I kind of expect it to be a pain in the ass, but since it shouldn't happen very often, I'm willing to just suck it up. If that's the case, then maybe the Black is okay for my use; otherwise, I might need to look into this UBER 10^x statistic when shopping for drives.
 
The purpose of a RAID1 is always redundancy. It can operate with one drive, but in a degraded state. If a rebuild fails, you can try a low-level format on the new HDD and try again; the data stays on the good drive. What it means for a rebuild to fail is that the controller was unable to add the new drive to the RAID array and replicate the data to it.

Another option is removing the RAID, moving to AHCI, and then trying to create the RAID again, but that can cause data loss depending on the RAID controller you are using.
 

Eggz

Distinguished


Cool, so that means I can just try again if the rebuild fails, right? If so, then I'll probably be fine with a good drive of less than enterprise quality. No need to be up and running 24/7. Overnight rebuild attempts are fine. Thanks!
 

TyrOd

Honorable


Yes, then you'll be fine with non-enterprise drives.
 


Yes, you will be fine with non-enterprise drives, but I would stick to either WD Blacks or Seagate Barracudas (not ES or LP), as they will give better results.
 

Eggz

Distinguished


Great! One more question about a motherboard setting, if anyone knows. Right now I have my SATA set to AHCI, and I'll obviously have to set it to RAID when I set up the drives in RAID1. However, I like that AHCI lets my SSD control the information flow, which I think goes away when I select RAID. Is there a way to let my SSD continue to control itself but use my motherboard's RAID setting for the HDDs?
 

popatim

Titan
Moderator
When a RAID1 rebuild fails, it's usually for one of two reasons: the replacement drive is bad, or the remaining (good) drive is now failing as well.

A rebuild is a fairly stressful time for an HDD, you see, so a drive on the verge of failing might get pushed over the edge. This is just one more reason that no form of RAID is a substitute for having a backup.
A rebuild is a fairly stressful time for an HDD you see. So if a drive is on the verge of failing might get pushed over the edge. This is just one reason that no form of raid is a substitute for a having a backup.