
Solved Forum question

Started by Eggz | 18 answers
Which drive offers the best reliability and value for 4TB of RAID1 storage?
Hi Tom's People,

Here is the filter of drives I've narrowed my choices down to (click for link to PC Part Picker list).

I have a build almost where I want it, and now I am wrapping up some details to get my storage solution where it should be. My primary use is photography, but I definitely enjoy games. That's not relevant here, though. Losing my photo library would completely suck, and I want to set up a RAID1 for good fault tolerance. I also back up off-site, but I don't want to rely entirely on that either, hence the RAID1.

Currently, I use a 2TB Barracuda for media storage. It's been great, but it's not RAID1. If you could please help me decide which pair of 4TB drives to get, I would appreciate it. Refer to the link above for the drives.

Keep in mind that I've looked through the specs on each of the drives, so what will really help is highlighting why certain specs are better suited for my particular use, rather than just saying that drive A has feature x whereas drive B has feature y.

I'm not brand loyal when it comes to HDDs, but I've also had every WD drive I've ever owned fail on me in a relatively short time. I've never had a Seagate drive fail on me, and I have owned a lot of drives. As for Hitachi and Toshiba, I've never owned either.

One question I have is whether getting a non-desktop drive (e.g. NAS or enterprise) is worth it. The main difference seems to be the amount of cache (64 MB vs 128 MB), but I am not sure exactly how that affects media storage in RAID1, if at all.

Thanks in advance!

-Eggz


April 20, 2014 10:44:39 AM

When a RAID1 rebuild fails, it's usually for one of two reasons: the replacement drive is not good, or the remaining (good) drive is now failing as well.

A rebuild is a fairly stressful workload for an HDD, you see, so a drive on the verge of failing might get pushed over the edge. This is just one reason that no form of RAID is a substitute for having a backup.
April 19, 2014 9:40:04 PM

Is it an Intel board? If so, I know for sure that Intel's chipsets in RAID mode fully support AHCI features, so it won't make a difference. That is how I have my system set up: in RAID, with my SSD as the OS drive and two 2TB Seagates in RAID0 (I prefer the performance).
April 19, 2014 8:22:04 PM

jimmysmitty said:
Yes, you will be fine with non-enterprise drives, but I would stick to either WD Blacks or Seagate Barracudas (not ES or LP), as they will give better results.


Great! One more question about a motherboard setting, if anyone knows. Right now, I have my SATA set to AHCI and will obviously have to set it to RAID when I set up the drives in RAID1. However, I like that AHCI lets my SSD control the information flow, which I think goes away when I select RAID. Is there a way to have my SSD continue to control itself but use my motherboard's RAID setting for the HDDs?
April 18, 2014 2:02:41 PM

Eggz said:
jimmysmitty said:
The purpose of a RAID1 is always redundancy. It can operate with 1 drive but will be degraded. If it fails you can try a low level format on the new HDD and try again as the data will stay on the good drive. What it means to fail is that it was unable to add the new drive into the RAID array and replicate the data to the new drive.

Another option is removing the RAID and moving to AHCI and then trying to create the RAID again but that can cause data loss depending on the RAID controller you are using.


Cool, so that means I can just try again if the rebuild fails, right? If so, then I'll probably be fine with a good drive of less than enterprise quality. No need to be up and running 24/7. Overnight rebuild attempts are fine. Thanks!


Yes, you will be fine with non-enterprise drives, but I would stick to either WD Blacks or Seagate Barracudas (not ES or LP), as they will give better results.
April 18, 2014 9:38:25 AM

Eggz said:
jimmysmitty said:
The purpose of a RAID1 is always redundancy. It can operate with 1 drive but will be degraded. If it fails you can try a low level format on the new HDD and try again as the data will stay on the good drive. What it means to fail is that it was unable to add the new drive into the RAID array and replicate the data to the new drive.

Another option is removing the RAID and moving to AHCI and then trying to create the RAID again but that can cause data loss depending on the RAID controller you are using.


Cool, so that means I can just try again if the rebuild fails, right? If so, then I'll probably be fine with a good drive of less than enterprise quality. No need to be up and running 24/7. Overnight rebuild attempts are fine. Thanks!


Yes, then you'll be fine with non-enterprise drives.
April 18, 2014 5:44:55 AM

jimmysmitty said:
The purpose of a RAID1 is always redundancy. It can operate with 1 drive but will be degraded. If it fails you can try a low level format on the new HDD and try again as the data will stay on the good drive. What it means to fail is that it was unable to add the new drive into the RAID array and replicate the data to the new drive.

Another option is removing the RAID and moving to AHCI and then trying to create the RAID again but that can cause data loss depending on the RAID controller you are using.


Cool, so that means I can just try again if the rebuild fails, right? If so, then I'll probably be fine with a good drive of less than enterprise quality. No need to be up and running 24/7. Overnight rebuild attempts are fine. Thanks!
April 17, 2014 8:20:10 PM

The purpose of a RAID1 is always redundancy. It can operate with 1 drive but will be degraded. If it fails you can try a low level format on the new HDD and try again as the data will stay on the good drive. What it means to fail is that it was unable to add the new drive into the RAID array and replicate the data to the new drive.

Another option is removing the RAID and moving to AHCI and then trying to create the RAID again but that can cause data loss depending on the RAID controller you are using.
April 17, 2014 7:43:11 PM

Hmm. So what happens when a RAID1 rebuild fails, other than more downtime while I screw around with the thing? Does that mean I just have to start it over, or do I actually lose all of the data? I don't mind starting the process over. I kind of expect it to be a pain in the ass, but since it doesn't happen very often, I'm willing to just suck it up. If that is the case, then maybe the Black is okay for my use; otherwise, I might need to look into this UBER 10^x statistic when shopping for drives.
April 17, 2014 4:11:43 PM

TyrOd said:

If you want to set up a RAID1 system with 4TB, NAS/enterprise drives are required, because the base chance of failing a rebuild with a 4TB RAID1 is ~25%. With NAS/enterprise drives it is 100x lower.


Actually, the NAS drives are spec'd identically to the desktop drives; with WD, the non-recoverable bit error rate is less than 1 in 10^14. You don't see 10^16 unless you go to the enterprise-class drives. This spec may reflect the platters themselves and not the quality of the hardware in the drive. That's the only thing I can think of that would make a Black that much more expensive than a Blue, Green, or Red.

Having 2 drives fail in two years is rather odd. I have many drives from many manufacturers, and my last failure was two WD Greens that got submerged when my basement flooded about 3 years ago. Oddly, the other 4 drives survived fine along with the rest of the server. It's all still up and running to this day.
<Here's hoping I didn't just jinx myself>
LoL
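
As a quick back-of-the-envelope check (not from the thread itself), here is how that UBER spec converts into "average data read per error", assuming the spec is an exact per-bit rate; the helper name `uber_to_tb` is made up for illustration:

```python
# Convert a drive's UBER spec (unrecoverable errors per bit read)
# into the average amount of data readable before one error occurs.
def uber_to_tb(uber):
    bits_per_error = 1.0 / uber        # expected bits read per error
    return bits_per_error / 8 / 1e12   # bits -> bytes -> terabytes (10^12 bytes)

print(uber_to_tb(1e-14))  # consumer spec: ~12.5 TB per error
print(uber_to_tb(1e-16))  # enterprise spec: ~1250 TB per error
```

That ~12.5 TB figure is where the "about 12TB" number later in the thread comes from.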

April 17, 2014 2:53:48 PM

Eggz said:
Hey all. I appreciate the feedback so far. To start, though, I understand the difference between fault tolerance and backup. My RAID1 plan is only to up the fault tolerance, but thanks to those who pointed it out. I'm sure it will help other people who might come across this thread during research in the future.

As for the drive selection, up to now I've been getting information that's at odds with other information. First is WD Black drives being used in RAID. I've heard of success with this, and Linus from NCIX has featured builds with Black RAID1s. He's given great advice in my time watching him - knows his stuff. But I've also seen in places that Black drives aren't for RAID. That same advice is even in WD's materials on their drive lineup. But, as some here have pointed out, there are many types of RAID, so perhaps the warning against RAID with a Black drive is mainly for configurations other than RAID1 (e.g. 5, 6, or 0). This is still an open question for me.

Another thing is going too strictly by the classification of a drive. I mean, I am sure there are some cheap NAS drives that are less reliable than very nice consumer drives. And I am also pretty sure there is a diminishing return on paying for a server-class drive. Isn't the whole point of RAID1 to increase fault tolerance? That makes it hard for me to justify paying a large premium for an enterprise drive. The whole point of enterprise drives is to be more certain your drive will work. But mathematically, running a two-drive configuration like RAID1 should square the chance of fault, right? For example, if drive A has a 2% fault chance, there is only a 0.04% chance that both of those drives will fail. That particular example increases reliability 50 times, which is more than the difference between a nice normal drive and an enterprise drive. For non-RAID, I'd totally think server-class is worth the reliability boost, but RAID1 seems to offset that.

As for cache, I don't really care that much about speed. My photo library runs on a database format with previews on a very fast SSD, so the speed of fetching originals from the media storage won't matter. That's good because it will decrease cost by going with a lower cache drive.

One thing I totally haven't read up on is the chance that a RAID1 rebuild will fail. What variables go into figuring that out?

While I don't yet have enough to arrive at a choice, I am able to cross certain things off the list. Particularly, I will avoid WD's green drives, as well as all "desktop" or "general use" drives.

Thanks again, and I hope to see the feedback coming. It's been very helpful.
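
The squaring argument in the quoted post can be sanity-checked in a couple of lines of Python. The 2% figure is the hypothetical from the post, and this assumes the two drives fail independently, which things like a shared power supply or a flood can violate:

```python
p_single = 0.02               # hypothetical fault chance of one drive (from the post)
p_both = p_single ** 2        # chance both drives fail, assuming independent failures
improvement = p_single / p_both

print(p_both)       # ~0.0004, i.e. 0.04%
print(improvement)  # ~50x more reliable than a single drive
```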


A Black drive is in general less likely to fail than a Red drive with its 3-year warranty, but the Red drive is less likely to cause downtime when one drive fails and you swap in a replacement.

The reason for this is that the Red series has an UBER rating 100x better than the Black series.

The UBER is the Unrecoverable Bit Error Rate: the number of bits that can be read, on average, before there is a read error that stops the rebuild process.
The UBER for consumer drives is 10^-14, which works out to about 12 TB. That means rebuilding a 4TB volume onto a WD Black drive has a ~25% probability of encountering an error, halting the rebuild process and creating more downtime.

The Red series and above have an UBER of 10^-16, making it 100x less likely to fail during a rebuild.

Enterprise drives give you the best of both worlds, at a premium cost.

IMO it is better to have to replace Red drives somewhat more often than to face a higher chance of downtime when a Black drive fails.
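
The ~25% figure can be reproduced with a short sketch. It assumes every bit of the surviving drive is read once during the rebuild and that the UBER spec is an exact, independent per-bit error rate; real drives are messier, so treat the numbers as rough:

```python
def rebuild_failure_prob(capacity_tb, uber):
    """Chance of hitting at least one unrecoverable read error while
    reading the entire surviving drive during a RAID1 rebuild."""
    bits = capacity_tb * 1e12 * 8           # drive capacity in bits
    return 1 - (1 - uber) ** bits           # P(at least one bad bit)

print(rebuild_failure_prob(4, 1e-14))  # consumer UBER: ~0.27, i.e. the ~25% above
print(rebuild_failure_prob(4, 1e-16))  # enterprise UBER: ~0.003
```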
