I'm currently using 8 green-rated drives across two NAS units (2x Netgear ReadyNAS Ultra 4 Pro) and just realized my drives have a very high "load cycle count", especially the Western Digital ones.
I remember reading articles a year or two ago saying WD drives weren't recommended for use in a NAS with RAID... I've also read that you had to change a firmware setting to solve the problem with high load cycle counts.
I've never had problems with my drives, but I'm still worried about the load cycle count quickly exceeding the limit the drives are designed for...
Here are the SMART results from some of my Green drives... This doesn't look normal.
The Samsungs and Seagates are doing fine and don't seem to be affected by the problem: for example, one of my Seagate 2TB drives has been running for 1661 hours and only has a load cycle count of 63... my other Seagate drives show similar results.
One of my Samsung 2TB drives has a load cycle count of 248 after 46 hours.
But worst of all are definitely the Western Digital Caviar Green drives:
WD 3TB Green #1: 71315 load cycles in 893 hours of use
WD 3TB Green #2: 66828 load cycles in 800 hours of use
WD 3TB Green #3: 26181 load cycles in 554 hours of use
WD 3TB Green #4: 14170 load cycles in 466 hours of use
WD 2TB Green: 16137 load cycles in 2102 hours of use
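To see how alarming those numbers are, here's a quick back-of-the-envelope projection for the worst drive above, using the ~300k lifetime load/unload rating commonly quoted for these drives (an assumption; check your model's datasheet for the real spec):

```shell
#!/bin/sh
# Projection for the worst drive above: 71315 load cycles in 893
# power-on hours, against an assumed ~300k lifetime rating.
cycles=71315
hours=893
rating=300000

rate=$(( cycles / hours ))                   # cycles per power-on hour
hours_left=$(( (rating - cycles) / rate ))   # hours until the rating is hit
echo "$rate cycles/hour, roughly $hours_left hours until $rating"
```

At ~79 cycles per hour, that drive would hit the 300k figure in under 3000 more power-on hours, about four months of 24/7 operation, which is why the WD numbers stand out next to the Seagates and Samsungs.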
Oh, and no, don't tell me to buy WD Enterprise drives certified for RAID that cost $500 per 2TB drive -_-
The load cycle count is how many times the heads were parked and then activated again. The drives are rated for something like 300k cycles over their lifetime. The problem with those drives is that the heads park themselves after something like 8 seconds of inactivity to conserve power, and therefore constantly go back and forth between active and parked during moderate usage.

Check out the link below; it provides a utility to modify the idle time required before the heads park. I believe they suggest 5 minutes, but that really depends on your usage patterns.

I have 4x 1TB Greens in RAID5 in a NAS and I've been thinking about using the utility for a while now, but haven't yet, since I'd have to pull the NAS apart (not hot-swap) and attach the drives individually to a computer to update them. I think the safest way to do it is to take one drive at a time, update it, put it back in, power on, verify the data and the drive state, then move on to the next one. That way, if one should fail, your data is still available through the parity calculations.

Good luck, let me know how things work out if you try it.
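For reference, here's how I'd check the relevant SMART attribute with smartctl (from smartmontools) and adjust the timer with idle3-tools on Linux. I'm assuming the utility in question is a WD idle3 timer tool; the raw timer values and whether your drive supports them are assumptions, so double-check the tool's documentation before writing anything:

```shell
# Read the current load cycle count (SMART attribute 193);
# replace /dev/sdX with your actual drive.
smartctl -A /dev/sdX | grep -i Load_Cycle

# idle3-tools can read and change the WD idle3 (head-park) timer:
idle3ctl -g /dev/sdX       # show the current raw timer value
idle3ctl -s 138 /dev/sdX   # raw value 138 should be about 5 minutes
idle3ctl -d /dev/sdX       # or disable head parking entirely
# The drive must be fully powered off and back on for the change to apply.
```

These commands need root, and since they poke at drive firmware settings, doing it one drive at a time and verifying the array afterwards, as described above, is the sensible approach.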