i currently have a Promise ST EX8650 RAID controller with four 1TB drives (WD10EADS) in a RAID5 configuration, giving me a 3TB logical drive. i've had this setup for 3 to 4 years with no issues, but it's starting to fill up so it's time for a bigger array.
the whole point of having a controller capable of hosting eight drives is so that i could use only four slots per array; this makes migrating data from the old array to the new one super easy, so just adding more 1TB drives to my current array isn't what i want to do. promise support has informed me that 2TB is the maximum physical drive size my controller will accept, so unfortunately 3TB drives are out (i've confirmed this by plugging a 3TB drive in; the controller only saw 2TB). and spending $500+ on a new controller capable of hosting 3TB drives isn't a consideration. so i'm stuck with buying 3 or 4 new 2TB drives for a new RAID5 array (nothing to complain about).
i'm not willing to purchase drives from any online retailer; i'm wary of having mechanical hard drives shipped.
price isn't a big concern (though $200 enterprise HDDs are out), and throughput isn't a big concern; but power efficiency is a concern, and reliability is a concern.
the seagate and the WD both use Advanced Format; would that be an issue in a RAID? my current 1TB drives don't use AF so i'm unsure. i can't find much info on the hitachi; does it use AF too?
which one has the lowest idle power consumption? (tom's graphs seem to be outdated)
in my personal experience i've never had a problem with a single WD HDD, and i have no experience with hitachi drives, but i've had seagates die on me. as such i haven't bought a seagate in the last 2 years; have they gotten 'better'? should i be wary of any brand these days?
all in all, which of the 3 is the best for my use?
i'll likely be stopping to pick them up today on my way home, so any advice given tomorrow will still be appreciated but probably too late.
thanks in advance!
The biggest problem is that hard drives configured for consumer use disagree with RAID controllers. I'm desperately searching for the correct term; if I find it, I'll update this post.
If the hard drive encounters an error, it will spend a certain amount of time trying to recover before it times out and reports the error. On consumer drives, this is useful. In RAID arrays, the controller is expected to handle this functionality and expects the drive to report the error quickly. The controller sees the long interval with no response from the drive as a failure and marks the drive as failed.
Enterprise-grade drives meant for RAID are set to cooperate with this by default. On some drives, the parameter can be set with software from the vendor.
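On drives that expose this through SCT (SMART Command Transport), the recovery timeout can also be inspected and changed generically with smartmontools' smartctl, not just vendor tools. A minimal sketch, run as root; `/dev/sda` is a placeholder device name, and not every consumer drive supports the command:

```shell
# Query the current SCT Error Recovery Control (ERC) timeouts.
# Values are in deciseconds (100 ms units); "Disabled" means the drive
# will retry for a long time on errors, which is what makes a RAID
# controller mark it as failed.
smartctl -l scterc /dev/sda

# Set both read and write recovery timeouts to 7.0 seconds
# (70 deciseconds), a value commonly used on RAID-oriented drives.
smartctl -l scterc,70,70 /dev/sda
```

If the drive doesn't support SCT ERC, smartctl will say so rather than silently succeed.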
That said, I wouldn't build a RAID array with Green drives. They get green, in part, by spinning more slowly, so they have slower transfer rates. The only drive in the above list that I would put in an array is the Hitachi.
I know this thread is old, but this is just to share my experience and answer questions:
I have the Promise EX8650 model, with 256MB cache, under Win7 x64.
My motherboard is an Asus NVIDIA 650i-based P5N32E-SLI from 2006, with a Core 2 QX6850 @ 3.33GHz, the latest BIOS, and 8GB RAM. Crucial M4 SSD as the system drive.
I am currently running 4x3TB Toshiba drives (DT01ACA300), 7200 RPM, 64MB cache, SATA 6.0Gb/s, set up in a RAID 5 array, for an effective 9TB drive.
You can't boot from it as-is on a BIOS-only machine, because of the single 9TB logical drive...
But you could easily create a <2TB logical drive to permit booting, and use the rest (~7TB) as another data drive. I already did that on a previous 3TB installation on the same controller without problems.
My 9TB drive is very stable and well recognized under Win7. Absolutely no issues.
350MB/s average transfer rate (HD Tach).
My EX8650 specs are:
- Revision A3 (late 2008)
- BIOS 3.00.0000.95 (firmware SR4.2)
- Drivers 5.01.0000.04
I bought the same Toshiba drives to use with my Adaptec 3405 controller after having issues with WD Green drives dropping out of the array. I only finished rebuilding the array this morning, so I'm hoping this is sorted now.
These drives come with the SCT timeout setting, but it's disabled by default. You can use smartctl to change it, but I cannot find a way to access the drives behind the controller in Windows.
How did you rebuild the array after changing the drives? Did you first copy all the data out? Or did you use an additional spare controller?
If you're sure SCT can be activated on the Toshiba drives, I think this could be done by:
1) unplugging one drive at a time,
2) plugging that drive into a Linux PC (one that won't try to repair it or mess it up),
3) running smartctl and activating SCT,
4) plugging the drive back into the RAID adapter,
5) booting and checking whether anything went wrong with the RAID 5 array. The worst case would be rebuilding the array...
6) after checking that all is OK, repeating steps 1-5 for the remaining drives...
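The smartctl part of the steps above might look like this on the Linux PC, assuming the drive shows up as `/dev/sdb` (an assumption; check with `lsblk` first). One caveat: on many drives the SCT ERC setting is volatile and resets on power-cycle, so it may not survive the move back to the RAID adapter; verify after a reboot before trusting it.

```shell
# Step 3 on the Linux PC: first check whether the drive supports
# SCT ERC at all, and what the current timeouts are.
smartctl -l scterc /dev/sdb

# If supported, enable a 7-second recovery timeout for reads and
# writes (the value is given in deciseconds, so 70 = 7.0 s).
smartctl -l scterc,70,70 /dev/sdb

# Verify the new setting took effect before unplugging the drive.
smartctl -l scterc /dev/sdb
```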
Personally, I think I'll live without the SCT timeout...
I don't want to take risks trying to activate SCT, then lose a drive and spend the night rebuilding...