I just received a Synology DS1010+ NAS and plan to populate it with 5x 1TB drives in a RAID 5 config.
I am interested in comments on which 1 TB drive will be best for our application. We store lots of video, photo, 3D modeling and other mid- to large-sized files on our server. Occasionally I write a directory of small files to the server. 1-4 users will access this server all day long.
I have read many reviews of various disks and am very familiar with the Synology NAS compatibility list. Few reviews here and elsewhere benchmark drives in multi-user scenarios where the drives must respond to many requests at once. Which drives do forum members have the best experience with in this capacity and application?
I see that the Caviar Black drives are listed as compatible, but I also see many posts on the net about TLER problems down the road. Is there something about Synology firmware that prevents this issue from causing problems?
I have a Synology DS509+ running 5 Samsung EcoGreen 1.5 TB (http://bit.ly/a4D4Ux) drives. Sequential read performance is OK, but it is very slow when I write lots of small files or when multiple users access it at the same time. I want to avoid this problem on our new server.
Most, if not all, of the ECO/green/energy-saving series of hard drives spin at only 5400 rpm, and that could be a reason for the slow small-file and multi-user performance.
Since I don't see a hardware RAID controller listed on the Synology DS509+ data sheet, I believe the RAID implementation is software RAID, which would also contribute to slow reads/writes. The DS1010+'s dual-core processor (instead of a single core) may help offset the CPU and RAM cycles that software RAID consumes and improve read/write speeds.
As far as drives go, any 7200 rpm drive with at least 32 MB of cache, such as the Samsung Spinpoint F3 HD103SJ or the Western Digital Caviar Black WD1001FALS, should perform better than a 5400 rpm drive with the same cache.
Another thing to keep in mind is that you are dealing with consumer-class 3.0 Gb/s SATA drives, which have inherent speed limits. To eliminate possible bottlenecks, perhaps you need to look into getting SSDs instead of magnetic drives, a NAS with a dedicated hardware RAID controller, or even a NAS that supports SAS or SCSI drives.
It's probably that box's internal RAID 5 that is slow. Sequentially, 5400 rpm drives are not even that much slower than 10k rpm drives; the difference is small. Spindle speed matters when you do random I/O or access small files with multiple users. But for storing large files, 5400 rpm disks are the preferred and logical choice.
If you replaced the NAS with a real PC running an advanced operating system (Linux, BSD, OpenSolaris), you should get much better performance even though it's software RAID (software RAID is superior to hardware RAID in many ways).
Also check out whether ZFS is something that suits your needs. It would be a lot safer to run ZFS + RAID-Z than just a poor RAID 5 implementation. With 5 disks you should be getting at least 400 MB/s reads.
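For anyone curious what that looks like in practice, here is a minimal sketch of creating a single-parity RAID-Z pool from five disks. The device names and pool name are hypothetical; adjust them for your system:

```shell
# Create a single-parity RAID-Z pool named "tank" from five disks
# (BSD-style device names shown; on Linux use e.g. /dev/sdb ... /dev/sdf):
zpool create tank raidz ada0 ada1 ada2 ada3 ada4

# Verify the pool layout and health:
zpool status tank

# ZFS checksums every block, so silent corruption is detected;
# run a periodic scrub to find and repair it:
zpool scrub tank
```

One parity disk is sacrificed, like RAID 5, but ZFS closes the RAID 5 write hole and self-heals corrupted blocks from parity.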
When I bought my NAS (diskless version), the models with disks pre-installed came with 7200.12s, and the other units I looked at came the same way. I'm guessing the main reason is that they have the lowest sound level and running temperature of any comparable 500 GB per platter "non-green" drive.
Check out the performance charts and pick whatever 500 GB per platter drive performs best under your usage patterns. The WD Black 2 TB is a good choice, but at smaller capacities you are limited to the Seagate 7200.12 or the Spinpoint F3. In single-disk configurations, the 7200.12 excels in gaming, multimedia and pictures, whereas the F3 wins at music and movie maker. See the comparisons here (copy and paste the link in manually; the link won't work in the forum):
At this point the only option seems to be something like the WD RE3 1 TB drive, which is $60 more expensive per drive than the Caviar Black ONLY because it has TLER turned on. This is an especially annoying move by WD. I may just have to buy these.
With 4 people accessing these drives, I don't think the typical file-server benchmarks (hundreds of I/O requests per second) are applicable to the kind of use they will see in your installation. Why not pop over to the forums for your RAID hardware and throw out a question there? I have 7 machines in my home office, no more than 3 typically in use at any one time, accessing CAD files while my kids are accessing music and video files, and my I/O is very, very low.
You want either RAID Edition drives or, if you don't want to pay the extra cost, WD Caviar Black drives with the TLER utility run on them to set the Time Limited Error Recovery bit.
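On drives that expose SCT Error Recovery Control, you can also inspect and set the same timeout from a running Linux system with smartctl, without WD's DOS utility. A sketch, assuming /dev/sda is one of the array members and your drive/firmware supports SCT ERC:

```shell
# Show the current SCT Error Recovery Control (ERC) read/write timeouts:
smartctl -l scterc /dev/sda

# Set both read and write timeouts to 7.0 seconds
# (values are in tenths of a second, so 70 = 7.0 s, a typical RAID value):
smartctl -l scterc,70,70 /dev/sda
```

Note that on many desktop drives the setting does not survive a power cycle, so it has to be reapplied at every boot (e.g. from an init script).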
I recently had a 4 TB RAID-10 set up as a Linux software RAID on those drives (eight 1 TB drives, all WD Black). It handled seven visual effects workstations loading large scene files, rendering frames, and viewing frame sequences from the array over the network. The gigabit LAN was the bottleneck.
Now, whether the NAS box you put them in can handle the throughput, is another question... I would shop for those VERY carefully. Most, and I do mean most, of them fool you with a nice LCD panel and a 4 or 5 drive hot swap bay, but their RAID hardware is pathetically underpowered. I've seen 4-drive NAS stations get reviews claiming 2 to 5 MB/s throughput with a single user transferring a single large file. There is the occasional gem that gets reviews stating reasonable performance but I don't remember at the moment what brand that was.
You could probably build a "real" computer with good-enough RAID hardware for what it would cost you to buy one integrated solution, throw it away because it doesn't work, and buy another.
Side note: The processor usage on that 8-drive RAID-10 was not insignificant. It was running on dual 3GHz P4 Xeons, and frequently brought the processor up to the 80% mark or so during heavy accesses. It never flatlined at 100%. (Compare that CPU capacity to what is probably in the cheap integrated RAID boxes.)
I am aware of the issue regarding NAS boxes vs "real computers" and the tradeoffs. For us the increased config and management costs of a Linux beige-box RAID do not seem worth it. I realize I am overpaying for NAS hardware, but the reliable built-in firmware, permissions management tools and utilities make it worth it for a small shop like ours.
If you are aware of a good intermediate solution I would love to know about it. I have spent time before configuring CentOS boxes and perceive it as a major time investment, and I have never even touched RAID on Linux.
I believe Synology is one of the brands that gets good reviews. It sounds like you've done as much or more shopping than I have. If the manufacturer of the RAID box says you don't have to turn on TLER, I trust that. It all boils down to how long the RAID controller will wait if it can't access a drive and Synology would have that information. I imagine consumer and small business NAS boxes would be more tolerant with timeouts but I don't know for sure, so I try to spread the word about the TLER bit because it avoids a potential problem that people don't often consider.
CentOS is exactly what I used for this recent storage. I tried Fedora and it was an abysmal failure. It was somewhat problematic to get running at first, but was very good after that. I was able to partition the drives and do different types of RAID across the different partitions, which was nice. I had about 15% of each drive partitioned off and those striped into a RAID-0, with the remaining 85% partitioned separately and put into a RAID-10. It turned out the RAID-10 was so fast the RAID-0 was unnecessary, though, especially with the network as the bottleneck. Our operations were very large-read-heavy, and RAID-10 performs close to RAID-0 at that.
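For reference, the Linux md tools make that kind of split-partition layout fairly simple. A sketch with mdadm, assuming eight drives sda through sdh, each already partitioned into a small sdX1 (~15%) and a large sdX2 (~85%); the array names are arbitrary:

```shell
# Stripe the small first partitions into a RAID-0:
mdadm --create /dev/md0 --level=0 --raid-devices=8 /dev/sd[a-h]1

# Put the large second partitions into a RAID-10:
mdadm --create /dev/md1 --level=10 --raid-devices=8 /dev/sd[a-h]2

# Watch the initial sync / rebuild progress:
cat /proc/mdstat
```

Rebuilding onto a replacement disk is the same tool: partition the new drive to match, then `mdadm /dev/md1 --add /dev/sdX2` and md resyncs it automatically.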
I share your thoughts on Linux. Even if you have a Linux admin on hand, if you build essential storage on Linux it means you have to continue to have a Linux guy on hand in the future. I ran into one problem where the drives' write cache had to be disabled or the array would periodically seize up for 10 seconds and then resume. After getting that sorted, the machine sat in the corner with no attention from me, but who's to say that would continue for years after that. Also, I hear that with software RAID it's more difficult to rebuild the array onto a new disk if one drive fails.
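For what it's worth, toggling the drive write cache is a one-liner with hdparm on an ATA drive (the device name here is hypothetical):

```shell
# Show whether the drive's volatile write cache is currently enabled:
hdparm -W /dev/sda

# Disable it (safer for arrays that stall, at some cost in write throughput):
hdparm -W 0 /dev/sda
```

Like the TLER setting, this may not persist across power cycles on every drive, so it is usually applied from a boot script.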
The box I built only had to run for four months, then our project was done and I dismantled it and used the drives for something else. If the project had been longer I probably would have used a Windows server or an integrated solution.
After closer review of the DS1010+, I believe you have found a better NAS than any I have seen on the market yet.
In fact I think you've answered my search for a decent NAS!
The cost is the downside, though... I would pay $300 or $400 for a diskless system like that. Above that point it becomes more cost-effective for me to build a box and put Windows Server on it. I can see the attractiveness of integrated firmware in your situation, though, not wanting to pay for time devoted to keeping it updated and running properly.
Suggestion: unless you need huge amounts of storage, buy 500 GB drives to make backups easier. Backing up our 4 TB volume over the network would easily run all night, and sometimes it would still be going the next morning when people showed up and needed to use the server. (Of course I was doing it the slow way, just copying the whole volume to a new folder on a different RAID.)
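An alternative to the slow full copy: an incremental tool like rsync transfers only changed files after the first run, which turns an all-night job into a short nightly pass. A sketch (the mount points are hypothetical):

```shell
# Mirror the volume into the backup location; -a preserves permissions,
# timestamps and symlinks, and --delete removes files that no longer
# exist on the source so the mirror stays exact:
rsync -a --delete /mnt/raid10/ /mnt/backup/nightly/
```

The trailing slashes matter to rsync: `/mnt/raid10/` copies the directory's contents, while `/mnt/raid10` would create a `raid10` folder inside the destination.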
Just an update with the latest offerings for a NAS and HDD combination I have been feverishly researching for the past 2 months.
Again, Synology (the Synology 1511+). Solid hardware, upgradable RAM, disk expansion, and it even accepts 'add-on' expansion units allowing up to 15 disks in your array. SUPERB OS, functions, apps, usability, etc. An intelligent RAID solution maximising disk usage.
More to the point, the HDDs. The Synology unit can handle the new 3 TB drives (though I'd stay well away from those for the time being) and also supports the new 6 Gbps SATA III drives. Much of my research was on the NAS, and that part is done: there's really nothing else I could wish for that the Synology 1511+ does not supply (and the transfer rates are fantastic).
Enjoy - I hope this helps others out there who have been doing the same exhaustive searching I have. It's certainly not cheap, but it would be an excellent set-up for 5+ years. The only thing you may need for multiple computers is a semi-managed switch to manage your traffic, but that's another topic... once I have a decent answer to that I'll post back. GL everyone.
I can report good stability and performance after about 14 months of daily use. I populated a DS1010+ with Caviar Black 1 TB drives in Jan 2010 and they have been performing well ever since. Transfer rates are right up there with those advertised for the 1010+.