
Suggestions: Big RAID6 NAS for TV Station

February 3, 2010 3:01:15 PM

Hi folks,
I'm doing some research on a storage solution for work. As the story often goes we have big storage needs and a small budget.

We're a medium-sized television station, and need a place to store large video files until we need to move them to our on-air server for playout, which has a very fast (and expensive) 4TB RAID box. Our automation system will move the files around automatically via FTP as needed, so we don't need anything screaming fast.

Our thought is to go with two large, mirrored RAID 6 NAS boxes, somewhere upwards of 40TB usable each, for maximum redundancy. I'm seeing a lot of vendors with these at very reasonable prices, but not necessarily any that we're confident could provide a decent level of support. I was hoping someone may have had experience with any of the vendors noted below, or had suggestions for other vendors/options to look into given our needs and budget.


Budget: ~$50k
Needs: Off-the-shelf SATA drives, RAID 6, redundant power supplies, GigE, FTP and CIFS access, mechanism to mirror the two systems (could just be a once-a-day rsync)

Digilant R90148AD-NW -

Celeros EzSANFiler XD Series -

Aberdeen Inc., AberNAS 895 -

Coraid - NAS Gateway + x EtherDrive SR Series


February 4, 2010 1:44:11 AM

I responded to someone else's similar post with this:

When you start getting to those sizes, the odds of corruption and drive failure go up, so you need to dedicate more drive space to redundancy.

I would personally get some simple low-end server boards, drop 4GB of ECC memory in them, install Linux, use ZFS with RAID-Z, put Gluster on top, and share it out as a network drive.

It would be very resilient to failures and data corruption.
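For a concrete idea of the Gluster piece, something like the following could work (the hostnames, volume name, and brick paths are all made up, and the exact CLI has changed between Gluster releases, so treat this as a sketch, not a recipe):

```
# on nas1: join the second server, create a 2-way replicated volume
# over the ZFS mount points, and start it
gluster peer probe nas2
gluster volume create tvstore replica 2 nas1:/tank/brick nas2:/tank/brick
gluster volume start tvstore

# on any client: mount the volume as a network drive
mount -t glusterfs nas1:/tvstore /mnt/tvstore
```

With replica 2, every file lives on both servers, so an entire box can die without losing data.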

edit: Samsung has some new 5400rpm $183 2TB drives that are meant to be very low power.
February 4, 2010 2:12:17 AM

If you do go ZFS, you would probably be better off with either OpenSolaris or FreeBSD. I use ZFS on FreeBSD 8 in a RAID-Z config, and the flexibility is very nice.

For additional data security, you can set copies=3 on a dataset so it stores three copies of each file, spread across different physical hard drives where possible. If you want RAID-6, you can opt for RAID-Z2, which is advisable if you go beyond 8 drives.
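A quick sketch of the copies setting (the dataset name here is made up):

```
# create a dataset for critical files and keep three copies of every block
zfs create tank/critical
zfs set copies=3 tank/critical
zfs get copies tank/critical    # verify the setting
```

Note that copies mainly protects against localized corruption like bad sectors; surviving the loss of whole disks is still RAID-Z2's job.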

Assuming you need 40TB usable space, that would mean something like:
24 * 2TB = 48TB raw capacity - 4TB RAID-Z2 parity overhead = 44TB, which is about 40.02TiB usable space.
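The arithmetic can be checked quickly in a shell; the decimal-TB vs. binary-TiB distinction is what eats the apparently "missing" space:

```shell
# 24 x 2TB drives in one RAID-Z2 vdev: two drives' worth goes to parity
drives=24; size_tb=2; parity=2
raw_tb=$((drives * size_tb))                   # 48 TB raw
usable_tb=$(( (drives - parity) * size_tb ))   # 44 TB after parity
# drive makers use decimal TB (10^12 bytes); filesystems report binary TiB (2^40 bytes)
usable_tib=$(( usable_tb * 1000000000000 / 1099511627776 ))
echo "${raw_tb} TB raw, ${usable_tb} TB usable, ~${usable_tib} TiB reported"
```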

So 24 drives, which means three 8-port non-RAID controllers. For example, these may do:

You can connect eight disks to each such controller, and they should be under $200 apiece.

So the price may be something like:

Computer system = $1000
24 Samsung 5400rpm 2TB = $4392
3 controllers ($150) = $450
Total hardware price = ~$6000

You need two of them, so at least $12k. Additional costs may include accessories and man-hours.

The software part would include installing the operating system of choice on a separate system drive - a small SSD would be great for reliability/uptime reasons. Then you would use ZFS to create a new RAID-Z2 array, like this:

zpool create tank raidz2 { all your data drives here }

If you choose FreeBSD, you should label the disks with glabel first, so the pool survives device renumbering:
glabel label disk1 /dev/ad4
glabel label disk2 /dev/ad6

then create the pool using the labels:

zpool create tank raidz2 /dev/label/disk1 /dev/label/disk2 ...

I'm assuming this is a DIY situation - or are you going for an all-round solution with service packages etc.? In that case, you may not have the flexibility to choose ZFS in an optimal configuration.

The fact that you opt for two mirrored machines is very good, and ZFS makes incremental backups really easy using snapshots. A small cron job (script) that runs every night can take a new snapshot and then synchronise from the master server using 'rsync'.
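As a sketch, that nightly job could be a single crontab entry on the backup machine (the hostname, pool name, and schedule are made up; note that % must be escaped in a crontab):

```
# 3 AM: snapshot the backup pool, then pull changes from the master
0 3 * * * zfs snapshot tank@nightly-$(date +\%Y\%m\%d) && rsync -a --delete master:/tank/ /tank/
```

Taking the snapshot before syncing means yesterday's state stays recoverable even if something was accidentally deleted on the master.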

The CIFS access is easy using Samba, and the FTP access can be done with either ProFTPd or PureFTPd. Please consider NFS access too. The server systems should have at least 2-4 cores and need to be 64-bit, since that's more or less required for a serious ZFS setup. Allow plenty of memory - it speeds up I/O through caching and lets ZFS run in full glory; ZFS can use multiple gigabytes during heavy I/O.
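For the CIFS side, the share itself is only a few lines of smb.conf (the share name, path, and group here are illustrative):

```
[video]
   path = /tank/video
   read only = no
   valid users = @staff
```

The FTP daemons are similarly light to configure; the heavier decisions are all on the ZFS side.
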
February 4, 2010 2:03:40 PM

These are really helpful suggestions, thanks for taking the time to respond! I wasn't really aware of either ZFS/RAID-Z2 or Gluster, which both look like a good fit for what we're hoping to accomplish. This is a system that we'll likely need to continually expand, and it looks like Gluster makes that really easy.

One of the questions we're struggling with is whether to go with a fully vendor-supported solution (like Dell, who wants upwards of $50k for a single 40TB solution but can have someone on site in 4 hours, 24/7, to fix it if we have problems), or a hybrid solution where we get the hardware from a vendor like the ones in my original post but use something on top of Linux like Gluster. Since this is a partially grant-funded project, I don't think building our own boxes from scratch is an option.

It looks like we could take two of those Digiliant or Polywell boxes and install Linux/Gluster and have some pretty robust storage with a lot of cheap options to expand the cluster in the future without worrying about ridiculous licensing or being locked into one vendor. I guess my main concern is the quality of the pre-built boxes available and how easy it will be to get parts for them in 3 years. That's why I was hoping to find someone who'd used one of them before. Would still be curious to hear from anyone who had.

I'm going to check out Gluster on a VM right now...
February 5, 2010 1:54:05 AM

I'm more of a knowledgeable hobbyist with a decent amount of IT/database/programming/networking experience. My advice was based on what I've read, not hands-on experience. Not saying my way won't work, but I'd look around a bit more than just my random idea. :p 

Let me know how RAID-Z+GLUSTER goes, because I was thinking of setting up something like this at home some day in the future once I pay off my school debt.

edit: ZFS is very powerful and has some VERY nice features. Whatever you do, I'd try your best to have ZFS on your storage end of things.

edit2: Mesa sounds like he actually has experience in this area, he might have some more good input beyond what he's already stated.
February 5, 2010 2:10:20 PM

Thanks for the caveat emptor - we're doing as much research as possible before finalizing our plans. You guys pointed me in a great direction, but don't worry, we'll do our due diligence. I'm playing with Gluster on a VM now, and it's definitely something we'll consider.

I'm going to give these folks a call next week:

It looks like they have some boxes that can be pre-configured with OpenSolaris/ZFS/RAIDZ-2, which would give us all those advantages plus some vendor support. Hopefully it will fit in our budget.
February 6, 2010 10:45:20 PM

If you're a TV station and that data has to be available or people start getting fired, the benefits of immediate-response onsite vendor support are not to be ignored.

I'm all for building my own stuff, but if it's an important system that supports business operations and I can't guarantee my continued presence to maintain what I built, I always recommend something that comes with its own support.

Not that I don't think Dell's solution is way overpriced...

Here's the other thread where someone was looking for a reasonably priced 96TB solution:

There's a company, or maybe multiple companies, that build raid racks with the drives mounted vertically inside a 5U or 6U drawer. Very space efficient and gives a large capacity without having to build out onto multiple full racks. Check the Polywell link from jtric.
February 27, 2010 3:10:18 PM

For that type of budget you can start getting into a lower-end enterprise NAS.

At least, that's what I would do.

Would you need the full storage amount on day one? If not, I'd look at a NetApp 2040a, with another to sync to.

Then just add more shelves as needed.

You get fast speeds but, more importantly, RAID 6 (RAID-DP) and great support.