Home NAS Question

Hi all,

I currently have a Netgear NV+ NAS device with 4TB of physical storage (4x1TB drives) arranged in a RAID 10 mirrored-stripe configuration, leaving me with 2TB of effective space. If I want to increase storage, I could upgrade to 1.5TB drives, but I wouldn't be able to keep using the 1TB drives and would most likely have to sell them at a steep discount.

Thus, I'm looking for a more robust, future-proof solution. I've been looking at building a small rack that could house all my gear, my servers, and one of those 16-bay SATA units that can accommodate hard drives of multiple vendors and sizes in a RAID configuration. An example of one of these units:


I anticipate I'll have to spend around $1,000 for the barebones setup.

Any thoughts / better ideas to accomplish what I'm looking to do? Any cheaper solutions?

12 answers
  1. Well, I'm thinking the most straightforward and affordable solution would be to just buy a new NAS unit for your new drives, no?
  2. HDD prices per GB are constantly dropping, so selling your old drives second-hand at a very low price is unavoidable.
    RAID 10 is not economical for storage, especially in a low-powered NAS enclosure where speed is capped by GbE anyway. And RAID 5 is unsuitable because those enclosures don't have the processing power to do parity calculations at a decent speed.
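    To make the economics concrete, here's a quick sketch of usable space by RAID level in plain POSIX shell arithmetic (the 4x1TB figures match the original NV+ setup):

    ```shell
    #!/bin/sh
    # Usable capacity by RAID level for N identical drives of SIZE TB each.
    # 4 x 1TB, as in the original NV+ setup.
    N=4
    SIZE=1

    echo "RAID 10 usable: $(( N * SIZE / 2 )) TB"    # half the space goes to mirrors
    echo "RAID 5  usable: $(( (N - 1) * SIZE )) TB"  # one drive's worth of parity
    echo "RAID 6  usable: $(( (N - 2) * SIZE )) TB"  # two drives' worth of parity
    ```

    With the same four drives, RAID 5 yields 3TB usable against RAID 10's 2TB, which is the "not economical" point in a nutshell.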

    Basically, buyers of those easy-1-2-3 small NAS enclosures are victims of overpricing. Just because they're affordable doesn't mean they aren't overpriced.

    Seeing as you're prepared to go up to $1000: if you don't mind getting your hands dirty with hardware, it's possible to build a high-performance, high-efficiency, highly expandable (10-drive) NAS unit for less than $600, excluding HDDs. The only downside to that option is a bit of initial learning to get it up and running smoothly on the software side, plus taking the time to pick the right parts, which we'll help you with.
  3. Thanks for your input. I don't mind getting my hands dirty. Do you have any suggestions as to what that $600 solution might look like?
  4. the supermicro chassis you have linked is very nice, but there's no way it's going to come in under $2k (and that's not including any hard drives).

    raid 10 (1+0) is not very well regarded. you'd probably do better with raid 5 if you are trying to prevent data loss, or, if data loss would really ruin your day, go with raid 6.

    you are likely going to be forced to replace hard drives; have you considered how you are going to get your data off your old raid and into a new one? at best, you can hope to switch from raid 10 to raid 0, salvage two of the hard drives, transfer the data over, and then add the other two. i'm not sure if the netgear nas will allow you to do that -- or if *any* change to the raid will make it want to reformat everything. :(

    most of us that do raid would never, ever mix different hard drive models, much less different capacities. it's an invitation to headaches (or worse). normally you want them all to have the exact same seek time, spin speed, etc. think about a file being spread across a bunch of hard drives, some fast, some slow, and the havoc it'd play on the buffer and cpu of the raid card -- the card could think there is an issue with the slower drive(s) and keep attempting to rebuild them.

    i hate to tell you to suck it up and buy all the same model, but i'm in sort of the same place: i have six 1.5tb 7200rpm drives and want to go to 12 or 18, and i'd greatly prefer the newer 5900rpm 1.5tb drives. i can't mix and match, so i either buy more of the 7200rpm model or toss all my 7200rpm drives and buy all 5900rpm (ouch). you don't actually need to dispose of your old drives, though -- see further below for how to use them productively.

    my best advice on how to proceed: first, decide what raid level you want. i'll get flamed for saying it, but you want either raid 5 or raid 6... that is going to determine which raid card you are going to purchase.

    second, pick the max amount of storage you are going to use (by *max* i mean the most you'll ever put on the raid in the foreseeable future). use that number to calculate how many drives you are going to need. right now 1.5tb and 1tb drives give the best $/gb ratio. add one to the total drive count if you picked raid 5, add two if you picked raid 6.
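    That calculation, as a small sketch (the 9TB target and 1.5TB drives are assumed figures for illustration, not from the thread):

    ```shell
    #!/bin/sh
    # Estimate how many drives to buy: ceiling(max storage / drive size),
    # plus parity drives. All figures below are illustrative assumptions.
    MAX_GB=9000     # most storage you expect to ever need
    DRIVE_GB=1500   # using 1.5TB drives

    DATA_DRIVES=$(( (MAX_GB + DRIVE_GB - 1) / DRIVE_GB ))  # ceiling division
    echo "RAID 5: $(( DATA_DRIVES + 1 )) drives"   # one extra for parity
    echo "RAID 6: $(( DATA_DRIVES + 2 )) drives"   # two extra for parity
    ```

    So a 9TB target on 1.5TB drives means 6 data drives, i.e. 7 drives for RAID 5 or 8 for RAID 6.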

    you need a chassis (or case) to house the drives. give a slight preference to a chassis with hot swap bays, like the 836tq-r800b you picked out; if it does not have hot swap trays, make sure the case design allows for decent airflow. a redundant power supply is preferable.

    you need a server motherboard, one that supports ECC memory. it doesn't have to be new, or particularly powerful -- a used pentium 4 over 1ghz with 1gb of ecc ram is fine. extra bonus points if the motherboard has any kind of onboard graphics.

    if the server doesn't have gigabit LAN built onto it, you'll need a gb nic.

    you are going to plug this beast into a UPS of some kind. it doesn't need to be particularly beefy; it just needs to send a signal to your NAS that says 'holy crap, the power is out, shut yourself down.' about 5 minutes of ups runtime should be more than enough. i'm going to suggest you get a used APC (or similar name brand); make sure it has a serial or usb port on it so it can talk to the NAS. if the power just shuts off, the raid card is going to lose what it had in memory, and you are going to lose data.
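    One common way to wire up that shutdown signal is NUT (Network UPS Tools); a comment-only sketch follows -- paths, the `[apc]` section name, and driver choice are assumptions that vary by UPS and OS, and FreeNAS exposes the same thing through its own UPS service:

    ```shell
    # Sketch of a NUT setup for the "power's out, shut down" signal.
    # All names/paths here are assumptions -- adapt to your distro and UPS.
    #
    # /etc/nut/ups.conf -- declare the USB-attached UPS:
    #   [apc]
    #   driver = usbhid-ups
    #   port = auto
    #
    # /etc/nut/upsmon.conf -- command to run when the battery runs low:
    #   SHUTDOWNCMD "/sbin/shutdown -h +0"
    #
    # Query status from the shell once it's running:
    #   upsc apc@localhost ups.status
    # (OL = on line power, OB = on battery)
    ```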
    - - - - - - -
    a good deal on an 8-bay chassis similar to the one you have linked is US$400-600; more bays don't necessarily increase cost much, so 10 or 12 bays might be at the upper end of that range. while you could probably get away with picking up any 4u chassis and makeshifting something to get it to work, buying new is actually going to cost less than buying the pieces (chassis, hot swap bays, sata backplane, etc.) separately.

    a used server motherboard, CPU and ECC ram really shouldn't be more than US$100-150.

    software: FreeNAS. it's free, it's awesome, it's not overly difficult to learn or use, it'll do whatever you need it to do and much much more.

    raid card: you are going to pick the card based on a couple things,
    +first you need it to support the max number of drives you are likely to use.
    +second, since you will be adding drives to your raid as your storage needs increase, you want OCE (Online Capacity Expansion -- fairly easy, you need more space, you plug more drive(s), the raid takes a couple hours to shuffle stuff, and then poof! you have more space available).
    +third, you need it to support whatever raid (5, 6, whatever) you decided on
    +fourth, you need to decide if you want the raid to spin down idle drives. you save money on electricity if you spin drives down, but the tradeoff is a slight wait when you access the nas while it spins them back up. [adaptec and areca products can spin drives down; promise products don't have that feature.] new areca cards with 12 ports run around US$650, 8 ports around US$450 (1 port = 1 drive). you can find used raid cards at significant savings: LSI makes raid cards for dell, so you can look for "PERC 5/i RAID" cards for raid 5, and i think "PERC 6/i RAID" cards support raid 6. i'm fuzzy on prices for these; i'm going to take a wild stab and say the PERC 5 cards are $200-250 and the PERC 6 are ~$400. if you go the used perc route, please know that they make perc cards that are raid and ones that aren't -- "perc 6/i" in the title does not mean it's a raid card; look for "perc 6/i raid" or something like that...
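    On the spin-down point, a back-of-envelope sketch of what it actually saves (every input is an assumption: 8 drives at ~8W each while spinning idle, 20 idle hours a day, $0.12/kWh -- check your drives' datasheets for real numbers):

    ```shell
    #!/bin/sh
    # Rough yearly savings from spinning down idle drives.
    # All inputs are assumptions, not measurements.
    DRIVES=8
    IDLE_WATTS=8        # per drive while spinning but idle
    IDLE_HOURS=20       # hours per day the array sits unused
    CENTS_PER_KWH=12

    WH_PER_DAY=$(( DRIVES * IDLE_WATTS * IDLE_HOURS ))             # watt-hours saved daily
    CENTS_PER_YEAR=$(( WH_PER_DAY * 365 * CENTS_PER_KWH / 1000 ))  # cents per year
    echo "Roughly \$$(( CENTS_PER_YEAR / 100 )) per year"
    ```

    Under those assumptions it's on the order of $50-60 a year -- real but modest, which is why it comes down to whether the spin-up wait bothers you.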
    - - - - - - -
    forgot to add: SAS = SATA in the raid card world. SAS is backward compatible with all sata drives on the market now, so an 8-port sas raid card will work perfectly with 8 sata drives. might want to wikipedia it if you are unclear... best of luck to ya.
  5. ...alternatively, you could forgo purchasing a raid card altogether and go with software raid. it would be a little more CPU intensive, but if it's for home use and you aren't looking for something that can handle loads of simultaneous transactions (and you have a UPS attached), it'd probably suit your purposes. it'd bring the price down to your original 'under $1k' marker.
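    On Linux the usual software-raid tool is mdadm; a comment-only sketch (device names are assumptions, it needs root and empty disks, and FreeNAS, being FreeBSD-based, uses its own mechanisms instead):

    ```shell
    # Sketch: Linux software RAID 5 with mdadm. The /dev/sdX names are
    # assumptions -- substitute your own, and don't paste this blindly.
    #
    # Create a 4-disk RAID 5 array:
    #   mdadm --create /dev/md0 --level=5 --raid-devices=4 \
    #         /dev/sdb /dev/sdc /dev/sdd /dev/sde
    #
    # Later, grow it by one disk (the software analogue of OCE):
    #   mdadm --add /dev/md0 /dev/sdf
    #   mdadm --grow /dev/md0 --raid-devices=5
    #
    # Watch rebuild/reshape progress:
    #   cat /proc/mdstat
    ```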

    Habey makes some decent quality yet inexpensive rack-mount storage cases/chassis/servers. check them out at mwave.com (retailer), www.habeyusa.com (the manufacturer's website), or ebay.
  6. Speaking of Habey chassis, I was looking at some for my own builds last night on eBay. They run between $180 and $290, plus about $45 shipping.
    The 3U option has 16 hotswap bays, but I didn't like its limited 4-slot expansion, where the 3 remaining slots of an ATX motherboard are hidden under the PSU. In my case I wanted to pack in two controllers and fill the remaining slots with TV tuners.
    The 2U chassis, on the other hand, has only 8 hotswap bays, uses low-profile cards, and takes a 2U PSU.

    Personally I'd go for the 3U chassis and make sure to pick a motherboard that concentrates all its PCIe slots in the first four positions.

    http://www.habeyusa.com/IntustrialChassis/esc3161/3ubacksmall.jpg or http://www.habeyusa.com/IntustrialChassis/esc2081/2ubacksmall.jpg

    Reusing a P4 or A64 (S940/S939), or any platform that's more than two generations old, is something I'd never do, for long-term power consumption and flexibility reasons. 130/90nm processors and old motherboard VRMs are no good at idle compared to a modern platform, and idling is how most home fileservers spend their time.
  7. This is what I've drawn up for my build:
    -CM Centurion 590 $60 (NewEgg)
    -300-350W 80Plus PSU $40 max.
    -Three 5x3.5" hotswap "Enlight EN-8721" $60ea (eBay)
    -ECS BLACK SERIES A790GXM-A AM2+ $100 (NewEgg), dual GbE, two PCIe 16x
    -AMD Athlon II X2 240 $60 (NewEgg)
    -Kingston KVR800D2E6K2/4G, ECC unbuffered $63.50 (NewEgg)
    -2x Dell PERC 5/i (already got one) $130ea incl. BBU+cables (eBay)
    -Fill the rest of PCIe 1x and PCI slots with dual DVB-T and DVB-S2 cards

    Total: $763.50 + shipping (or -$130, as I only needed one PERC 5/i)

    That gives me a flexible option of running either VMWare ESX or Server 2008 R2 as the base layer then run whatever I want on top.
  8. Thanks again guys for the great info. Some (actually quite a few) follow up questions:

    1) I like the habey site. If you don't mind, take a look at the following cases:


    I can't really determine the fundamental differences between these cases. One thing I noted: the first one claims RAID6 capability, and the rest don't. Also, are these just storage enclosures with no MOBO or PROC? I guess the eSATA connectors would then connect to another box with the RAID controllers in them?

    2) You folks keep mentioning ECC memory vs. regular DDR memory, and getting a server MOBO. What is the difference between these boards and desktop boards, and why get this type of MOBO? Is it the only type that fits in some of these cases?

    3) I currently have a desktop multimedia "workhorse" that I'm thinking about upgrading at some point to get into next generation Intel tech (e.g. i7/X58). Should/could I repurpose some of the following into my 'super NAS' device?

    EVGA 680i MOBO with onboard 6 port RAID
    Intel Quad Core 2.5 GHZ PROC
    8 GB DDR2 RAM
    1000 W Powersupply
    Nice Graphics card

    Maybe I could build a box like Wuzy is mentioning to run VMs for different apps (e.g. freenas, windows storage server, web server, etc), and have this control the habey storage array?

    4) Ultimately my end goal is to push as much of my heavy equipment into a rack in some closet or room in the basement, and only go down there when necessary. Ideally, I'd just have a monitor, keyboard, and mouse sitting on my desktop. I would imagine I'd need to consider KVM over IP. Any thoughts here would be appreciated.

    Thanks again!
  9. Those 4 enclosures you linked to are external DAS enclosures that use SATA expander backplanes (hence 4x eSATA fanning out to 12x SATA bays); they're just physical connections, nothing more. To get RAID6 on any of them you'll still need to buy a suitable controller that supports it and put it into the host machine. Space- and energy-wise they're not very efficient, as you still need another rack unit to house the host.
    The usage of external DAS is diminishing these days: it's being replaced by SANs for high-bandwidth corporate usage shared by multiple servers, while smaller businesses use solutions like what we're building here, a high-end NAS.

    What ECC does is correct a single flipped bit caused by something like an ambient cosmic ray, which can otherwise produce unnoticed data corruption. In short, it's like buying insurance: you hope you won't ever need it, but it's there. Statistically speaking, with today's memory module density you only really need ECC RAM once you start using 4 modules or more. But since the price difference between DDR2 ECC and non-ECC is so small today compared to a few years ago, you might as well take advantage of it. It's one of the many reasons why my proposed build went the AMD route.

    You have to ask yourself this first: what sort of "home NAS" are we trying to build here?
    The answer is a NAS that has the power characteristics of an off-the-shelf NAS (~50W idle with all drives spun down, up to 250W at full active load), has SMB-class speed and reliability, runs 24/7, and costs a fraction of what the commercial ones do.

    The clients, for me, are two thin (almost invisible) HTPCs, one gaming/workstation machine, two laptops and my PDA-phone. Each device has its specialised usage and fits into the power envelope it belongs to.
  10. This is a relevant thread from another forum. I think I may go in this direction:

  11. Very nice find on the 4U rack!

    But the internals are where it counts most, and the poster in that thread clearly falls into the inefficient-NAS category, with overpowered hardware and bad idle characteristics.