NAS Build Questions & Critique

May 14, 2011 9:18:56 PM

Overview of my Home NAS 2011 Project
I currently have no decent storage at home, and I'm in need of a professional, mature storage solution. I've got a ton of data I need to store and keep safe, and I want to build a solution that will last for years. I'm a network engineer and have a rack in my basement that this solution will go in. So, on to the project! I've got an older AMD board with a 4600+ CPU and 8GB of DDR2 installed that I'm trying to recycle into a NAS build. The hardware for the build is listed below.

EVGA 122-M2-NF59-TR AM2 NVIDIA nForce 590 SLI MCP ATX AMD Motherboard
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
I've already got this board; it's in a system that isn't being used. The CPU is a 4600+, with 8GB of G.Skill DDR2 and a 750-watt PSU.

NORCO RPC-3216 3U Rackmount Server Case - OEM
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
Planning on this NAS server case. It's got 16 hot-swappable SATA I/II drive bays and four internal SFF-8087 mini-SAS connectors on the backplane.

LSI MegaRAID Internal Low-Power SATA/SAS 9240-8i 6Gb/s PCI-Express 2.0 RAID Controller
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
Most importantly, this is a PCIe 2.0 card. Note that my board does not have any PCIe 2.0 slots, but it does have two PCIe 1.0 x16 slots. The card has two SFF-8087 connectors, which provide connectivity for eight SATA II drives. I'm planning to buy one of these cards to start, and a second sometime in the future if I ever need more than 13TB of storage.

SAMSUNG Spinpoint F4 HD204UI 2TB 5400 RPM 32MB Cache SATA 3.0Gb/s 3.5" Internal Hard Drive -Bare Drive
http://www.newegg.com/Product/Product.aspx?Item=N82E168...
These seem to be a good buy.


Here are my questions & concerns:

1. PCIe 2.0 card in a 1.0 slot
The biggest question I have is: can anyone confirm that the PCIe 2.0 RAID controller will work in my motherboard's PCIe 1.0 x16 slots? From what I've read, it will work, but the link will run at PCIe 1.0 rates, so the card's x8 connection tops out around 2GB/s instead of 4GB/s. Since this setup will never come close to pushing those speeds, this should work without any problems. Right?
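Back-of-the-envelope numbers, assuming the card negotiates its full x8 link at PCIe 1.0 signaling rates:

    PCIe 1.0: ~250 MB/s usable per lane, per direction
    x8 card in the x16 slot:  8 x 250 MB/s = ~2,000 MB/s
    8 drives x ~120 MB/s sequential each   = ~960 MB/s peak
    Gigabit Ethernet ceiling               = ~125 MB/s

So even flat-out sequential reads from all eight drives would use about half the link, and the network is the real bottleneck anyway.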

2. 8087 Mini-SAS & SATA II
It seems all the new RAID controllers are using these connectors. I think the case, controller, and drives I've selected should support this architecture. Are there any considerations I'm missing here? Any other parts of the design I should reconsider?

3. OS/FS/NFS
I'm planning on using FreeNAS 8.x as the OS, with ZFS as the FS and NFS as the NFS :) . I've got Linux, OS X, and Windows hosts on the network, and I'm planning a future VMware ESX implementation. I've looked at some comparisons between NFS/SMB/AFP/iSCSI, and it looks like NFS will work best in general over layer 2. Is there a better solution out there?
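For what it's worth, here's roughly what sharing a ZFS dataset over NFS looks like from a FreeBSD shell; FreeNAS 8 wraps all of this in its web GUI, and the pool/dataset names, hostnames, and network below are just placeholders:

    # create a dataset and export it over NFS (FreeBSD exports(5) option syntax)
    zfs create tank/media
    zfs set sharenfs="-network 192.168.1.0 -mask 255.255.255.0" tank/media

    # a Linux or OS X client would then mount it with something like:
    mount -t nfs nas:/tank/media /mnt/media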

4. Raid Design
I currently don't have a NAS or backup solution at all. I've got approximately 5TB of data at home in miscellaneous locations: no centralized storage or backup, which is not a good situation to be in. I know that a single host with a RAID array is not a backup solution, but it's all I'm going to have for a bit. With this in mind, would configuring the disks as a RAID 5 array be the best option?
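If I were to let ZFS handle parity instead of the MegaRAID card, my understanding is the rough software equivalent of RAID 5 is a single-parity raidz vdev, something like this (device names are examples):

    # 8-disk single-parity raidz: one drive's worth of capacity goes to parity
    zpool create tank raidz1 da0 da1 da2 da3 da4 da5 da6 da7
    zpool status tank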

5. Backup Solution
My goal is to get this up and running so that I have some redundancy in my storage (yes, this is only a first step), not a viable long-term backup solution. Once the first NAS is up and running, I plan to build a second NAS using the same case and RAID controllers, but with a better motherboard (specifically one with PCIe 2.0 slots), processor, and RAM. That NAS would become the primary, and the secondary (the one I'm currently building) would simply be configured to mirror the primary. Any problems with this idea? Any suggestions?
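If I end up on ZFS, I'm guessing the mirroring could be done with snapshots and zfs send/receive over SSH, roughly like this (hostnames and dataset names are placeholders):

    # on the primary: snapshot the dataset and ship it to the secondary
    zfs snapshot tank/data@2011-05-14
    zfs send tank/data@2011-05-14 | ssh backup-nas zfs receive backup/data

    # subsequent runs send only the changes since the last snapshot
    zfs snapshot tank/data@2011-05-15
    zfs send -i tank/data@2011-05-14 tank/data@2011-05-15 | ssh backup-nas zfs receive backup/data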

6. Future Expansion
I'm starting off with a single MegaRAID card and eight 2TB drives. This should give me approximately 13TB of usable space. Based on the build, I should be able to simply buy a second card and eight more disks in the future, creating a second 13TB RAID volume, then adjust the ZFS pool to include that array, increasing the total size to 26TB. No problems there, right?
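If ZFS ends up owning the disks, my understanding is that expansion is just adding a second vdev to the existing pool (device names are examples again):

    # add a second 8-disk raidz vdev; the pool grows immediately, no rebuild
    # (8 x 2TB raidz = ~14TB raw minus parity, roughly 13TB usable)
    zpool add tank raidz1 da8 da9 da10 da11 da12 da13 da14 da15
    zpool list tank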

7. What have I not considered? What problems have I missed?
I'm a network engineer, not a storage guy, and this is my first NAS build, so I know there are probably a lot of things I'm not considering or could do much better. Suggestions are very much appreciated.
May 15, 2011 12:37:00 AM

1) Yeah, you'd be doing well to saturate that bus, so presuming the card functions correctly under PCIe 1.0 it should be fine.

2) I think this should be fine.

3) It's my understanding that FreeNAS is very much dedicated to running only a NAS. In the future you may want this machine to do more than FreeNAS can provide? Sure, if you pick up a full Linux distro you'll have to configure some things yourself, but you'll have some more flexibility?

Be sure to check that your RAID card is supported by whatever version of FreeNAS you're planning on running.

Shame FreeNAS doesn't do ext4. Curiously, the ZFS Wikipedia page seems to suggest that the added data integrity ZFS provides would be useless with a hardware RAID card:

"ZFS can not fully protect the user's data when using a hardware RAID controller, as it is not able to perform the automatic self-healing unless it controls the redundancy of the disks and data. ZFS prefers direct, exclusive access to the disks, with nothing in between that interferes."

4) The thought of an 8-drive RAID 5 array scares the bejesus out of me. Two drive failures and the array is toast; even adding a hot spare doesn't help me sleep easy!

Consider starting with a smaller set of drives rather than going for the full haul. You said you have 5TB of data right now, so why not start with 4 or 5 x 2TB to give you 6-8TB? RAID arrays are limited by the capacity of the smallest drive, so you should aim to minimize the number of drives you'd have to change to see a decent capacity increase. Additionally, postponing the purchase of capacity means the drives will be cheaper per GB.

5) Doesn't seem unreasonable, but I have no idea of the value of the data you're trying to protect. It might be overkill, or possibly not enough! Had you considered having this secondary NAS in another location to reduce the impact of theft, fire, or flood damage? It's always good to ask yourself "is this data worth the effort?" when considering data protection, to prevent choosing the wrong solution.

6) I'd just pick up another RAID card when the need arises. I'm unsure of the performance implications of a filesystem spanning two separate raid cards.

7) You might want to consider picking up green drives. I'd always recommend that you buy drives from different manufacturers to reduce the chance of drives failing at the same time.

I'd recommend you don't install the OS on the storage array you are creating. Have another disk with the OS on it. It simplifies things somewhat.

Whilst it might seem like a good idea to reuse older components, you should consider the power draw, heat, and life expectancy. The PSU is far too overpowered for this system and will not operate efficiently at low loads. I'd consider using only 4GB of RAM for power-consumption reasons. I'm assuming you have a GPU in this system but will remove it once you've got the system set up?
May 15, 2011 5:24:07 PM

Thanks so much for the thought out responses! You gave me a ton to think about. Gonna go do some more research now. :) 

Quote:
1) Yeah, you'd be doing well to saturate that bus, so presuming the card functions correctly under PCIe 1.0 it should be fine.
I'm still a little squirrelly on this; I think I may contact LSI and ask them the question directly.

Quote:
3) It's my understanding that FreeNAS is very much dedicated to running only a NAS. In the future you may want this machine to do more than FreeNAS can provide? Sure, if you pick up a full Linux distro you'll have to configure some things yourself, but you'll have some more flexibility?
Part of the goal of this project is to separate my storage from my services. I've currently got a Windows 7 box with an Intel Core 2, 8GB of RAM, and 2TB of local storage that is running VMware Workstation 7.1 and hosting several Linux servers for me. The future plan is to build an actual ESXi box for hosting these servers, but I'm not there yet. In any case, this box is going to be dedicated to just storage :)

Quote:
Be sure to check that your RAID card is supported by whatever version of FreeNAS you're planning on running.

Shame FreeNAS doesn't do ext4. Curiously, the ZFS Wikipedia page seems to suggest that the added data integrity ZFS provides would be useless with a hardware RAID card:

"ZFS can not fully protect the user's data when using a hardware RAID controller, as it is not able to perform the automatic self-healing unless it controls the redundancy of the disks and data. ZFS prefers direct, exclusive access to the disks, with nothing in between that interferes."
Hmm, interesting. I didn't realize this. Maybe I need to rethink my design if I'm going to be using FreeNAS/ZFS. It certainly would be less expensive to eliminate the dedicated RAID controllers, but then I'm concerned about drive failures and hot-swapping the drives ... hmm.

Quote:

4) The thought of an 8-drive RAID 5 array scares the bejesus out of me. Two drive failures and the array is toast; even adding a hot spare doesn't help me sleep easy!

Consider starting with a smaller set of drives rather than going for the full haul. You said you have 5TB of data right now, so why not start with 4 or 5 x 2TB to give you 6-8TB? RAID arrays are limited by the capacity of the smallest drive, so you should aim to minimize the number of drives you'd have to change to see a decent capacity increase. Additionally, postponing the purchase of capacity means the drives will be cheaper per GB.
I really don't need this thing to be high-performance, so I wonder if I should look into letting ZFS handle all the disks directly.
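From what I've read so far, raidz2 would also address the two-drive-failure worry without any hardware controller, if I understand it right (device names are examples):

    # double-parity raidz2: survives any two simultaneous drive failures
    zpool create tank raidz2 da0 da1 da2 da3 da4 da5 da6 da7

    # optional hot spare that ZFS pulls in automatically when a drive dies
    zpool add tank spare da8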

Quote:
7) You might want to consider picking up green drives. I'd always recommend that you buy drives from different manufacturers to reduce the chance of drives failing at the same time.
Interesting.

Quote:
I'd recommend you don't install the OS on the storage array you are creating. Have another disk with the OS on it. It simplifies things somewhat.
Definitely agree.

Quote:
Whilst it might seem like a good idea to reuse older components, you should consider the power draw, heat, and life expectancy. The PSU is far too overpowered for this system and will not operate efficiently at low loads. I'd consider using only 4GB of RAM for power-consumption reasons. I'm assuming you have a GPU in this system but will remove it once you've got the system set up?
Very good points. Depending on how much power it's using, I'll consider putting in a lower-power PSU, taking out some of the RAM, and possibly replacing some of the fans.

... I think I need to read up more on ZFS. Since this is a dedicated storage device that won't be serving more than 1-2 users at a time, I don't think I need a high-performance RAID, so the dedicated controller is probably overkill. I was mainly trying to ensure that I have hot-swap capability in case of drive failures (there's probably a less expensive, better way to do that for this setup) and enough SATA II connections to handle up to 16 drives. I've got some research to do, bbl. :)
June 1, 2011 9:26:01 PM

ZFS needs the drives presented as JBOD. If you want to make ZFS faster, add more RAM and/or an SSD to the mix.

ZFS can use the SSD as a high-speed cache. I know this from the high-end Solaris configurations I've seen; I'm just not sure how FreeNAS is set up to implement it yet.
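On the Solaris boxes the commands look roughly like this; device names are examples, and I can't vouch for how FreeNAS 8 exposes it:

    # add an SSD as an L2ARC read cache for the pool
    zpool add tank cache ada0

    # a second SSD can serve as a dedicated ZIL/log device for sync writes
    zpool add tank log ada1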

Also, I would never use WD Green drives in any RAID of any form. The performance is terrible, and they are known to self-destruct under RAID 5 or RAID 6.


FreeNAS suggests a minimum of 6GB of RAM for ZFS storage; 12-16GB of RAM would be better. It doesn't have to be low-latency, just make sure it passes Memtest86+ tests 3, 5, and 7 before using it.

Anyhow, ZFS also supports deduplication, compression, and a pile of other valuable storage-related features.