FreeNAS System That Will Max Out a Gigabit LAN

Help me build a FreeNAS box, please... for less than $1000.

I would like it to saturate, or nearly saturate, a gigabit LAN. As I slowly upgrade my home network to gigabit NICs and SSDs, I would like the file server to be able to keep up.
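For reference, here is the arithmetic behind "saturating" a gigabit link; the protocol-overhead figure below is a rough assumption, not a measurement:

```python
# Theoretical vs. practical ceiling of gigabit Ethernet for file copies.
# The 6% overhead for TCP/IP + SMB framing is an assumed ballpark figure.
link_bits_per_sec = 1_000_000_000                    # 1 Gb/s
raw_mb_per_sec = link_bits_per_sec / 8 / 1_000_000   # 125.0 MB/s raw
overhead = 0.06                                      # assumed protocol overhead
usable_mb_per_sec = raw_mb_per_sec * (1 - overhead)

print(f"raw: {raw_mb_per_sec:.0f} MB/s, usable: ~{usable_mb_per_sec:.0f} MB/s")
```

So a NAS that sustains somewhere north of 100 MB/s is, for practical purposes, saturating the link.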

It should be able to handle the ZFS file system (with the new FreeNAS 0.8.x coming).

Will one of these new Atom boards have the power to calculate parity and saturate my LAN?

Actually, I am leaning more toward an i3 530 if I can find an ECC-friendly board with 6 or more SATA ports. Any suggestions?

Thanks in Advance
  1. You should use the onboard NICs on modern systems or PCI Express-based NICs, not anything PCI-based.

    Also, you should look at a protocol other than SMB/CIFS; NFS is reported to work much better and to come much closer to saturating the available bandwidth.

    To use ZFS, a 64-bit CPU is strongly encouraged. I believe some Atoms support this, but certainly not the normal/original ones. You may also not strictly need ECC memory, as ZFS checksums already provide reasonable protection against corruption on the way to disk. For home use, I would go without ECC. For work/business, always choose ECC.

    ZFS is mainly memory-hungry; it is not that CPU-intensive unless you make heavy use of encryption or compression.

  2. By the way, interesting board! It also looks like the Atom D510 supports 64-bit.
  3. Thanks Sub Mesa

    I've read a number of your posts and certainly respect your opinion.

    As to the ECC memory thing, I have struggled with this for some time. I'm an electrician, not a computer/database professional, so I only know what others tell me. I have never had an error that I attributed to the kind of memory fault ECC is designed to catch. Wikipedia has a good article with an off-the-wall formula which states that each GB of memory could have a flipped bit somewhere between once every hour and once every century. If this is true for memory that is constantly in use, I'm willing to take my chances, especially if, as you say, there is other protection in place.

    The convenient part is that I have not found an Atom-based board that supports ECC anyway.

    I'll look into purchasing this board along with at least 2 GB of RAM (for ZFS). When I put together an invoice I will post the results of my choices here. Wish me luck finding these things in Canada.

    Any other comments are greatly appreciated.

    I came across this board which is socket 1156, easy on power, and has 6 SATA ports. Interestingly it uses an Intel 82578 gigabit LAN controller.

    ECS H55H CM

    But, if I can get away with a D510 at less money and less power use... I'm in.
  4. So here's the rub.

    An i3 530 system with a Gigabyte H55M-S2H board and 4GB of DDR3-1333 memory will cost me $340.

    A Supermicro X7SPA-H-O board with the D510 Atom and 4GB of DDR2-667 SODIMM is $300.

    The main difference (besides the obvious i3 vs. Atom) is that the Supermicro has dual Intel NICs while the Gigabyte sports a single Realtek NIC.

    I guess the question I struggle with is whether the Atom will provide similar performance for my application. If the Atom can fulfill my dream of providing a FreeNAS box for my family that can push 100 MB/s... I'm done.

    Low power is not the be-all and end-all for me. I pay about $0.07/kWh, so it is hard to build a NAS that will cost more than a dollar or two per month to run. Of course the Earth pays too, but 5 watts more or less does not break the build.
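A quick sketch of that electricity math (the $0.07/kWh rate is from above; the two wattages are just illustrative sample points):

```python
# Monthly cost of an always-on box at $0.07/kWh for a couple of
# illustrative power draws (30-day month assumed).
rate_per_kwh = 0.07
hours_per_month = 24 * 30

costs = {}
for watts in (30, 70):
    kwh = watts * hours_per_month / 1000
    costs[watts] = kwh * rate_per_kwh
    print(f"{watts} W around the clock: {kwh:.1f} kWh -> ${costs[watts]:.2f}/month")
```

Even running 24/7 at desktop-like idle draw, the bill stays in the low single digits per month, and a box that is only powered on for backups costs far less.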

  5. With the minor price difference, if power isn't paramount in your case, I would go for the i3 solution instead. It would give you extra muscle while still being decently power-efficient. You may have to buy an Intel NIC if the Realtek one doesn't work well.

    If you want to try the Atom board, it is certainly a nice one. However, I'm somewhat concerned that you may get lower performance because of it. Perhaps you should Google a bit for Atom ZFS servers and how they perform. If that is enough, it sure makes an efficient server. :)

    And the dual Intel NICs are awesome, of course.
  6. Yesterday I bought some components and here's what it looks like:

    Gigabyte H55M-USB3 USB3.0 Motherboard $111.99
    Intel Core i3 530 Retail Box $124.01
    ASUS DRW-24B1ST 24X SATA DVD $19.99
    OCZ OCZ3G1333LV4GK 4GB DDR3 PC3-10666 $120.10
    nGear ALL-IN-1 Flash Card Reader USB2.0 $5.99
    A-DATA 4GB SDHC Class 6 Flash Card $11.99
    Thermaltake TR2 430W Power Supply $29.99

    SUBTOTAL: $424.06

    This motherboard was on sale and sports a total of 7 SATA ports, 1 eSATA port, and USB 3.0 as a bonus. I should be able to undervolt a little, but we'll check the power draw at stock settings first and see what we can do, thanks to the Blue Planet (Kill A Watt-style) meter. Unfortunately Gigabyte uses a Realtek NIC; I'll try it and see if upgrading is necessary. At the very least it can wait.

    Went with the cheap flash-card-reader-as-a-hard-drive idea since I don't have an all-in-one card reader on ANY of my machines and sometimes it would come in handy.

    I know... a TR2 430W, what was I thinking? I was thinking a $10 mail-in rebate makes this a $20 part. If, or when, my preferred local suppliers come up with a Fortron 220 or something similar, I will buy 3 of them for my HTPC, router, and this server build. Maybe even a picoPSU for the router. I think the lack of low-power, efficient PSUs is related to the fact that people on the west coast of Canada pay so little for electricity that it just doesn't matter that much. Hopefully this will change, as the government is planning to build another hydroelectric dam and people are discussing the 'greenness' of it. Maybe we will lean toward conservation.

    I have a couple of 1TB HDDs in RAID 0 which have been replaced by an SSD and one Caviar Black. They are standard 7200 RPM drives (Seagate .12s, I think), plus a 1TB Caviar Green EADS. I'll try these three in a geom RAID 5 or, with luck, a ZFS zraid1. Thanks to Sub Mesa for the handy-dandy link to his personal archives via adam-the-kiwi's ZFS thread.

    Once this thing is in service I will grab four 2TB drives and make a proper server. The various 1TB drives will probably retire... for now. Also, I am using an old HP case that a friend gave me, but I may break down and buy an Antec 300 on sale or something similar.

    I hope to post power consumption info as well as benchmarking but... don't hold your breath. We are hoping to build the house for which this server is intended by Christmas but permits etc. are painfully slow. In the mean time this will not have a permanent home and may even spend time in storage.
  7. Alright, well, if you need help with anything else, or when you actually implement your solution, be sure to ask if you think you can use it.
  8. Update on Power Consumption

    Put everything together to ensure it works and loaded FreeNAS 0.7.1 onto a USB stick.

    With the power off (although the BIOS had the WOL (Wake-on-LAN) option enabled) the thing was pulling 9 watts. I noticed the USB ports were still getting power too.

    BTW, I used something from Canadian Tire called a Blue Planet Energy Meter. It's like a Kill A Watt but was easier for me to obtain (and only $25). Not sure of the accuracy.

    So, I unplugged the CD-ROM drive and booted up with no spinning disks: only the TR2 430W PSU, the motherboard with stock Intel cooler and 4GB of RAM at stock settings, a cheapo Kingston 4GB USB stick, and the 120mm case fan. Power use at idle was 65 watts. Not the ultra-low-power system Tom would build at less than 25W, but hey, I have a $20 PSU and some reasonably high-powered equipment running. Maybe later I can try undervolting.

    Then I added the two drives pulled from my 'gamer/everyday' system. They turned out to be Seagate LP 1TB drives. Just tossed them in the case and plugged 'em in. Idle wattage was 72W. Not bad. The peak during start-up was only 92W, so I can easily live with a 220W 80 Plus PSU even after adding some proper storage and allowing for capacitor degradation over time.

    I'm really interested in this bit I stole from a Sub Mesa post in another thread:

    In a RAID-Z array, all disks should be the same size. However, you can create multiple arrays and combine them all in one pool, so you get:
    raidz (disk1 disk2 disk3 disk4)
    raidz (disk5 disk6 disk7 disk8)

    Now disks 1-4 could be 1TB disks and 5-8 could be 2TB disks. These two arrays are RAID0'ed for increased performance, and you can use the capacity of array1 + array2. It is like a RAID50. So this allows you to mix different sizes of drives, but you have them in batches and each batch has to be a redundant array. The batches can vary in size, that is no problem. :)

    Since my aforementioned drives turned out to be LP-model Seagates, I would like to buy a couple more of the 1TB versions to make a proper zraid, but when expansion time comes I would like to add some 2TB (or larger, by then) drives to the array. Originally I had planned to dump the original array and go with the larger one but, with cool, quiet LP drives, I would rather just keep them.

    So, I read the above quote to mean that I can have a zraid array with 4 x 1TB drives, then later add a zraid array with 4 x 2TB drives, and these arrays can be striped like a RAID 50? I don't fully get it, but I love the idea: grab the striping benefits of speed whilst retaining the redundancy of zraid AND mixing up the drive sizes. Seems too good to be true.
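The capacity math behind that layout can be sketched like so (a minimal illustration; it assumes raidz1, which spends one disk per vdev on parity):

```python
# Usable space of a pool built from striped raidz1 vdevs.
# Each raidz1 vdev contributes (n - 1) * disk_size; striped vdevs
# simply add their capacities together, RAID 50-style.
def raidz1_usable(disk_sizes_tb):
    # disks within one vdev should all be the same size
    assert len(set(disk_sizes_tb)) == 1
    return (len(disk_sizes_tb) - 1) * disk_sizes_tb[0]

vdev1 = [1, 1, 1, 1]   # four 1TB drives now
vdev2 = [2, 2, 2, 2]   # four 2TB drives added later
pool_tb = raidz1_usable(vdev1) + raidz1_usable(vdev2)
print(f"usable: {pool_tb} TB")   # 3 TB + 6 TB
```

Each vdev keeps its own single-disk redundancy, so one drive can fail in each batch without data loss.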
  9. Yes, that is how it works. It can only work like that when the RAID engine and the filesystem are one thing, and that is the case with ZFS.

    About your power consumption: that is probably due to the cheap power supply. Generic power supplies drop to near 50% efficiency at very low loads (<40W), and virtually no PSU gets 80% efficiency at 5% load. That's why I prefer a picoPSU when possible - but only when not using 3.5" desktop-class HDDs.
  10. Using the NAS with my two Seagate LP 1TB drives in JBOD. Just turning it on long enough to back up each computer, one by one.

    So, I copied my wife's laptop backup files (as these are important to her, there is no compression, just a straight file copy). The folder is 72GB and includes one large complete hard-drive backup and a folder full of Excel documents or whatever she has there.

    The 72GB took about 25 minutes. The speed/timing indicator in the Windows copy dialog started at about 65MB/s and slowed to maybe 58 near the end. I'm very happy with that considering the write speed of a 5900 RPM low-power drive. Can't wait to buy a couple more drives and build a ZFS pool.
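For the curious, the copy time works out like this at a few assumed sustained rates (sample points only, not measurements):

```python
# Time to move a 72 GB folder at various sustained write speeds.
size_mb = 72 * 1024          # 72 GB expressed in (binary) MB
minutes = {}
for mb_per_sec in (50, 60, 110):
    minutes[mb_per_sec] = size_mb / mb_per_sec / 60
    print(f"{mb_per_sec} MB/s -> {minutes[mb_per_sec]:.1f} min")
```

At roughly 50 MB/s sustained, 72 GB takes about 25 minutes, which lines up with the timing above; a link-saturating ~110 MB/s would cut that to around 11 minutes.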

    Anyway, the project is just waiting for a house to be built around it. Next update may not be for 6 months.
  11. Nice! Keep in mind that higher performance often requires a protocol other than SMB/CIFS (Samba), also known as Windows file sharing. The NFS (Network File System) protocol often allows for higher speeds. Windows 7 includes an optional NFS client; you can enable it under 'Turn Windows features on or off' in the Programs and Features screen.
  12. Only Windows 7 Ultimate (and Enterprise) has the NFS client. I wanted to quickly share some things from some VMs I was playing with and ran into this problem, as I only have Professional edition. Too bad I didn't notice this difference when I was comparing the two. I've not found a free solution yet. But yes, NFS should be a little faster than Samba/CIFS.
  13. Wow, it's true. I don't have the option to enable the NFS client. It's supposed to be in Programs and Features, then 'Turn Windows features on or off'... Simply not there.

    Will concern myself with this more if it becomes a limiting factor.
  14. With this setup, how bulletproof is it?

    I admit I am new; I just found out about ZFS last night while looking (again) for a decent RAID 10 NAS setup, and I am still a bit unclear about how much better ZFS is than regular RAID. :\ I will visit the wiki page later tonight, but I wanted to post here in case I forget to read it.

    Also, ECC RAM never occurred to me. So this setup you have does not use it, I take it?

    Also, about the hard drives you picked up recently: I read in another post somewhere that the new WD drives ending in EARS do not have some sort of time-limited error recovery (TLER or something? Tourette's?) that the drives ending in EADS do. Is that a problem with this type of setup, or do you not need to worry about it because that is a RAID-controller thing and not a ZFS-raid thing?

    Yeah, I am the noob but I am looking for some clarification before I move ahead.

    Oh, here is part of what I was skimming on

    Could be nothing at this point, but I wanted to ask, since y'all know more than I do here at this point.

  15. How bulletproof? Well, I can't really think of any single failure that cannot be recovered from. This is a home NAS, not a NASA NAS. So the hardware is not 'bulletproof', but I'm happy with the redundancy and inherent data-corruption protection this system offers.

    It took me a while to get over the ECC thing. Intel boards (other than Xeon-level ones) don't seem to support ECC, so you can go with AMD or... However, one of the huge advantages of ZFS is its built-in error detection and correction. In a RAID situation, where terabytes of information are susceptible to single-sector corruption, it becomes very important to ensure that the data you 'save' is exactly the data that gets saved. Hence error correction. Ultimately one should use ECC RAM to protect data going in and out of the RAM buffer, and ZFS to protect data going to disk. However, simply using ZFS is already multitudes better than a basic RAID level. So, how far do you take it?

    TLER is something enterprise-level drives do to let a RAID controller know when they are crunching through a bad sector. Again, ZFS eliminates the need for this, as we are using a software 'raid' which is much more forgiving. The danger (as your thread mentions) is that a non-TLER WD drive will hit a bad sector (and 2TB drives have plenty of sectors), and after some short period of time the controller will report that it has a 'bad' disk. The disk may not actually be bad (by my standards), but it has timed out the controller and now your array is degraded. Worse, you may try to rebuild it and a second drive craps out.

    Basic ZFS zraid is like RAID 5: there is some redundancy already. zraid2 is like RAID 6, where you have two-disk-failure redundancy. Me, I'm willing to live with the first. Again, this isn't mission-critical stuff, and if I have to run for a week with a degraded array while I buy another disk? Fine. I plan to have some kind of cheap NAS backup in the garage anyway, just in case. As I said before, I am quite sure I can handle any single hardware failure that I am aware of. An 'offsite' backup is one more level of protection against theft/fire/etc.
  16. Nice post, adampower!