NAS + Server HDD/Storage setup

thegloaming

Prominent
Oct 4, 2017
I'm a newbie setting out to build a homelab server, which would also run FreeNAS in a VM. It would likely also run other VMs/apps (now or later) for software development, data analysis, home automation and IoT, email server, web server etc.

I've been trying to figure out the optimum storage setup for this, and it's causing me much confusion. I think it's better to first define the storage setup I'm thinking about, and from there gauge what kind of storage I need and how many instances. I'll try to explain this below (I'm a newbie to this and might get terms and concepts wrong, so bear with me). I'd appreciate your insights to help me make a qualified decision, so I can then buy the optimum parts.

I am thinking of 3 separate storage pools at the moment:

  • A Server OS Pool: This will hold the server image (and possibly also the different VM images and other core files). I am considering doing this with a mirrored pair of 120 GB Samsung 850 EVO solid-state drives (probably partitioned for main server vs. VM images).

  • A FreeNAS Pool: This will be the dedicated storage given to the FreeNAS VM (via PCI passthrough) - I intend to get an HBA and connect the drives to FreeNAS through it. I've been reading up bits and pieces on how to configure this (RAIDZ, RAIDZ2, mirrored vdevs, ...), and have little idea at the moment as to how this setup will look. At the moment I do not have much data to store on FreeNAS, but I see that growing over time. So I'd like a setup where I can add more storage easily down the road. Towards this, I wonder if I should get an HBA with external SAS ports rather than internal ones, and set up this HDD pool on some external hot-swappable/pluggable storage (but I think this might be overkill at the moment). Eventually, if I move FreeNAS into a standalone bare-metal setup, it would be easy to have the external storage move with it.

  • A Server Pool: This would be the storage for everything else running on my Linux server. Initially this would be a single storage setup, but as specific apps/VMs move from development/playground stages to production stages, I would likely need to create separate storage setups for them. I'm thinking of initially housing the HDDs for this inside my main case, set up as a RAID array (RAID5, RAID6, or RAID10 perhaps, but I need to do much more analysis here too). [Or could I perhaps store all this within the FreeNAS storage, with the caveat that I must ensure FreeNAS is up and running before these are? If it's even possible for these apps to access the FreeNAS data.]
Does this make some sense?
If the setup does, then I can venture into what HDDs I would need to buy for each.
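To make the FreeNAS pool idea concrete, here's a rough sketch of what creating and later growing a RAIDZ2 pool looks like from the ZFS command line. Device names (da0, da1, ...) and the pool name "tank" are placeholders, and in FreeNAS you'd normally do this through the web UI rather than the shell:

```shell
# Create a 6-disk RAIDZ2 pool (survives any two disk failures).
# Device names are placeholders for your actual drives.
zpool create tank raidz2 da0 da1 da2 da3 da4 da5

# Growing later: add a second 6-disk RAIDZ2 vdev to the same pool.
# Note this widens the pool with a new vdev; it does not restripe
# existing data across the new disks.
zpool add tank raidz2 da6 da7 da8 da9 da10 da11

# Verify the layout and health.
zpool status tank
```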
 
That's actually really close to the way I have mine set up.

  • 500 GB SSD which acts as the OS drive for ESXi, FreeNAS, and various VMs, as well as ESXi cache partitions.
  • Some of the excess SSD space holds a disk partition for the VM which sees the most disk activity (video rendering).
  • 4x4TB HDDs set up as a raidz volume in FreeNAS for all my other storage.

It works, but if I had it to do over again, I'd make a second computer the FreeNAS system, and have it run FreeNAS natively instead of as a VM. It's inconvenient having my NAS go down whenever I have to upgrade ESXi or work on my VMs, especially since the NAS holds the install images for those VMs. It sounds like that's your long-term plan, so you're on the right track.

I also took the trouble to configure the HDDs as raw disks for the FreeNAS VM to prevent there being two layers of filesystems on all NAS disk accesses, and so I'd be able to pop the HDDs into a bare metal (non-VM) FreeNAS install (I haven't actually tested this yet, but it should work in theory). It's doable, but it complicates the setup considerably (I have 3-4 pages of notes because there's no way I'd remember it all).
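For anyone curious what the raw-disk setup involves: on ESXi this is done with raw device mappings (RDMs). A minimal sketch, with device and datastore paths as placeholders for your own hardware:

```shell
# List physical disk identifiers visible to ESXi.
ls /vmfs/devices/disks/

# Create a physical-mode RDM pointer file for one HDD (path is a
# placeholder; yours will have a different device ID).
vmkfstools -z /vmfs/devices/disks/t10.ATA_____EXAMPLE_DISK_ID \
  /vmfs/volumes/datastore1/freenas/hdd1-rdm.vmdk

# Then attach hdd1-rdm.vmdk to the FreeNAS VM as an existing disk;
# FreeNAS sees the whole physical drive, not a virtual disk.
```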

I played around with PCI passthrough (ESXi 5.5), but it decreased my FreeNAS uptime. FreeNAS basically never crashes, but it does develop small errors over time which cause it to slow down. So I reboot it every few months. But after I enabled PCI passthrough, FreeNAS would freeze every 1-3 weeks and I'd have to reboot it to unlock it. I dunno if the VM software just isn't ready, or if it's something about my hardware. I just couldn't use it, and the performance gain was minimal (again, the HDDs are set up as raw disks).

Adding storage to FreeNAS is easy (i.e. adding new drives to create new volumes). Expanding existing storage (adding more or larger drives to try to increase an existing volume) is a PITA. Just a drawback of ZFS. It's been years since I read up on the details so I can't explain why anymore; just be aware that it's easiest to create a new, bigger volume with new drives, and copy all your data from your old volume over to it.
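That "build new, copy over" migration can be done cleanly with ZFS replication rather than a plain file copy. A hedged sketch, with pool and device names as placeholders:

```shell
# Build the new, larger pool on the new drives.
zpool create bigtank raidz2 da6 da7 da8 da9 da10 da11

# Take a recursive snapshot of everything on the old pool.
zfs snapshot -r tank@migrate

# Replicate the whole old pool (datasets, properties, snapshots)
# into the new one.
zfs send -R tank@migrate | zfs receive -F bigtank

# After verifying the data on bigtank, retire the old pool.
zpool destroy tank
```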

ZFS works best with 2, 3, or 5 drives in a volume. It can do 4 (I'm running 4), just be aware that it's not an optimal number of disks. Again, it's been years since I set it up so I don't remember why, but this is one of the idiosyncrasies. There's a similar weird pattern for more than 5 drives, so read up on it if that's your plan.

Backing up VMs in the free version of ESXi is painful. I haven't found a good solution to it yet. I resorted to creating a NFS shared zvol in FreeNAS, make it available to ESXi (which can mount NFS shares), then I shut down a VM and copy its files to the NFS share. Clumsy but it works. Except I can't backup the FreeNAS VM this way. I think I actually ended up duplicating the FreeNAS VM to act as my "backup." It's small so doesn't take much disk space.
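For reference, the NFS-share workaround described above looks roughly like this from the ESXi shell. The IP address, share path, and names are placeholders for your own setup:

```shell
# Mount a FreeNAS NFS export as an ESXi datastore.
esxcli storage nfs add --host 192.168.1.50 \
  --share /mnt/tank/backups --volume-name nas-backups

# With the VM powered off, copy its directory to the NFS datastore.
cp -r /vmfs/volumes/datastore1/myvm /vmfs/volumes/nas-backups/
```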
 

thegloaming

Prominent
Oct 4, 2017


Thanks a lot for the detailed and insightful information - this does help me a lot. I'm a newbie both in the Linux server world as well as FreeNAS, and I appreciate all the wisdom coming my way!

I have indeed been thinking of a setup similar to yours (but using KVM instead of ESXi) - but I had not realized the annoyance of having the NAS go down whenever you need to upgrade or work on the VMs.

My thoughts on the FreeNAS setup have just been significantly changed based on recent readings (where I better understand the complexities of HDDs vs vdevs vs zpools). While I was earlier thinking of having a zpool based on a vdev with 2x4TB HDDs, and later adding new HDDs or a new vdev, I now see the value of having 6 HDDs from the start for FreeNAS. I also understand I am going to have to spend a lot of time to understand both ZFS and FreeNAS to get a proper system in place. This will need experimenting and tinkering before I have a production ready setup.

So my thoughts now are:
1. Set up basic storage for the Linux server, likely 2x4TB with redundancy via RAID1 (mirroring).
2. Set up FreeNAS as a VM to learn, experiment, and play. I intend to do this with PCI passthrough - will see how it pans out. Depending on my budget at the time I can do this with 4 or 6 HDDs in RAIDZ2 (likely 6x2TB), and add new vdevs where needed. And back up a lot in case I make errors or the system crashes. At this stage, still keep the server and FreeNAS pools separate.
3. Once I am confident with FreeNAS, I move it into production with a 6x4TB or 6x6TB setup. And likely go native/bare-metal based on all the advice I've read.
4. Once the production FreeNAS is stable, I move my server data to FreeNAS, and then use these server HDDs for something else (if I'm not too old and feeble by then :|)
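Step 1 above can be sketched with mdadm on the Linux server. Device names, mount point, and filesystem choice are placeholders, and note that creating the array destroys any existing data on the disks:

```shell
# Create a 2-disk RAID1 mirror (WARNING: wipes both disks).
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc

# Put a filesystem on the array and mount it.
mkfs.ext4 /dev/md0
mount /dev/md0 /srv/data

# Persist the array definition so it assembles on reboot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf
```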

So that's the free time for the next one year taken care of!