Technical Specifications
The FreeNAS Mini uses components chosen for stability, performance and reliability, including an eight-core Intel Avoton processor that consumes only 17 watts. The efficient processor works in conjunction with a minimum of 16GB of ECC DDR3 DRAM. Since the FreeNAS operating system uses the DRAM as a cache for in-flight data, you'll want to stick with ECC memory to protect that information from corruption.
Two Intel gigabit Ethernet ports on the back of the system let you run the storage server on two independent networks or together in a teamed configuration. If your network switch supports it, 802.3ad (LACP) link aggregation is available, as are other configurations like round-robin. The system also has a dedicated out-of-band management port (IPMI) for console access and motherboard configuration changes. iXsystems also offers the system with dual-port 10GbE as an upgrade option.
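For reference, the underlying FreeBSD commands for an LACP lagg look roughly like the sketch below. FreeNAS normally handles this through its web GUI, and the igb0/igb1 interface names and the example address are placeholders you would adjust for your own system.

  # Create a lagg interface and bond the two Intel gigabit ports with LACP (802.3ad).
  # Interface names and the address below are examples only; verify yours with ifconfig.
  ifconfig lagg0 create
  ifconfig lagg0 up laggproto lacp laggport igb0 laggport igb1
  ifconfig lagg0 inet 192.168.1.10/24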
Four external hot-swap drive sleds hold your storage disks; iXsystems uses Western Digital Red drives for the FreeNAS Mini family. Two internal 2.5" drive bays hold ZIL and L2ARC cache drives, but these are add-on options. The operating system resides on a dedicated SATA DOM.
Over the last year, SSD cache has gained popularity in low-cost NAS systems used for virtual machine storage and databases. Processing performance is up significantly while prices are down. In fact, performance is so good now that multiple high-I/O applications can run on the same system. SSD cache increases I/O throughput to match host processing capabilities.
FreeNAS gives users the ability to accelerate two specific areas with SSDs: the ZIL and the L2ARC. Joshua Paetzel, iXsystems Senior Engineer, explains the two caches best.
ZIL Devices: ZFS can use dedicated devices for its ZIL (ZFS intent log). This is essentially the cache for synchronous writes. Some workflows generate very little traffic that would benefit from a dedicated ZIL; others use synchronous writes exclusively and, for all practical purposes, require a dedicated ZIL device. The key thing to remember here is that the ZIL always exists in memory. If you have a dedicated device, the in-memory ZIL is mirrored to the dedicated device; otherwise, it is mirrored to your pool. By using an SSD, you reduce latency and contention by not using your data pool (which is presumably comprised of spinning disks) to mirror the in-memory ZIL.
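Attaching a dedicated ZIL (often called a SLOG) is a one-line operation from the shell, although the FreeNAS volume manager can do the same thing from the GUI. The sketch below assumes a pool named tank and devices ada4/ada5; substitute your own names.

  # Add a single dedicated log device to the pool.
  zpool add tank log ada4

  # Or add a mirrored pair of log devices (optional on current ZFS, but still common).
  zpool add tank log mirror ada4 ada5

  # The log vdev should now appear in the pool layout.
  zpool status tank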
There’s a lot of confusion surrounding ZFS and ZIL device failure. When ZFS was first released, dedicated ZIL devices were essential to data pool integrity: a missing ZIL vdev would render the entire pool unusable. With those older versions of ZFS, mirroring the ZIL devices was essential to prevent a failed ZIL device from destroying the entire pool. This is no longer the case; missing ZIL vdevs will impact performance but will not cause the entire pool to become unavailable. However, the conventional wisdom that the ZIL must be mirrored to prevent data loss in the case of ZIL failure lives on. Keep in mind that the dedicated ZIL device is merely mirroring the real in-memory ZIL. Data loss can only occur if your dedicated ZIL device fails and the system crashes with writes in transit in the unmirrored in-memory ZIL. As soon as the dedicated ZIL device fails, the mirror of the in-memory ZIL moves to the pool (in practice, this means you have a window of a few seconds following a ZIL device failure where the system is vulnerable to data loss).

After a crash, ZFS will attempt to replay the ZIL contents. SSDs themselves have a volatile write cache, so they may lose data during a bad shutdown. To ensure the ZIL replay has all of your in-flight writes, the SSDs used as dedicated ZIL devices should have power-loss protection. HGST makes a number of devices that are specifically targeted as dedicated ZFS ZIL devices, and other manufacturers like Intel offer appropriate devices as well. In practice, only the designer of the system can determine whether the use case warrants an enterprise-grade SSD with power protection or whether a consumer-level device will suffice. The primary characteristics to look for are low latency, high random write performance, high write endurance and, depending on the situation, power protection.
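If a dedicated ZIL device does fail, recovery is straightforward on current ZFS. The commands below are a sketch using the same hypothetical pool and device names as above.

  # Check pool health; a failed log device will typically show the pool as degraded.
  zpool status -x

  # Swap the failed log device for a new one...
  zpool replace tank ada4 ada6

  # ...or simply remove it and run with the in-memory ZIL mirrored to the pool.
  zpool remove tank ada4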
L2ARC Devices: ZFS allows you to equip your system with dedicated read cache devices. Typically, you’ll want these devices to be lower latency than your main storage pool. Remember that the primary read cache is system RAM (the ARC), which is orders of magnitude faster than any SSD. If you can satisfy your read cache requirements with RAM, you’ll enjoy better performance than if you use an SSD read cache. In fact, there is a scenario where an L2ARC can actually reduce performance. Consider a system with 6GB of memory cache (ARC) and a working set that is 5.9GB: this system might enjoy a read cache hit ratio of nearly 100%. If an SSD L2ARC is added to the system, the L2ARC requires space in RAM to map its address space. That space comes at the cost of evicting data from memory and placing it in the L2ARC, so the ARC hit rate will drop, and misses will be satisfied from the (far slower) SSD L2ARC. In short, not every system can benefit from an L2ARC.

FreeNAS includes tools in the GUI and at the command line that can determine ARC sizing and hit rates. If the ARC size is hitting the maximum allowed by RAM and the hit rate is below 90%, the system can benefit from an L2ARC. If the ARC is smaller than RAM, or if the hit rate is 99.x%, adding an L2ARC will not improve performance. As for selecting appropriate devices for L2ARC, they should be biased toward random read performance. The data on them is not persistent, and ZFS behaves quite well when faced with L2ARC device failure. There is no need or provision to mirror or otherwise make L2ARC devices redundant, nor is there a need for power protection on these devices.
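As a rough sketch of the command-line side, the FreeBSD sysctls below expose ARC sizing and hit/miss counters, and a cache vdev is added much like a log vdev. Counter names can differ between ZFS versions, and tank/ada5 are placeholder names.

  # Compare the current ARC size against the maximum allowed by RAM.
  sysctl vfs.zfs.arc_max kstat.zfs.misc.arcstats.size

  # Hit ratio is hits / (hits + misses) from these counters.
  sysctl kstat.zfs.misc.arcstats.hits kstat.zfs.misc.arcstats.misses

  # If the ARC is maxed out and the hit rate is low, try an L2ARC device.
  # Cache vdevs never need to be mirrored.
  zpool add tank cache ada5
  zpool iostat -v tank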