Yeah, I've noticed Plex is a bit of a CPU hog (relatively speaking). I'd go with an i3 with ECC support. Don't think they make an i5 with ECC support. A Xeon is only necessary if you're going to do other things on this box, like run virtual machines or transcode a dozen Plex streams simultaneously.
https://ark.intel.com/search/advanced?ECCMemory=true&MarketSegment=DT
I seriously doubt you'll need 32GB for FreeNAS. Gobs of RAM are only needed for ZFS if you use deduplication (it has to keep the checksum of every block on disk in memory, so it can immediately check whether data you're writing is a duplicate). Deduplication is a huge, huge memory and CPU hog - my upload transfer speeds were an atrocious 25 MB/s and the CPU pegged at 100% with it on. Turning it off resulted in 90 MB/s upload speeds with almost no CPU use. I recommend leaving it off unless you absolutely need it (e.g. a business server with a hundred duplicate workstation virtual machines). Dedup was only saving me about 3% of my disk space (ZFS reports this sort of stuff to you). File compression actually ended up saving me more space (5%) for much less CPU and RAM use, especially when I set aside a special dataset with the highest compression level available and stored compressible archives there (16% space reduction).
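If you want to check what dedup and compression are actually buying you before committing either way, it's a few one-liners. A sketch (`tank` and `tank/archives` are placeholder pool/dataset names - substitute your own):

```shell
# See how much space dedup is actually saving (1.03x ~= the 3% above)
zpool list -o name,size,alloc,dedupratio tank

# Stop deduplicating new writes (already-deduped blocks stay as they are)
zfs set dedup=off tank

# Cheap pool-wide compression (lz4, if your ZFS version has it)...
zfs set compression=lz4 tank

# ...plus a dedicated dataset with the heaviest gzip for compressible archives
zfs create -o compression=gzip-9 tank/archives

# Check what compression is saving you
zfs get compressratio tank tank/archives
```

These need a live pool, so treat them as a starting point rather than a script to paste.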
Without deduplication, my FreeNAS has been running happily with 5 GB of RAM (I actually run it in a virtual machine - in hindsight I wouldn't recommend that, but that's how I set it up and I've been too lazy to "fix" it). Looking at the memory use logs, it slowly ramps up to 5 GB as I read/write files on the server; when it hits 5 GB it purges the cache, drops back down to about 2 GB, and slowly builds up again.
ZFS with redundancy (raidz) is better than traditional RAID. Its redundancy is block-level instead of disk-level. If two sectors on two disks fail and they just happen to belong to the same file, ZFS will mark just that one file as unrecoverable. RAID will dump your entire array and you will lose all your data.
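For reference, creating a raidz pool and checking for damage looks roughly like this (a sketch - `tank` and the `da*` disk names are placeholders):

```shell
# Double-parity raidz2 across four disks: any two can fail outright
zpool create tank raidz2 da0 da1 da2 da3

# After errors, -v lists the specific files ZFS couldn't recover -
# everything else in the pool stays readable
zpool status -v tank
```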
ZFS will also check every file on the server once every 35 days (called a scrub), verifying the data on disk against its checksum. If it notices something has changed (bit rot), it will cross-check the parity data against the checksum and correct the error. RAID will just say a disk has failed. (This is why ECC RAM is important - if a memory error tricks ZFS into thinking a file on disk is corrupt, it can end up "correcting" a good file into a bad one.)
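You can also kick off a scrub by hand whenever you like (sketch; `tank` is a placeholder pool name):

```shell
# Read every block and verify it against its checksum, repairing
# from parity where possible
zpool scrub tank

# Shows scrub progress and any checksum errors found/repaired
zpool status tank
```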
ZFS also supports snapshots, so you don't have to back up your file server as often (believe me, backing up 8 TB is a PITA - it takes nearly 24 hours even for an incremental backup, since every file still has to be read to generate a checksum confirming it hasn't changed from the version in the backup).
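Snapshots also help the backup problem directly: zfs send -i ships only the blocks that changed between two snapshots, so no full re-read of the pool is needed. A sketch (dataset, snapshot, and host names are all placeholders):

```shell
# Point-in-time snapshot; nearly free, only changed blocks use space later
zfs snapshot tank/data@2019-01-01

# A month later, replicate just the changes to another machine
zfs snapshot tank/data@2019-02-01
zfs send -i tank/data@2019-01-01 tank/data@2019-02-01 | ssh backuphost zfs recv backup/data
```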
ZFS is also agnostic to the storage medium. If you want to mix different-size disks together, you can (though same-size restrictions still apply within a raidz); use the extra space on the larger drives to form other raidz arrays. If you want to mix different types of media together, you can. You can do crazy things like link a disk, a file on another disk, a network-attached drive, and a tape drive together to form a raidz array. ZFS doesn't care that they're different types of media; it just requires that each member contribute the same amount of storage.
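You can demo the media-agnostic part safely with file-backed "disks" (sketch - demo only, don't store real data this way):

```shell
# Three 1 GB files standing in for disks
truncate -s 1G /tmp/vdev0.img /tmp/vdev1.img /tmp/vdev2.img

# ZFS happily builds a raidz out of them - it only cares that
# the members are the same size
zpool create demo raidz /tmp/vdev0.img /tmp/vdev1.img /tmp/vdev2.img
zpool destroy demo
```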
The only real drawback I've found with ZFS is that it's difficult to expand your storage. It's easy to add storage (you can always attach another raidz array to the pool), but if you've got an 8 TB raidz array you'd like to expand to 12 TB, it's difficult to do without just adding more drives.
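The one in-place growth path I know of is swapping every member for a bigger disk, one at a time, letting it resilver between swaps (sketch; device names are placeholders):

```shell
# Let the vdev grow automatically once all members are bigger
zpool set autoexpand=on tank

# Replace one disk, wait for the resilver to finish (zpool status),
# then repeat for each remaining member
zpool replace tank da0 da4
```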
Edit: The SSD isn't really necessary, but if you have one, you can assign portions of it to cache the ZFS arrays. There are two types of caching (file cache and index cache - not their real names, I don't remember what ZFS calls them), and the size requirements and usefulness are pretty complicated, so read up on them thoroughly before deciding how to use the SSD as cache. Why would you want an SSD cache? Remember that HDDs, even when linked together in an array, have horrendous small-file read/write speeds of about 1 MB/s. SSDs are dozens or hundreds of times faster at small-file reads/writes, so an SSD cache lets all my file transfers stay close to Gigabit ethernet speeds instead of always being limited by HDD speed. I would say booting off a USB flash drive or SSD is preferred, just to avoid the FreeNAS installation being tied to one of your data drives.
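For what it's worth, the two SSD roles are usually called the L2ARC (the read cache) and the SLOG/ZIL (a log device that absorbs synchronous writes). Attaching SSD partitions looks roughly like this (sketch; pool and partition names are placeholders):

```shell
# One SSD partition as a read cache (L2ARC)...
zpool add tank cache ada1p1

# ...and another as a separate intent log for synchronous writes (SLOG)
zpool add tank log ada1p2
```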