10 Gigabit Ethernet Versus Wireless-AD

10 Gigabit Ethernet would be the fastest, most reliable connection that doesn't require fiber, but also the most expensive.

10 Gigabit Ethernet / 10GBASE-T

D-Link 10-Port 10-Gigabit Ethernet Smart Managed Switch $781
https://www.newegg.com/Product/Product.aspx?Item=N82E16833127710

10Gb Single-Port PCI Express x8 Network Interface Card $190 x 2 = $380
https://www.newegg.com/Product/Product.aspx?Item=9SIABG84RR5657

That's roughly $1,161 ($781 + $380) to connect my NAS4FREE backup server to my main computer with 10GBASE-T.

10 Gigabit Ethernet is 1250 megabytes a second maximum (10,000 megabits ÷ 8 bits per byte).
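
As a quick sanity check on that conversion (a minimal sketch; these are raw line rates, not real-world throughput):

# Convert a raw line rate in megabits per second to megabytes per second.
def mbps_to_MBps(mbps):
    return mbps / 8  # 8 bits per byte

print(mbps_to_MBps(10_000))  # 10GBASE-T: 1250.0 MB/s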

Wireless-AD

TP-LINK Talon AD7200 Multi-Band Wi-Fi $329
https://www.newegg.com/Product/Product.aspx?Item=N82E16833704301

The computers are in the same room so we should not have to worry about range.

I am unable to find any USB 3.1 to Wireless-AD adapters.
Does anyone know where to buy them?

That puts the total cost at $329 + 2 × ?, which already starts to look vastly cheaper: if we assume maybe $70 for each adapter ($140 total), the educated guess comes to $469.

The 7200 megabit bandwidth is split across three bands:
60 GHz (4600 Mbps)
5 GHz (1733 Mbps)
2.4 GHz (800 Mbps)
I assume I can't "bond" the bands and am forced to choose the single 60 GHz (4600 Mbps) channel.

4600 megabits per second is 575 megabytes a second, which is an important value, as discussed in the next section. I would also assume there is essentially zero interference at 60 GHz, making data transfers relatively smooth and predictable.
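
The same sketch applied to the three bands shows the "7200" in AD7200 is just marketing shorthand for the per-band sum (again, these are advertised line rates only):

# Advertised per-band line rates for the AD7200, in Mbps.
bands_mbps = {"60 GHz": 4600, "5 GHz": 1733, "2.4 GHz": 800}

print(sum(bands_mbps.values()))      # 7133 Mbps, rounded up to "7200"
print(bands_mbps["60 GHz"] / 8)      # 575.0 MB/s on the 60 GHz band alone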


My gaming computer, which I also use for storage, has five 4-terabyte Western Digital Red Pros in a RAID 5.
My NAS4FREE backup server uses six 8-terabyte Western Digital Red Pros in a RAID-Z2, similar to RAID 6.

https://calomel.org/zfs_raid_speed_capacity.html
6x 4TB, raidz2 (raid6), 15.0 TB, w=429MB/s, rw=71MB/s, r=488MB/s
That shows a write speed of 429 MB/s with six drives in a RAID-Z2, although with 4-terabyte drives.
I assume six 8-terabyte drives should be comparable in speed, if not faster.
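
A rough rule of thumb (my assumption, not a measurement) is that sequential RAID-Z throughput scales with the number of data (non-parity) disks, which lines up reasonably with the calomel figures:

# Crude sequential-write estimate for a single RAID-Z vdev. The per-disk
# speed is an assumed average; real results depend on CPU, record size,
# and fragmentation.
def raidz_seq_write_estimate(total_disks, parity_disks, per_disk_MBps=110):
    return (total_disks - parity_disks) * per_disk_MBps

print(raidz_seq_write_estimate(6, 2))   # ~440 MB/s, close to calomel's 429 MB/s
print(raidz_seq_write_estimate(12, 3))  # ~990 MB/s for a 12-disk RAID-Z3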

Since the maximum expected write speed of my NAS4FREE server (~429 MB/s) is lower than the speed of Wireless-AD (575 megabytes a second), the link should not limit my array's maximum transfer speed.

I could reconfigure my NAS4FREE server into something similar to RAID 50 (striped RAID-Z vdevs), which would exceed Wireless-AD's maximum speed, but I don't see myself moving away from RAID 6/RAID-Z2.
If anything, I might move up to 12 drives in a RAID-Z3, which allows three drive failures.


Should I go the Wireless-AD route, which mathematically should allow me to write as fast as I possibly can with my present hard drives, and hopefully save some money (assuming USB 3.1 to Wireless-AD adapters aren't $200 each)?

Or go the expensive 10GBASE-T route, just in case 10-terabyte SSDs become affordable with QLC?

Solution
I went this route: https://www.amazon.com/XG-U2008-Unmanaged-2-Port-8-Port-Gigabit/dp/B01LZMM7ZO/ref=sr_1_1?ie=UTF8&qid=1507857287&sr=8-1&keywords=asus+10gb+switch

linking my NAS and server at 10 Gb and leaving the rest of the network at gigabit (mind you, there are more switches in between...)

edit:
Also using the $99 ASUS 10 Gb NICs. Normally I'd go with an Intel NIC, but as always those are way more expensive. I am, however, NOT able to saturate the 10 Gb port using my current RAID setup; I would probably need an SSD array to do that, or some kind of SSD cache.
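
If you want to see what the link itself can do, independent of the disks, here's a minimal raw-TCP throughput sketch (the port number is arbitrary; a dedicated tool like iperf3 does this job properly):

import socket
import time

PORT = 5201        # arbitrary free port
CHUNK = 1 << 20    # 1 MiB send/receive buffer

def server():
    # Receive data as fast as possible and report the average rate.
    with socket.create_server(("", PORT)) as srv:
        conn, _ = srv.accept()
        total, start = 0, time.time()
        while data := conn.recv(CHUNK):
            total += len(data)
        print(f"{total / (time.time() - start) / 1e6:.0f} MB/s")

def client(host, seconds=10):
    # Blast zero-filled buffers at the server for a fixed duration.
    buf = bytes(CHUNK)
    with socket.create_connection((host, PORT)) as conn:
        deadline = time.time() + seconds
        while time.time() < deadline:
            conn.sendall(buf)

Run server() on one machine and client("nas-address") on the other; keep in mind Python itself can bottleneck before 10 Gb does, so treat a low number as a floor rather than the link's limit.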

Solution
The wired solution would be much faster. First, the 60 GHz figure (4600 Mbps) is a theoretical speed: it is the layer-2 connection rate, not throughput. Throughput will be significantly less (as with all other wireless standards) because of the high overhead of wireless. What is the actual throughput? I don't know, since the standard is so new, but it is less than the theoretical number.

Next, if you hook one of the devices to the router with Ethernet, you will be constrained by the router's 1 Gbps ports. If you use both devices wirelessly through the router, you will cut your throughput in half, because the router can only talk to one device at a time (MU-MIMO only works on the 2.4 and 5 GHz bands). In other words, you're not going to get anywhere near the throughput you expect from the wireless solution.
One solution, if you only need really high speed between one computer and your NAS, is to just add a 10 Gb card to each device and run a cable between them for a direct connection, then use regular gigabit cards in the NAS and computer to connect to the rest of your network. That way you don't have to purchase a 10 Gb switch right now.
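
To put rough numbers on that (the 55% efficiency factor is purely an assumption for illustration; real 802.11ad overhead may differ):

PHY_MBPS = 4600          # advertised 60 GHz line rate
MAC_EFFICIENCY = 0.55    # assumed fraction left after wireless protocol overhead

one_hop_MBps = PHY_MBPS * MAC_EFFICIENCY / 8   # single wireless hop
relayed_MBps = one_hop_MBps / 2                # router relaying between two wireless clients

print(f"{one_hop_MBps:.0f} MB/s single hop")   # ~316 MB/s
print(f"{relayed_MBps:.0f} MB/s relayed")      # ~158 MB/s

Either figure is below the ~429 MB/s the RAID-Z2 pool can write, so under these assumptions the radio, not the array, becomes the bottleneck.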
 


I use CrashPlan Business to back up my data to my local NAS4FREE server as well as to their servers.

Once a month it does a thing called "Deep Maintenance"
https://support.code42.com/Administrator/5/Monitoring_and_managing/Archive_maintenance

The most resource-intensive part is "Validate block checksums for the entire archive,"
meaning the entire archive is fed through my main computer's RAM and each file's checksum is verified to make sure there are no errors.

"Code42 CrashPlan requires fast disk I/O. You may see performance bottlenecks on network storage, such as a NAS or a SAN that is not fiber attached. Consider moving to storage devices with higher throughput and lower latency."

That warning is what prompted this question.

To answer getochkn's question: technically I don't write large quantities of data,
but I do need to read 8 terabytes of data every 28 days during deep maintenance.
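
Back-of-the-envelope, here is how long that monthly 8 TB read takes at each candidate speed (link rates from above; the ~488 MB/s pool read limit is from the calomel table):

ARCHIVE_BYTES = 8e12  # 8 terabytes

for label, MBps in [("gigabit Ethernet",            125),
                    ("pool read limit (calomel)",    488),
                    ("Wireless-AD 60 GHz line rate", 575),
                    ("10GBASE-T line rate",         1250)]:
    hours = ARCHIVE_BYTES / (MBps * 1e6) / 3600
    print(f"{label}: {hours:.1f} hours")

# gigabit ~17.8 h, pool-limited ~4.6 h, 60 GHz ~3.9 h, 10GBASE-T ~1.8 h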


PS: I do see the irony of Crashplan checksumming data on a zfs file system.
My checksums are quite literally checksummed.
 


That ASUS XG-U2008 looks awesome and much cheaper than the full 10 gigabit switch I found.

I may end up buying that and waiting on a full 10 gigabit switch until it becomes more mainstream.

My NAS4FREE server has something even better than an SSD cache:

the ARC, ZFS's RAM-based read cache on the NAS4FREE server, which, as you guessed, is able to fully saturate a 10 gigabit or even a 40 gigabit link, assuming the data is in the ARC.

You can use an SSD to extend the ARC (a so-called L2ARC) for even more cache if needed.
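
For the curious, a little sketch to see how hot the ARC actually runs on a FreeBSD-based box like NAS4FREE (assuming the stock kstat.zfs.misc.arcstats sysctls are available):

import subprocess

def arc_counter(name):
    # Read one ZFS ARC statistic via sysctl(8) on FreeBSD.
    out = subprocess.check_output(
        ["sysctl", "-n", f"kstat.zfs.misc.arcstats.{name}"])
    return int(out)

hits, misses = arc_counter("hits"), arc_counter("misses")
print(f"ARC hit ratio: {hits / (hits + misses):.1%}")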

It is fairly easy to fully saturate gigabit with 6 drives, reading or writing.

Can't wait to see those 10 gigabit bursts when the data I need rests in the ARC.


 


I swear I remember reading that Wireless-AC was full duplex, which made me believe Wireless-AD was also full duplex.

So much for that wireless dream.

It might have actually been a dream lol.