10GbE Switch XS708E: Connecting Server w/2 10GbE RJ45 Ports & HP Z820 w/1 10GbE RJ45 Port
Tags:
- Workstation
- Servers
- Switch
- Netgear
- Networking
- 10gbe
- xs708e
- 10gbe switch
- Port
computergiant
February 10, 2014 8:52:30 AM
Hello,
I am connecting a new Dell 620 server w/2 10GbE ports from a Broadcom 57810 Dual Port 10GBase-T Network Adapter, and a new HP Z820 w/1 10GbE port from an Intel X540-T1 10GbE Network Card, to a 10GbE switch.
These are connecting to the Netgear XS708E 8 port 10GbE switch. I would like to do link aggregation so the server is running both of its 10GbE ports together as a single 20GbE port.
How do I do this with the equipment stated above? Thanks!
bill001g
February 10, 2014 9:37:05 AM
What is your question? Microsoft calls this NIC teaming, and they have pretty good instructions on their site. The Netgear side should be as simple as assigning the ports to the same group. It does not appear this switch supports LACP, so you will have to configure it manually.
Be aware that link aggregation only helps if the aggregated machine is communicating with many other machines. It is pretty crude in its method of path selection, so you will never get even utilization across the interfaces.
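The crude path selection described above can be sketched in a few lines: a static LAG picks a member link by hashing flow identifiers (MAC or IP addresses), so a given pair of machines is always pinned to the same physical port. The hash below is a hypothetical illustration of the general scheme, not the XS708E's actual algorithm.

```python
# Sketch of how a switch or OS typically chooses a LAG member port:
# hash the flow identifiers (here, the source/destination MACs), so any
# one flow always lands on the same physical link. Hypothetical hash,
# for illustration only.

def lag_member(src_mac: str, dst_mac: str, num_links: int) -> int:
    """Pick a member link by hashing the MAC pair (a common scheme)."""
    # XOR the last octet of each MAC, then take it modulo the link count.
    src = int(src_mac.split(":")[-1], 16)
    dst = int(dst_mac.split(":")[-1], 16)
    return (src ^ dst) % num_links

server = "00:1b:21:aa:bb:01"      # made-up MACs for the example
workstation = "00:1b:21:cc:dd:02"

# The same pair of machines always maps to the same link, so a single
# transfer between them can never use more than one 10GbE port.
picks = {lag_member(server, workstation, 2) for _ in range(100)}
print(picks)  # a single element: the flow is pinned to one link
```

This is why a 2x10G LAG behaves like 20G only in aggregate across many peers, never for one server-to-workstation session.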
computergiant
February 10, 2014 10:29:09 AM
bill001g said:
What is your question. Microsoft calls this nic teaming. They have pretty good instructions on their site. The netgear should be as simple as assigning the ports to the same group. It does not appear this switch supports LACP so you will have to do it manually.Be aware link aggregation only helps if you are communicating with many machines from the aggregated machine. It is pretty stupid in its method of path selection so you will never get even utilization on the interfaces.
Thanks for the response. My question really is "What is the benefit of doing link aggregation on this switch between the server and workstation?" I have done NIC teaming before but that's not link aggregation from my understanding.
I am unfamiliar with Link Aggregation in practice but it looks to be implemented quite often in VM solutions. The server has VMWare Essentials installed on it and hosts at least 3 virtual machines running Windows 2008 Server in their normal disparate roles (i.e. database server, exchange server, active directory server, etc.).
The data on the workstation is backed up nightly to the server. The data is a lot of AutoCAD models and SQL databases.
So would it be beneficial to do Link Aggregation between the server VMs and the workstation? We're talking sometimes 1 TB of data backed up per evening, depending upon the backup model that day.
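For scale, that nightly 1 TB backup can be timed out roughly at different line rates. This is back-of-envelope arithmetic only, ignoring protocol overhead and the disk bottlenecks that often dominate in practice:

```python
# Rough transfer times for a 1 TB backup at different line rates
# (line rate only; real transfers are slower due to protocol overhead
# and disk throughput limits).

def transfer_hours(terabytes: float, gigabits_per_sec: float) -> float:
    bits = terabytes * 1e12 * 8              # TB -> bits (decimal units)
    return bits / (gigabits_per_sec * 1e9) / 3600

for speed in (1, 10, 20):
    print(f"{speed:>2} Gbps: {transfer_hours(1.0, speed):.2f} h")
```

At 1 Gbps the backup window is over two hours; at 10 Gbps it drops to roughly 13 minutes, so the single 10GbE link is already a large win even without aggregation.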
Best solution
bill001g
February 10, 2014 11:34:44 AM
From what I can tell, NIC teaming and link aggregation are the same thing, but you never really know with Microsoft; it seems to partially depend on exactly which release you have. The very newest ones, I know, support true link aggregation.
Let's take a different case. You have two servers, both with 2 10G ports, so each has a 20G connection to the switch. The key problem is that they pick one of the two 10G ports based on IP or MAC or whatever, which means a session between two devices will always take the same path. So it will only use one of the 10G connections and leave the other unused.
In the case of one machine at 20G and the other at 10G, you will be limited to 10G anyway.
Still, even at 10G speeds you will many times be slowed down by bottlenecks in the disk subsystem.
computergiant
February 10, 2014 11:47:23 AM
bill001g said:
From what I can tell nic teaming and link aggregation are the same thing but you never really know with microsoft it seems to partially depend exactly which release you have. The very newest ones I know support true link aggregation.Lets take a different case. You have 2 servers both with 2 10g ports so each has a 20g connection to the switch. The key problem with this is they pick one of the 2 10g ports based on ip or mac or whatever. This means that for a session between 2 device it will always pick the same path. So it will only only use 1 of the 10g connections and leave the other unused.
In the case of have 1 machine at 20g and the other at 10g you will be limited to 10g anyway.
Still even at the 10g speeds you many times will be slowed down by bottlenecks in the disk subsystem
Okay! So Link Aggregation is the same as NIC Teaming, which I have implemented before. I was just confused by the new terminology, but http://thejimmahknows.com/link-aggregation-with-lacp-an... cleared that up.
It will be helpful in our implementation since the server is not connected just to this workstation but also to others via another switch running at 1GbE.
Link Aggregation Group (LAG) – the set of physical ports, connections, etc., contained within a logical group.
EtherChannel/PortGroup/NIC Teaming – all vendor-specific terms for LAGs. The underlying protocol used is explicitly configured by the administrator.
choucove
You should be able to set up NIC teaming on your server's Ethernet ports, but technically the switch doesn't support it, meaning you need an operating system that can set up NIC teaming as "switch independent," not requiring LACP. Right now the only one I know of that does this is Server 2012 or Server 2012 R2. Otherwise, even though your computer will recognize that you have combined your two physical Ethernet ports into a single virtual NIC, your switch won't have any clue and will still see the two physical ports independently, meaning configuring teaming does no good.
I've set up teams within Windows Server 2012 for Hyper-V on switches that don't support LACP and it still works pretty well. However, as stated above, it's still limited to the speed of a single connection (whatever the speed of your network switch port is). This means that in your case, while the server could still take in 20 Gbps in aggregate, each computer's connection back to the server will max out at 10 Gbps no matter what. The Hyper-V teaming feature automatically load-balances multiple virtual Ethernet adapters for virtual machines across your physical interfaces. In other words, for your scenario, if you set up both 10 Gbps NICs in a team, it might detect a sudden huge influx of data from your workstation, move that virtual NIC over to one of the two physical NICs, and move all other virtual machines to the other 10 Gbps NIC. Still, network traffic maxes out at a single interface's throughput.
computergiant
February 11, 2014 8:50:26 AM
choucove said:
You should be able to set up NIC teaming on your server ethernet ports, but technically the switch doesn't support it, meaning that you have to have an operating system that can set up NIC teaming as "switch independent" not requiring LACP. Right now the only one I know of that does this is Server 2012 or Server 2012 R2. Otherwise even though your computer will recognize that you have put your two physical ethernet ports together into a single virtual NIC, your switch won't have any clue and will still only see the two physical ports independently meaning configuring teaming does no good.I've set up teams within Windows Server 2012 for Hyper-V on switches that don't support LACP and it still works pretty good. However, as stated above, it's still limited to 1 Gbps per connection (or whatever the speed is of your network switch port.) This means that in your case while the server could still take in 20 Gbps, each computer connection will be a max of 10 Gbps getting back to the server no matter what. The Hyper-V teaming feature automatically load balances multiple virtual ethernet adapters for virtual machines across your physical interfaces. In other words, for your scenario, if you set up both 10 Gbps NICs in a team, it might detect a sudden huge influx of data from your workstation and move the virtual NIC over to one of the two physical NICs, and move all other virtual machines to the other 10 Gbps NIC. Still, the network traffic is a maximum of a single interface throughput, though.
Thank you for the great information. The server runs 2012 R2 on its VMs, so that is a bonus. I am a bit confused about the switch's specs and capabilities, since the spec sheet says it does support Port Trunking (Link Aggregation / LAG) and lists 4 LAGs with 2 to 4 members each. The switch is supposed to support this via static link aggregation, not dynamic (LACP). My problem is that I don't have the switch yet to see how to set up this static link aggregation, although it will be here by the end of the week. The server and workstation won't be in until next Wednesday, so I won't have all the hardware to test this out until then. My hope is that it does open up a 20GbE pipe on the switch itself, but I have no idea (as evidenced by me posting this topic).
I do like the VM having the capability to push traffic off of one NIC to free up the bandwidth on the other for any huge file transfers. Thank you for explaining that. I'll mention this to our VMware guy so it gets properly set up.
Unfortunately, your server running Server 2012 R2 in the virtual machine doesn't accomplish what you want for link aggregation at your server. That has to be done at the physical level on your system, which, if you are running VMWare, means you don't have that option. It requires whatever capabilities VMWare has for your link aggregation, which I believe is switch-dependent only.
The good news is, if your switch is manageable enough to set up LACP groups, then you should be able to get a full 20 Gbps between two devices if everything is LACP-compatible and you use the right address-hash method for aggregating the two connections. I haven't done this with 10 Gbps equipment, but I have done it on gigabit equipment: between two servers, with both NICs set up in Windows Server 2012 teaming using switch-dependent LACP address hash, you can push the full 2 Gbps in file transfer between the two servers.
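The reason multiple streams can fill a LAG when a single pinned flow cannot comes down to the hash inputs: if the hash includes transport-layer ports, different TCP connections between the same two hosts can land on different member links. The sketch below uses a hypothetical port-based hash to show the idea; real switches and OSes each have their own hash options.

```python
# Sketch: why several parallel transfers can fill a LAG even though one
# flow cannot. With an L4 (TCP port) hash, different connections can
# land on different member links. Hypothetical hash, not any specific
# vendor's algorithm.

def member_for_flow(src_port: int, dst_port: int, num_links: int) -> int:
    """Pick a member link by hashing the TCP port pair."""
    return (src_port ^ dst_port) % num_links

# Four parallel TCP streams from consecutive ephemeral ports to the
# same server port (445 = SMB, as in a file-copy workload).
flows = [(49152 + i, 445) for i in range(4)]
links_used = {member_for_flow(s, d, 2) for s, d in flows}
print(links_used)  # both links carry traffic for these port numbers
```

So a multi-stream backup tool or SMB multichannel-style transfer stands a much better chance of using both links than a single TCP session does.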
computergiant
February 11, 2014 11:41:27 AM
choucove said:
Unfortunately, your server running Server 2012 R2 in the virtual machine doesn't accomplish what you want for link aggregation at your server. That has to be done at the physical level on your system, which, if you are running VMWare, means you don't have that option. It requires whatever capabilities VMWare has for your link aggregation, which I believe is switch-dependent only.The good news is, if your switch is manageable to be able to set up LACP groups, then you should be able to set up a full 20 Gbps connection through between two devices if everything is set up with LACP compatible and you use the right address hash method for aggregating the two connections together. I haven't done this with 10 Gbps equipment, but I have done it on gigabit equipment. Between two servers, both NICs set up in Windows Server 2012 with teaming, switch-dependent LACP address hash, and you can push the full 2 Gbps in file transfer between the two servers.
Great info. It led me to this article about VMware, which provides a method for implementing static link aggregation on the VMs: https://blogs.vmware.com/vsphere/2013/01/vsphere-5-1-vd...
I'll post back our final process for the solution in a couple of weeks.