I'm a relative newbie to the forums--but I did do a forum search and didn't find what I was looking for on these topics:
1. I understand that "Link Aggregation" (802.3ad) will 'effectively' double your bandwidth between two devices. Do both devices have to be 802.3ad 'ready' to be able to do this? For instance, if I have a Dell PowerConnect 2716 (a gigabit managed switch) which incorporates the 802.3ad standard--does the device on the other end also have to support the 802.3ad standard? (e.g. connecting the Dell 2716 to the two gigabit ports on a Dell 2324 unmanaged switch....or connecting to a PC with two gigabit NIC ports/cards in it?)
2. Similar question for 'Jumbo Frames'. A lot of newer gigabit switches support Jumbo Frames. Do the other components along the data stream also have to support Jumbo Frames as well (i.e. the NIC cards in a PC, other sub-switches, etc.)? And is there any special setup that needs to occur (or do you just plug stuff in and 'drag and drop' files to use the Jumbo Frames feature)?
Re: link aggregation / jumbo frames -- separate questions
1. Link aggregation can be used on one end of the communication, and it's in this way that it's most commonly used -- the server has multiple ports aggregated to the switch, and several clients, each with single ports. Each conversation typically runs on a single port in any case, so the aggregation is on the server side, not throughout the system.
It is however typical to link switches together with multiple ports, and for that you should have trunking support on both sides.
BTW, from what I've briefly seen, the Dell 2716 has a pretty dumb LAG distribution protocol -- all conversations are received on the first port in a LAG group. Of course the LAG group NICs can decide to transmit on different ports, according to the NIC drivers.
I'm not sure if you were asking this, but you certainly need some software support for aggregating multiple NICs on a computer. This is typically NIC-vendor-provided software / a driver. In some cases, such as with Linux, the OS itself provides the bonding support. Again, NICs don't need to be aggregated on both ends of the connection, and it often doesn't help even when they are.
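To make the "each conversation runs on a single port" point concrete: switches and bonding drivers typically hash a flow's addresses to pick one physical link, so a single transfer never spans two ports. This is a toy sketch of that idea (not the 2716's or any real driver's actual algorithm):

```python
# Toy model of a layer-2 LAG hash: XOR the last byte of the source
# and destination MAC addresses, then take it modulo the number of
# ports in the group. One src/dst pair always maps to one port.

def lag_port(src_mac: str, dst_mac: str, num_ports: int = 2) -> int:
    """Pick the LAG member port for a given conversation."""
    s = int(src_mac.replace(":", ""), 16) & 0xFF  # last byte of src MAC
    d = int(dst_mac.replace(":", ""), 16) & 0xFF  # last byte of dst MAC
    return (s ^ d) % num_ports

# Two different clients talking to the same server can land on
# different ports (good aggregate throughput), but any single
# conversation stays on one link -- so one file copy still tops
# out at one port's bandwidth.
print(lag_port("00:11:22:33:44:01", "00:11:22:33:44:10"))  # port 1
print(lag_port("00:11:22:33:44:02", "00:11:22:33:44:10"))  # port 0
```

That last comment is the practical takeaway: aggregation helps a server fan out to many clients, not a single point-to-point copy.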
2. Jumbo-enabled NICs need a jumbo-capable path between them -- through every switch, router, etc. -- for a conversation to be jumbo-capable, and when the NICs are jumbo-enabled but their path isn't, there can be problems. There are fewer problems, however, when one side is jumbo-enabled and the other side isn't: at the start of a TCP connection the two ends "negotiate" the segment size, so if one says "no can do", everything falls back to non-jumbo frames and avoids problems with non-compliant hardware in the path.
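The "negotiation" here is the TCP maximum segment size (MSS) exchanged in the handshake -- each end advertises roughly its MTU minus 40 bytes of IPv4+TCP headers, and the smaller value wins. A rough model (ignoring TCP options and path-MTU issues mid-path, which is exactly where the problems above come from):

```python
# Rough model of TCP MSS negotiation between two endpoints.

IP_TCP_HEADERS = 40  # 20-byte IPv4 header + 20-byte TCP header

def mss_for_mtu(mtu: int) -> int:
    """MSS an endpoint would advertise for a given interface MTU."""
    return mtu - IP_TCP_HEADERS

def negotiated_mss(mtu_a: int, mtu_b: int) -> int:
    """Both sides use the smaller of the two advertised values."""
    return min(mss_for_mtu(mtu_a), mss_for_mtu(mtu_b))

# A jumbo host (MTU 9000) talking to a standard host (MTU 1500)
# falls back to standard-sized segments automatically:
print(negotiated_mss(9000, 1500))  # 1460
print(negotiated_mss(9000, 9000))  # 8960
```

Note this only covers the two endpoints; a non-jumbo switch *between* two jumbo-enabled hosts is not part of this negotiation, which is why the whole path has to be jumbo-capable.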
For this reason, jumbo-capable devices are generally shipped with jumbo frames disabled; you need to enable this specifically if desired in the network properties, and in the switch properties where applicable for managed switches. The 2716 for example requires managed mode in order to enable jumbo frames.
Note, however, that internet conversations are typically not jumbo-enabled, so the convention is to use jumbo frames only within the local network, via jumbo-capable switches, while WAN conversations go through more conventional, jumbo-incapable routers / paths.
Jumbo frames are non-standard and quite a mess in general, and are worth avoiding in most cases. You don't always get a significant performance gain with jumbo frames, and if you don't, it's certainly not worth the hassle.
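To put a number on "significant": the headline gain from jumbo frames is reduced per-frame header overhead, and back-of-the-envelope math shows that's only a few percent (the bigger win in practice is fewer frames, and thus less per-packet CPU/interrupt work). A rough sketch, assuming IPv4+TCP over Ethernet:

```python
# Rough wire-efficiency estimate. Each Ethernet frame carries a
# fixed on-the-wire overhead (14-byte header + 4-byte FCS +
# 8-byte preamble + 12-byte inter-frame gap = 38 bytes) plus
# 40 bytes of IPv4+TCP headers inside the frame.

ETH_OVERHEAD = 38    # per-frame bytes on the wire beyond the MTU
IP_TCP_HEADERS = 40  # headers inside each frame

def tcp_efficiency(mtu: int) -> float:
    """Fraction of wire bytes that are actual TCP payload."""
    payload = mtu - IP_TCP_HEADERS
    wire = mtu + ETH_OVERHEAD
    return payload / wire

print(round(tcp_efficiency(1500) * 100, 1))  # ~94.9% with standard frames
print(round(tcp_efficiency(9000) * 100, 1))  # ~99.1% with jumbo frames
```

So the raw throughput ceiling improves by roughly 4% -- real gains beyond that come from reduced per-packet processing, which matters mainly when the CPU, not the wire, is the bottleneck.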
1. Trunking--Thank you very much for your explanation! My end goal was to trunk two sets of two gigabit ports on the managed Dell 2716 switch to the two gigabit ports on each of a pair or trio of Dell 2324 unmanaged switches.
What you're telling me is that I can't do this because the unmanaged switches do not support Trunking--and I need it on both switches.
2. Even BETTER explanation on the Jumbo Frames--especially about the paths and problems that can occur when the nodes along the network path are not capable.
My interest in Jumbo Frames was purely for speeding up the transfer of large files. My understanding is that they *can* offer a significant speed boost (so long as the two machines on either end of the transfer are not the bottleneck), at the cost of increased 'latency'.