After doing a lot of research, I came up with a finalized plan for a gigabit network I'm planning out for a client. I'd like some input.
The crux of the network will be a Linksys (a.k.a. Cisco) SRW2024 switch:
http://www.linksys.com/servlet/Satellite?c=L_Product_C2&childpagename=US%2FLayout&cid=1115416901465&pagename=Linksys%2FCommon%2FVisitorWrapper
It's a Layer 3, web-manageable switch that supports 802.3ad (trunking/port aggregation). I found it from a wholesaler for $491 CAD, which seems like a STEAL.
The server, which has some free PCI slots, will get single- or dual-port Intel PRO/1000 MT NICs, which support 802.3ad (NIC teaming). This will let me connect the server to the switch with multiple NICs working in tandem, probably 2-4 connections to the switch.
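For reference, here's roughly how I'd expect the teaming side to look on the server, assuming it runs Linux with the kernel bonding driver and iproute2. The interface names (eth1/eth2) and the address are placeholders, not actual config:

```shell
# Sketch of an 802.3ad (LACP) bond on Linux -- interface names and the
# IP address below are placeholders; the switch ports must also be
# configured as an LACP group for this to come up.
modprobe bonding
ip link add bond0 type bond mode 802.3ad miimon 100
ip link set eth1 down
ip link set eth2 down
ip link set eth1 master bond0
ip link set eth2 master bond0
ip link set bond0 up
ip addr add 192.168.1.10/24 dev bond0
```

One thing worth knowing about 802.3ad: traffic is distributed per flow (by hashing MAC/IP/port), so any single connection still tops out at 1 Gbps; the aggregate helps with many simultaneous clients.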
The NICs are $130 CAD for a 5-pack of single-port cards.
Cabling will be handmade Cat6 runs with Cat6-certified ends.
When more ports are needed, another 24-port switch (or a 48-porter) can be added to the rack and trunked into the main switch with 8 ports aggregated together to make an 8 Gbps interconnect.
My main concern is that the "server" machine only has standard PCI 2.2 slots, which I understand have a maximum bandwidth of ~1 Gbps. Is that 1 Gbps PER SLOT, or 1 Gbps for the entire BUS? (I.e., if I have 4 NICs in 4 slots, will I get the full 4 Gbps advantage?)
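For what it's worth, here's the back-of-envelope math I'm working from, assuming the common 32-bit/33 MHz flavor of conventional PCI (PCI 2.2 also allows 64-bit and 66 MHz variants, which would be correspondingly faster):

```shell
# Theoretical peak of a conventional PCI bus: 32 bits per clock at 33 MHz.
# Real-world throughput is lower due to arbitration and protocol overhead.
BUS_WIDTH_BITS=32
CLOCK_HZ=33000000

PEAK_BPS=$((BUS_WIDTH_BITS * CLOCK_HZ))
echo "$PEAK_BPS"          # 1056000000 bits/s, i.e. ~1.056 Gbps
echo "$((PEAK_BPS / 8))"  # 132000000 bytes/s, the familiar ~133 MB/s figure
```

So the whole bus only has about one gigabit link's worth of theoretical headroom to begin with, which is why the per-slot vs. per-bus question matters so much here.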
Thanks,
-Alex