Archived from groups: comp.dcom.lans.ethernet
In article <qLmdnSIR05P2v7zfRVn-uw@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:Clearly the disk and network I/O bottlenecks at the file servers are big.
:But that's another thread.
Excuse me, but that *isn't* "another thread". The process you
describe involves negligible communications between the hosts. This
makes a big difference in the choice of equipment.
If your setup is such that there could be N simultaneous connections
to M servers, and N > M and you are asking us for a design in which
"the network itself is not a bottleneck", then you have an implicit
requirement that the server port must be able to operate at
somewhere between (ceiling(N/M) * 1 Gbps) and (N * 1 Gbps), depending
on the traffic patterns. We have to know what that peak rate is
in order to advise you on the correct switch. Current off-the-shelf
technologies get you 1 Gbps interfaces on a wide range of
devices, and 10 Gbps XENPAK interfaces on a much smaller range of devices;
2 Gbps interfaces are also available in some models -- but if that's
your spec then we need to know so that we can rule out devices that
can't handle that load.
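To put rough numbers on that, here is a back-of-the-envelope sketch
in Python; the function name is mine and the 120-client / 5-server
figures plugged in at the bottom are only placeholders, not numbers
you have given us:

    import math

    def server_port_peak_gbps(n_clients, m_servers, link_gbps=1.0):
        # Best case: the clients spread themselves evenly over the servers.
        peak_low = math.ceil(n_clients / m_servers) * link_gbps
        # Worst case: every client hits the same server at once.
        peak_high = n_clients * link_gbps
        return peak_low, peak_high

    # e.g. 120 clients against 5 servers -> (24.0, 120.0) Gbps per server port
    print(server_port_peak_gbps(120, 5))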
But perhaps you are planning to get past 1 Gbps by using IEEE 802.3ad
linking of multiple gigabit ports on the server. If that's the case,
then we need to know that so that we know to constrain to 802.3ad
compliant devices. For example, for several years Cisco has had
its EtherChannel / Gigabit EtherChannel technology, which allows
multiple links to be bonded together, but that technology predates
the 802.3ad standard. Cisco supports 802.3ad in modern IOS versions,
but the cost of upgrading IOS versions on used devices with the
oomph you need is very high -- high enough that it can end up being
less expensive to buy -new- switches than "relicensing" and upgrading
software on used ones. Whereas if you don't need 802.3ad, then
used Cisco equipment could potentially be "relicensed" without
software upgrade.
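One caveat if you do go the 802.3ad route: the standard keeps all
frames of a given conversation on a single member link (so nothing
gets reordered), which means any one flow still tops out at one
member link's rate no matter how many ports are bonded. A toy
Python sketch of the idea -- the hash fields and the function are
purely illustrative, real switches do this in hardware and the
fields used are configurable:

    def member_link(src_mac: str, dst_mac: str, n_links: int) -> int:
        # Toy stand-in for the hardware hash: one member link per
        # (source, destination) pair, so that conversation never
        # gets more than a single link's 1 Gbps.
        return hash((src_mac, dst_mac)) % n_links

    bundle_size = 4   # e.g. 4 x 1 Gbps bonded to one server
    print(member_link("00:11:22:33:44:55", "00:aa:bb:cc:dd:ee", bundle_size))
    print(member_link("00:11:22:33:44:66", "00:aa:bb:cc:dd:ee", bundle_size))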
:The only thing I'm concerned about in the
:current thread is how to cheaply guarantee that the network itself is not a
:bottleneck for the servers processing information that they bring down from
:the file servers.
If your server interfaces are going to run at only 1 Gbps, then
in order to "guarantee" that the network is not the bottleneck
in the circumstance that the devices really will run at "wire speed",
you are going to need one 1 Gbps server port per wire-speed client --
120 servers -- an increase which is going to seriously skew the
switch requirements.
The alternative to all of this, if you are content with your
users sharing 1 Gbps to each server, is to recognize that you do
not, in such a case, need to run all the ports at 1 Gbps wire speed
*simultaneously*. That makes a substantial difference in your choices!!
Your initial stated requirement of 120 hosts at gigabit wire speed
implied to us that the switches had to have an (M * 2 Gbps) switching
fabric per module, where M is the number of ports per switching
module, *and* that the backplane fabric speed had to be at least
240 Gbps (in order to handle the worst-case scenario in which
every port is communicating wire rate full duplex with a port on
a different module.) That's a tough requirement to meet for
the backplane -- a requirement that is very much incompatible with
"minimum cost".
If, though, your requirement is really just that one device at a time
must be able to run gigabit wire rate unidirectional with one of
the servers -- that the link must have full gigabit available
upon demand but the demands will be infrequent and non-overlapping --
then your backplane only has to be (S * 1 Gbps) where S is the
maximum number of simultaneously active servers you need. If S is,
say, 5, then the equipment you need to fill the requirement is
considerably down-scale from a 240 Gbps backplane.
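The relaxed requirement scales with S rather than with the total
port count, which is the whole point:

    def relaxed_backplane_gbps(max_active_servers, link_gbps=1.0):
        # Only S non-overlapping gigabit transfers at any one time.
        return max_active_servers * link_gbps

    print(relaxed_backplane_gbps(5))   # 5.0 Gbps, versus 240 Gbps above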
If the real requirement is indeed that wire speed point to point must
be available but that few such transfers will need to be done
simultaneously, then you could potentially be working with something
as low end as a single Cisco 6509 [9 slot chassis] with Supervisor
Engine 1A [lowest available speed] and 8 x WS-X6416-GBIC [each offering
16 GBIC ports]. The module backplane interconnect for the 1A is 8 Gbps,
and the maximum forwarding rate of the modules is 32 Gbps [i.e.,
connections on the same module] when using the 1A, with a shared 32
Gbps bus as the backplane in this configuration. [Note: if such a
configuration was satisfactory and you needed at most 6 Gbps, you could
probably do much the same configuration in a single Cisco 4506 switch.]
But if you were not quite as concerned with minimum cost, then you
could use a Cisco 6506 [6 slot chassis] with Supervisor 720 [fastest
available for the 6500 series] and 3 x WS-X6748-SFP [each offering 48
SFP ports]. The 6748-SFP has a dual 20 Gbps module interconnect; in
conjunction with the Supervisor 720, you can get up to 720 Gbps in some
configurations. If I read the literature correctly, the base
configuration would get you up to about 240 Gbps and you would add a
WS-F6700-DFC3 distributed switching card to go beyond that, up to 384
Gbps per slot. The 6748-SFP supports frames up to 9216 bytes long.
If you were able to go copper instead of fibre, then you could
use a Cisco 6506 with one of the 48-port 10/100/1000 modules:
- WS-X6148-GE-TX for Supervisor 1A, 2, 32, or 720 (32 Gbps shared bus)
- WS-X6548-GE-TX for Supervisor 1A or 2 (1518 bytes/frame max) (8 Gbps
backplane interconnect)
- WS-X6748-GE-TX for Supervisor 720 (9216 bytes/frame max) [speeds as
noted in the paragraph above]
An important point to note about the 16, 24, or 48 port gigabit
Cisco interface cards is that they are all oversubscribed relative to
the backplane interconnect [details about exactly how they share the
bandwidth vary with the card]. That makes these cards totally unsuitable
for the situation where you require that all ports -simultaneously-
be capable of running 1 Gbps to arbitrary other ports, but
some judicious placement of the server connections can make them
just fine for the situation where you need gigabit wire rate for
any one link but do not need very many such connections simultaneously.
[And if so then the Cisco 4506 with anything other than the entry-point
Supervisor might be a contender as well; the entry-point Supervisor is,
if I recall correctly, only usable in the 3-slot chassis, the 4503.]
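A rough way to sanity-check any of these cards against your traffic
pattern is the oversubscription ratio, i.e. front-panel capacity over
the module's connection to the fabric or bus (Python sketch using the
figures quoted above):

    def oversubscription_ratio(ports, port_gbps, interconnect_gbps):
        # Total front-panel capacity versus the module's connection
        # to the switching fabric / shared bus.
        return (ports * port_gbps) / interconnect_gbps

    # WS-X6416-GBIC behind an 8 Gbps interconnect: 16 Gbps of ports -> 2:1
    print(oversubscription_ratio(16, 1.0, 8.0))
    # WS-X6748 behind its dual 20 Gbps interconnect: 48 Gbps of ports -> 1.2:1
    print(oversubscription_ratio(48, 1.0, 40.0))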
--
I wrote a hack in microcode,
with a goto on each line,
it runs as fast as Superman,
but not quite every time! -- Don Libes et al.