120 Hosts Running GigE at Wire Speed Minimum Cost

Anonymous
February 26, 2005 2:07:03 AM

Archived from groups: comp.dcom.lans.ethernet

What is the minimum cost solution for running 120 hosts at wire speed on
GigE? I am thinking that something like two used Foundry or Extreme
switches would do this at lowest cost.

--
Will
Anonymous
February 26, 2005 11:09:12 AM

"Will" <DELETE_westes@earthbroadcast.com> wrote:
>What is the minimum cost solution for running 120 hosts at wire speed on
>GigE?

What are you going to _do_ with 50 terabytes per hour?
Anonymous
February 26, 2005 11:46:10 AM

In article <s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com>,
<William P.N. Smith> wrote:
>"Will" <DELETE_westes@earthbroadcast.com> wrote:
>>What is the minimum cost solution for running 120 hosts at wire speed on
>>GigE?
>
>What are you going to _do_ with 50 terabytes per hour?
>


Your first cost may be buying new hosts. A good desktop PC can't fill
a GbE pipe. Or so I'm told.


--

a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
Anonymous
February 26, 2005 2:14:20 PM

adykes@panix.com (Al Dykes) wrote:
>To be fair, the OP didn't say "desktop", he didn't say anything.

True, we're getting off the original subject. I doubt there's a
machine in existence that'll do "wire speed" and do anything useful
with it, though, so now we're left wondering how far off "wire speed"
we can be and still meet the OP's requirements. My 913 megabits was
regular desktop machines talking thru two D-Link DGS-1005D switches,
but the OP wants 120 machines. If there are no other criteria and
this is a homework assignment, then 60 of those at $60 each will
satisfy the criteria. Of course, in that case you don't even need the
switches, so just cabling the machines together will work... 8*}
Anonymous
February 26, 2005 5:03:31 PM

In article <cvpuj2$98u$1@panix5.panix.com>, adykes@panix.com says...
> In article <s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com>,
> <William P.N. Smith> wrote:
> >"Will" <DELETE_westes@earthbroadcast.com> wrote:
> >>What is the minimum cost solution for running 120 hosts at wire speed on
> >>GigE?
> >
> >What are you going to _do_ with 50 terabytes per hour?
> >
>
>
> Your first cost may be buying new hosts. A good desktop PC can't fill
> a GbE pipe. Or so I'm told.

Many can, but not with conventional "off the shelf" applications.
Disk I/O is usually a major factor, unless you're just beaming
data to/from RAM for fun.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 26, 2005 5:45:38 PM

William P.N. Smith wrote:

>>What is the minimum cost solution for running 120 hosts at wire speed on
>>GigE?
>
> What are you going to do with 50 terabytes per hour?

Attempt to keep up with the Windows viruses ;-)
Anonymous
February 26, 2005 7:04:56 PM

The end user is building a major animation film. Each of 120 workstations
brings a 100GB file to its local file system, processes it for whatever
reason, and then uploads it back to a common server.

Rather than methodically isolate every bottleneck in the application, I
would like to focus this conversation on one of the many bottlenecks, and
that is the network itself. Personally I think the biggest bottleneck is
disk I/O on the server, but that's a different thread. I just want to make
sure that the network itself doesn't become a bottleneck.
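For scale, here is a back-of-envelope sketch of what the stated workload implies (an editorial example, assuming 1 Gbit/s line rate per host, decimal units, and zero protocol overhead):

```python
# Rough numbers behind the "50 terabytes per hour" quip and the
# 100 GB per-workstation transfer described in this thread.
GBIT = 1e9  # bits per second

hosts = 120
per_host_bps = 1 * GBIT
aggregate_bps = hosts * per_host_bps            # 120 Gbit/s aggregate
bytes_per_hour = aggregate_bps / 8 * 3600       # aggregate bytes/hour
terabytes_per_hour = bytes_per_hour / 1e12      # ~54 TB/hour

file_bytes = 100e9                              # the 100 GB working file
seconds_per_transfer = file_bytes * 8 / per_host_bps

print(f"{terabytes_per_hour:.0f} TB/hour aggregate")        # 54 TB/hour aggregate
print(f"{seconds_per_transfer:.0f} s per 100 GB transfer")  # 800 s per 100 GB transfer
```

So even at perfect wire speed, each workstation spends roughly 13 minutes just moving its file each way.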

--
Will


<William P.N. Smith> wrote in message
news:s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com...
> "Will" <DELETE_westes@earthbroadcast.com> wrote:
> >What is the minimum cost solution for running 120 hosts at wire speed on
> >GigE?
>
> What are you going to _do_ with 50 terabytes per hour?
>
Anonymous
February 26, 2005 7:12:35 PM

Randy Howard <randyhoward@fooverizonbar.net> wrote:
>> Your first cost may be buying new hosts. A good desktop
>> PC can't fill a GbE pipe. Or so I'm told.

> Many can, but not with conventional "off the shelf"
> applications. Disk I/O is usually a major factor,
> unless you're just beaming data to/from RAM for fun.

RAM-to-RAM is a big application for compute clusters.

AFAIK, most desktops cannot get GbE wirespeed, unless
their controller is on something faster than a PCI bus.
The usual limit there is around 300 Mbit/s, mostly
caused by limited PCI burst length and long setup.

-- Robert
Anonymous
February 26, 2005 7:23:48 PM

In article <p9idnZri3cUevL3fRVn-2w@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:What is the minimum cost solution for running 120 hosts at wire speed on
:GigE? I am thinking that something like two used Foundry or Extreme
:switches would do this at lowest cost.

Amazing coincidence that the Foundry FastIron II just -happens-
to be rated for exactly 120 wire speed gigabit ports.

Somehow, in my network, we never happen to have nice round multiples
of 12 -- we end up with (e.g.) 79 hosts in a wiring closet,
plus a couple of uplinks.

Odd too that one would have 120 gigabit wirespeed hosts in one place
and not be interested in adding a WAN connection, and not be interested
in redundancy...

======
One must be careful with modular architectures, in that often the
switching speed available between modules is not the same as the
switching speed within the same module.
--
IMT made the sky
Fall.
Anonymous
February 26, 2005 7:23:49 PM

FastIron II doesn't support 120 optical ports, so the backplane speed isn't
all that interesting. Sure you could have a tree of switches, but in this
case the 120 hosts happen to all be in racks in the same room, and that's
why I thought an Extreme BlackDiamond or Foundry BigIron might give plenty
of horsepower at negligible cost (assuming you buy used).

--
Will


"Walter Roberson" <roberson@ibd.nrc-cnrc.gc.ca> wrote in message
news:cvq7qk$75t$1@canopus.cc.umanitoba.ca...
> In article <p9idnZri3cUevL3fRVn-2w@giganews.com>,
> Will <DELETE_westes@earthbroadcast.com> wrote:
> :What is the minimum cost solution for running 120 hosts at wire speed on
> :GigE? I am thinking that something like two used Foundry or Extreme
> :switches would do this at lowest cost.
>
> Amazing coincidence that the Foundry FastIron II just -happens-
> to be rated for exactly 120 wire speed gigabit ports.
>
> Somehow, in my network, we never happen to have nice round multiples
> of 12 -- we end up with (e.g.) 79 hosts in a wiring closet,
> plus a couple of uplinks.
>
> Odd too that one would have 120 gigabit wirespeed hosts in one place
> and not be interested in adding a WAN connection, and not be interested
> in redundancy...
>
> ======
> One must be careful with modular architectures, in that often the
> switching speed available between modules is not the same as the
> switching speed within the same module.
> --
> IMT made the sky
> Fall.
Anonymous
February 26, 2005 11:14:45 PM

"Will" <DELETE_westes@earthbroadcast.com> wrote:
>The end user is building a major animation film. Each of 120 workstations
>brings a 100GB file to its local file system, processes it for whatever
>reason, and then uploads it back to a common server.

So you need a server with a 120 gigabit NIC, and a server port on your
switch of the same speed?

Again, if 90% is good enough, then SOHO unmanaged switches are good
enough. If the network is faster than your disks, why spend any brain
cycles on how many nines you can get out of your network?

You are talking millions of dollars worth of hardware, why ask this
kind of question on Usenet? [FWIW, the upload-process-download thing
sounds really inefficient...]
Anonymous
February 26, 2005 11:14:46 PM

You are assuming that there is one file server. That would be the worst
possible design, right?

Regarding USENET, you are assuming that this is the only input to the design
process? You are assuming that no one on USENET could possibly have one
even microscopically significant idea that might improve any aspect of the
design? Pretty pessimistic assessment of the medium in which you are
participating.... Considering that the cost is next to zero, if you get
nothing you have lost nothing. And if you get even one good idea, you got
the idea at an excellent cost-benefit ratio. The fact that others now
benefit from the exchange, now and in the future, creates benefits for the
larger audience with access to USENET.

Your point that the workstations have local disks slower than the
network is well-taken. But the disks are capable of better than
100BaseT speeds, so gigE just happens to be the next step up that
bypasses that particular bottleneck. And these days gigE is cheap.
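A quick units check on that claim (the ~50 MB/s sustained disk rate below is an assumption typical of drives of that era, not a figure from the thread):

```python
# A disk doing ~50 MB/s sustained overruns Fast Ethernet (100 Mbit/s)
# but is well under gigabit line rate (1000 Mbit/s).
disk_mb_s = 50
disk_mbit_s = disk_mb_s * 8   # 400 Mbit/s

print(disk_mbit_s > 100)      # faster than 100BaseT -> True
print(disk_mbit_s < 1000)     # slower than GigE line rate -> True
```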

--
Will


<William P.N. Smith> wrote in message
news:r67221do9o7pe3nj24oina3om2ssfimopb@4ax.com...
> So you need a server with a 120 gigabit NIC, and a server port on your
> switch of the same speed?
>
> Again, if 90% is good enough, then SOHO unmanaged switches are good
> enough. If the network is faster than your disks, why spend any brain
> cycles on how many nines you can get out of your network?
>
> You are talking millions of dollars worth of hardware, why ask this
> kind of question on Usenet? [FWIW, the upload-process-download thing
> sounds really inefficient...]
>
Anonymous
February 27, 2005 6:47:24 AM

Will wrote:
> The end user is building a major animation film. Each of 120
> workstations brings a 100GB file to its local file system, processes
> it for whatever reason, and then uploads it back to a common server.
>
> Rather than methodically isolate every bottleneck in the application,
> I would like to focus this conversation on one of the many
> bottlenecks, and that is the network itself. Personally I think
> the biggest bottleneck is disk I/O on the server, but that's a
> different thread. I just want to make sure that the network itself
> doesn't become a bottleneck.

In that case, Force10 and Extreme would be worth a look. But if you're
comfortable with Cisco hardware, you may want to look there as well.
From *pure* performance standpoint, Cisco may come in 2nd or 3rd, but
they have a large support infrastructure. But of course, they won't be
cheap.

--

hsb


"Somehow I imagined this experience would be more rewarding" Calvin
**************************ROT13 MY ADDRESS*************************
Due to the volume of email that I receive, I may not be able to
reply to emails sent to my account. Please post a followup instead.
********************************************************************
Anonymous
February 27, 2005 7:45:38 AM

In article <R46dneq6oYqejLzfRVn-tw@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:FastIron II doesn't support 120 optical ports,

Your posting asked for the 'minimum cost solution'. Optical is not
going to be the minimum cost solution if the hosts are within 100m
of the server.

If you have constraints such as "optical" then you should state
them upfront -- and even then you should be specific about whether,
e.g., you are looking for 100 FX connectors or GBIC or SFP.


:Sure you could have a tree of switches, but in this
:case the 120 hosts happen to all be in racks in the same room,

So you don't need 120 ports, you need 120 ports plus 1 per server
plus enough for interconnects plus some number more for connections
to the Internet (or to some other equipment used to create copies
of the data to deliver it to customers); possibly plus more for
backup hosts.
--
We don't need no side effect-ing
We don't need no scope control
No global variables for execution
Hey! Did you leave those args alone? -- decvax!utzoo!utcsrgv!roderick
Anonymous
February 27, 2005 9:43:58 AM

In article <qLmdnSIR05P2v7zfRVn-uw@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:Clearly the disk and network I/O bottlenecks at the file servers are big.
:But that's another thread.

Excuse me, but that *isn't* "another thread". The process you
describe involves negligible communications between the hosts. This
makes a big difference in the choice of equipment.

If your setup is such that there could be N simultaneous connections
to M servers, and N > M and you are asking us for a design in which
"the network itself is not a bottleneck", then you have an implicit
requirement that the server port must be able to operate at
somewhere between (ceiling(N/M) * 1 Gbps) and (N * 1 Gbps), depending
on the traffic patterns. We have to know what that peak rate is
in order to advise you on the correct switch. Current off-the-shelf
technologies get you 1 Gbps interfaces on a wide range of
devices, 10 Gbps XENPAK interfaces on a much smaller range of devices;
2 Gbps interfaces are also available in some models -- but if that's
your spec then we need to know so that we rule out devices that
can't handle that load.
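The bound described above can be written as a small helper (a sketch; the 4-server split in the example is hypothetical, not from the thread):

```python
import math

def server_port_rate_bounds(n_clients: int, m_servers: int,
                            host_gbps: float = 1.0) -> tuple[float, float]:
    """Per-server port rate (Gbit/s) needed so the network is never the
    bottleneck: between ceil(N/M) * 1 Gbps (clients spread evenly) and
    N * 1 Gbps (everyone hits one server), depending on traffic patterns."""
    lower = math.ceil(n_clients / m_servers) * host_gbps
    upper = n_clients * host_gbps
    return lower, upper

# e.g. 120 clients spread over 4 servers (hypothetical split):
print(server_port_rate_bounds(120, 4))  # (30.0, 120.0)
```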

But perhaps you are planning to get past 1 Gbps by using IEEE 802.3ad
linking of multiple gigabit ports on the server. If that's the case,
then we need to know that so that we know to constrain to 802.3ad
compliant devices. For example, for several years Cisco has had
its EtherChannel / GigaEtherChannel technology out that allowed
multiple channels to be bonded together, but that technology predates
the 802.3ad standard. Cisco supports 802.3ad in modern IOS versions,
but the cost of upgrading IOS versions on used devices with the
oomph you need is very high -- high enough that it can end up being
less expensive to buy -new- switches than "relicensing" and upgrading
software on used ones. Whereas if you don't need 802.3ad, then
used Cisco equipment could potentially be "relicensed" without
software upgrade.


:The only thing I'm concerned about in the
:current thread is how to cheaply guarantee that the network itself is not a
:bottleneck for the servers processing information that they bring down from
:the file servers.

If your server interfaces are going to run at only 1 Gbps, then
in order to "guarantee" that the network is not the bottleneck
in the circumstance that the devices really will run at "wire speed"
you are going to need 120 servers -- an increase which is going to
seriously skew the switch requirements.


The alternative to all of this, if you are content with your
users sharing 1 Gbps to each server, is to recognize that you do
not, in such a case, need to run all the ports at 1 Gbps wire speed
*simultaneously*. That makes a substantial difference in your choices!!

Your initial stated requirement of 120 hosts at gigabit wire speed
implied to us that the switches had to have an (M * 2 Gbps) switching
fabric per module, where M is the number of ports per switching
module, *and* that the backplane fabric speed had to be at least
240 Gbps (in order to handle the worst-case scenario in which
every point is communicating wire rate full duplex with a port on
a different module.) That's a tough requirement to meet for
the backplane -- a requirement that is very much incompatible with
"minimum cost".

If, though, your requirement is really just that one device at a time
must be able to run gigabit wire rate unidirectionally with one of
the servers -- that the link must have full gigabit available
upon demand but the demands will be infrequent and non-overlapping --
then your backplane only has to be (S * 1 Gbps) where S is the
maximum number of simultaneously active servers you need. If S is,
say, 5, then the equipment you need to fill the requirement is
considerably down-scale from a 240 Gbps backplane.
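The two sizing rules just described reduce to a couple of lines (a sketch of the arithmetic above):

```python
def worst_case_backplane_gbps(ports: int, port_gbps: float = 1.0) -> float:
    # Every port talking wire rate full duplex to a port on another
    # module: full duplex doubles the per-port demand on the fabric.
    return ports * 2 * port_gbps

def relaxed_backplane_gbps(simultaneous_flows: int, port_gbps: float = 1.0) -> float:
    # Only S non-overlapping wire-speed flows active at any one time.
    return simultaneous_flows * port_gbps

print(worst_case_backplane_gbps(120))  # 240.0
print(relaxed_backplane_gbps(5))       # 5.0
```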

If the real requirement is indeed that wire speed point to point must
be available but that few such transfers will need to be done
simultaneously, then you could potentially be working with something
as low end as a single Cisco 6509 [9 slot chassis] with Supervisor
Engine 1A [lowest available speed] and 8 x WS-X6416-GBIC [each offering
16 GBIC ports]. The module backplane interconnect for the 1A is 8 Gbps,
and the maximum forwarding rate of the modules is 32 Gbps [i.e.,
connections on the same module] when using the 1A, with a shared 32
Gbps bus as the backplane in this configuration. [Note: if such a
configuration was satisfactory and you needed at most 6 Gbps, you could
probably do much the same configuration in a single Cisco 4506 switch.]

But if you were not quite as concerned with minimum cost, then you
could use a Cisco 6506 [6 slot chassis] with Supervisor 720 [fastest
available for the 6500 series] and 3 x WS-X6748-SFP [each offering 48
SFP ports]. The 6748-SFP has a dual 20 Gbps module interconnect; in
conjunction with the Supervisor 720, you can get up to 720 Gbps in some
configurations. If I read the literature correctly, the base
configuration would get you up to about 240 Gbps and you would add a
WS-F6700-DFC3 distributed switching card to go beyond that, up to 384
Gbps per slot. The 6748-SFP supports frames up to 9216 bytes long.

If you were able to go copper instead of fibre, then you could
use a Cisco 6506 with one of the 48-port 10/100/1000 modules:
- WS-X6148-GE-TX for Supervisor 1A, 2, 32, or 720 (32 Gbps shared bus)
- WS-X6548-GE-TX for Supervisor 1A or 2 (1518 bytes/frame max) (8 Gbps
backplane interconnect)
- WS-X6748-GE-TX for Supervisor 720 (9216 bytes/frame max) [speeds as
noted in above paragraph]

An important point to note about the 16, 24, or 48 port gigabit
Cisco interface cards is that they are all oversubscribed relative to
the backplane interconnect [details about exactly how they share the
bandwidth vary with the card]. That makes these cards totally unsuitable
for the situation where you require that all ports -simultaneously-
be capable of running 1 Gbps to arbitrary other ports, but with
some judicious placement of the server connections can make them
just fine for the situation where you need gigabit wire rate for
any one link but do not need very many such connections simultaneously.
[And if so then the Cisco 4506 with anything other the entry-point
Supervisor might be a contender as well; the entry-point Supervisor is,
if I recall correctly, only usable in the 3-slot chassis, the 4503.]
--
I wrote a hack in microcode,
with a goto on each line,
it runs as fast as Superman,
but not quite every time! -- Don Libes et al.
Anonymous
February 27, 2005 6:21:16 PM

"Will" <DELETE_westes@earthbroadcast.com> top-posted:
>You are assuming that there is one file server. That would be the worst
>possible design, right?

Well, it's not inconsistent with the design details you've given us.
8*)

>You are assuming that no one on USENET could possibly have one
>even microscopically significant idea that might improve any aspect of the
>design?

Not at all, there are some really clever people here, including those
who helped design Ethernet. I'm more thinking along the lines of "Ask
not of Usenet, for it will tell you Yes, and No, and everything in
between."

>And if you get even one good idea, you got
>the idea at an excellent cost-benefit ratio.

True, if your time is worth nothing. 8*)

>Your point that the workstations have local disks that are slower than the
>network is a point well-taken. But the disks are capable of better than
>10/100 100BaseT speeds, so gigE just happens to be the next step up that
>bypasses that particular bottleneck. And these days gigE is cheap.

Sure, but my point is that _any_ GigE hardware will meet your
criteria, and every time I hear someone ask for "wire-speed" I know at
least that they don't understand their problem. Present company
excluded, of course.
Anonymous
February 28, 2005 12:21:02 AM

In article <p0a421d23c2e97hfi1n2sh8142oe9v9ck3@4ax.com>,
<William P.N. Smith> wrote:
:Sure, but my point is that _any_ GigE hardware will meet your
:criteria,

Not if you oversubscribe a Cisco 4500, 5000, 6000, or 6500...

--
Contents: 100% recycled post-consumer statements.
Anonymous
February 28, 2005 2:17:29 PM

Will wrote:
> Personally I think the biggest bottleneck is
> disk I/O on the server,

Personally, I think you are right about that! :-)
Anonymous
February 28, 2005 5:07:31 PM

In article <Td1Ud.57537$iC4.28684@newssvr30.news.prodigy.com>,
redelm@ev1.net.invalid says...
> Randy Howard <randyhoward@fooverizonbar.net> wrote:
> >> Your first cost may be buying new hosts. A good desktop
> >> PC can't fill a GbE pipe. Or so I'm told.
>
> > Many can, but not with conventional "off the shelf"
> > applications. Disk I/O is usually a major factor,
> > unless you're just beaming data to/from RAM for fun.
>
> RAM-to-RAM is a big application for compute clusters.
>
> AFAIK, most desktops cannot get GbE wirespeed, unless
> their controller is on something faster than a PCI bus.

Some have gigabit down on the motherboard, wired to a PCI-X
bus. What percentage of desktops have good gigE implementations,
I can't answer.

> The usual limit there is around 300 Mbit/s, mostly
> caused by limited PCI burst length and long setup.

I've seen in the neighborhood of 1800 Mbit/s (FDX) on
a variety of PCI-X gigE implementations. The trick
is to open multiple connections and use overlapped/
threading techniques to keep the pipe full.
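The overlapped-connection trick described above can be sketched roughly like this (a toy loopback demo, not a benchmark; the connection count and transfer sizes are arbitrary):

```python
# Open several TCP connections and drive each from its own thread, so
# no single stream's stalls leave the pipe idle. Over loopback this
# just demonstrates the structure: N senders, N receivers, all overlapped.
import socket
import threading

CONNECTIONS = 4
CHUNK = 64 * 1024
CHUNKS_PER_CONN = 16  # 1 MiB per connection, tiny for demo purposes

def sender(port: int) -> None:
    with socket.create_connection(("127.0.0.1", port)) as s:
        for _ in range(CHUNKS_PER_CONN):
            s.sendall(b"\0" * CHUNK)

received = 0
lock = threading.Lock()

def receiver(conn: socket.socket) -> None:
    global received
    while data := conn.recv(CHUNK):   # b"" once the sender closes
        with lock:
            received += len(data)
    conn.close()

srv = socket.socket()
srv.bind(("127.0.0.1", 0))
srv.listen(CONNECTIONS)
port = srv.getsockname()[1]

send_threads = [threading.Thread(target=sender, args=(port,))
                for _ in range(CONNECTIONS)]
for t in send_threads:
    t.start()

recv_threads = []
for _ in range(CONNECTIONS):
    conn, _addr = srv.accept()
    rt = threading.Thread(target=receiver, args=(conn,))
    rt.start()
    recv_threads.append(rt)

for t in send_threads + recv_threads:
    t.join()
srv.close()

print(received == CONNECTIONS * CHUNKS_PER_CONN * CHUNK)  # True
```

A real test would time the transfers and push far more data; the point is the thread-per-connection shape, not the numbers.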

When you do this on a gig switch with a series of
"pairs", or with a fanout test with multiple clients
all going into one fast server on one of the ports,
each beaming data back and forth wide open, you can
watch the switch melt down in a lot of cases.

This is of course the fallacy of believing snake oil
like Tolly reports. I've seen 8-port gig-E switches
that have passed Tolly testing start dropping link
randomly in minutes under this type of test. I've also
seen cheap $89 5-port gig switches run the same test,
at slightly better throughput for a week solid without
hiccup.

Most of the low-cost high port-count switches (24,
48) will not take kindly to you trying to run all
the ports wide open simultaneously. This has nothing
to do with the presence or absence of a published claim
to be a non-blocking switch.

Further, those that are managed switches will have
their management interfaces cease being responsive
at all under this type of load.

You can achieve the same thing with multicast load
on IGMP switches, they'll work for a brief period
sending the stream only to subscribed ports, then
suddenly start flooding the traffic to all the ports.

Apparently there isn't any money in vendors publishing
REAL stress tests on switches, because far too many
of them would fail.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 28, 2005 5:07:32 PM

Randy Howard wrote:

> You can achieve the same thing with multicast load
> on IGMP switches, they'll work for a brief period
> sending the stream only to subscribed ports, then
> suddenly start flooding the traffic to all the ports.

I can vouch for Netgear GSM7312s (and GSM7324s) doing this. Their layer 2
gear does it as well - the GSM712, FSM726S, etc. Do NOT attempt to push out
a Ghost image over multicast from a gig host if you use these - the switch
WILL melt down.
Anonymous
February 28, 2005 5:07:33 PM

In article <1126gtv5bra9lf3@news.supernews.com>,
T. Sean Weintz <strap@hanh-ct.org> wrote:
>Randy Howard wrote:
>
>> You can achieve the same thing with multicast load
>> on IGMP switches, they'll work for a brief period
>> sending the stream only to subscribed ports, then
>> suddenly start flooding the traffic to all the ports.
>
>I can vouch for GSM7312's (and gsm7324's) doing this. Their layer 2
>stuff as well - gsm712, fsm726s, etc. Do NOT attempt to push out a GHOST
>image over multicast from a gig host if you use these - the switch WILL
>melt down.


What's the symptom ? Lots of dropped packets ? Lockup ?

(I'm assuming the references to smoke in this thread are
metaphorical.)

--

a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
Anonymous
February 28, 2005 5:13:58 PM

In article <nl7121hgb3e8jcp6n0n3chhfoij8f7q9tf@4ax.com>, William P.N.
Smith says...
> adykes@panix.com (Al Dykes) wrote:
> >To be fair, the OP didn't say "desktop", he didn't say anything.
>
> True, we're getting off the original subject. I doubt there's a
> machine in existence that'll do "wire speed" and do anything useful
> with it, though,

If you define "do something useful with it" as processing it all and
the writing it all to disk, then if it won't work right now, it's not
far off. There are several varieties of storage controller/drive
combos that can achieve r/w throughput in excess of 125MB/s. They're
not cheap, and they're not on desktops typically, but it can be done.

Depending upon CPU horsepower and the quality of the network driver,
it can be done. There are some systems that can handle "wire speed"
both directions, i.e., FDX (2 Gbps).

> so now we're left wondering how far off "wire speed" we can be and
> still meet the OP's requirements.

Odds are the link is faster than any app likely to be used already.
There are some specific gig lan drivers which generate insanely
high CPU loads for a given throughput, so it's not generically
answerable.

> My 913 megabits was regular desktop machines talking thru two
> D-Link DGS-1005D switches, but the OP wants 120 machines.

IOW, he's serious about it.

> If there are no other criteria and this is a homework assignment,
> then 60 of those at $60 each will satisfy the criteria.

Unlikely. When you start daisy-chaining switches, the numbers
don't stand up.

> Of course, in that case you don't even need the switches, so
> just cabling the machines together will work... 8*}

How do you "just cable 120 machines together" without switches ???

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 28, 2005 5:13:59 PM

Randy Howard wrote:

> In article <nl7121hgb3e8jcp6n0n3chhfoij8f7q9tf@4ax.com>, William P.N.
> Smith says...
>> adykes@panix.com (Al Dykes) wrote:
>> >To be fair, the OP didn't say "desktop", he didn't say anything.
>>
>> True, we're getting off the original subject. I doubt there's a
>> machine in existence that'll do "wire speed" and do anything useful
>> with it, though,
>
> If you define "do something useful with it" as processing it all and
> the writing it all to disk, then if it won't work right now, it's not
> far off. There are several varieties of storage controller/drive
> combos that can achieve r/w throughput in excess of 125MB/s. They're
> not cheap, and they're not on desktops typically, but it can be done.
>
> Depending upon CPU horsepower and the quality of the network driver,
> it can be done. There are some systems that can handle "wire speed"
> both directions, I.e. FDX (2Gbps).
>
>> so now we're left wondering how far off "wire speed" we can be and
>> still meet the OP's requirements.
>
> Odds are the link is faster than any app likely to be used already.
> There are some specific gig lan drivers which generate insanely
> high CPU loads for a given throughput, so it's not generically
> answerable.
>
>> My 913 megabits was regular desktop machines talking thru two
>> D-Link DGS-1005D switches, but the OP wants 120 machines.
>
> IOW, he's serious about it.
>
>> If there are no other criteria and this is a homework assignment,
>> then 60 of those at $60 each will satisfy the criteria.
>
> Unlikely. When you start daisy chaining switches, the numbers
> don't stand up.
>
>> Of course, in that case you don't even need the switches, so
>> just cabling the machines together will work... 8*}
>
> How do you "just cable 120 machines together" without switches ???

Two NICs each and configure each as a bridge <eg>.

>

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
February 28, 2005 5:20:39 PM

In article <qLmdnSIR05P2v7zfRVn-uw@giganews.com>,
DELETE_westes@earthbroadcast.com says...
> Clearly the disk and network I/O bottlenecks at the file servers are big.
> But that's another thread.

Right, start with dedicated storage controllers and lots and lots of
spindles. HP SmartArray hardware is a good place to start looking.

> The only thing I'm concerned about in the current thread is how to
> cheaply guarantee that the network itself is not a bottleneck for the
> servers processing information that they bring down from the file servers.

Alacritech seems to have the best CPU load per (whatever unit of
transfer you like) of the current Gigabit ethernet adapters. The
last I looked, they only supported Windows platforms though, which
may or may not be an issue for you. That will help keep the network
I/O from getting in the way of system work being done.

You are probably correct that a large, high-end non-blocking switch
is what you need *IF* everybody is sending and receiving in parallel
all the time. If the workstations are randomly hitting the server,
at intervals, it might not be such a problem. Odds are in such
a scenario that the network will be far less than fully utilized
while the disk controllers (on both the server and workstation
sides) will be firewalled fairly often.

Why not build a small 30-node test bed, using a couple of 16-port
Netgear gig-E switches (about $800 in hardware) and running some
simulated load testing to see where the bottleneck is before buying
an expensive switch only to find out that you should be spending
money on storage hardware instead?

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 28, 2005 7:10:12 PM

In article <cvvdgl0dr0@news2.newsguy.com>,
jclarke.usenet@snet.net.invalid says...
> Randy Howard wrote:

> >> Of course, in that case you don't even need the switches, so
> >> just cabling the machines together will work... 8*}
> >
> > How do you "just cable 120 machines together" without switches ???
>
> Two NICs each and configure each as a bridge <eg>.

Ugh.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 28, 2005 8:04:19 PM

Thank you for your excellent post, which included enough detail on the Cisco
product to tell me that (as usual) the truth is complex and we will probably
need to do the grunt work to detail out the requirements. While I
appreciate the need to do that, you will have to trust me that there are
many organizations that think they can just spend their way out of any design
problem, and they end up not doing design at all.

I remember one situation where a company bought a mainframe upgrade for $1M
to speed up a key application. The application continued to be slow. I
did performance tuning to discover the bottleneck. What I found was that
their database vendor had a blocking queue on reports against the database.
Only one user at a time was allowed to run reports!! None of this was
documented. It was something they only admitted when we confronted them
with the data. No amount of additional CPU would have changed the
processing time.

It's difficult to believe that managers who control money in large companies
would spend $1M (or more) rather than spend $10K to just have someone think
and research to define what problem they really need to solve. But I see
exactly that all of the time. And I have given up on changing the world.
The world is largely run by people who act on gut instinct, and many of
those people get extremely offended when you point out that someone needs to
write requirements. They usually say something dismissive like "Well
that's what we have done!" or "We have excellent people and we know our
problem, now are you going to help me solve it?" It does no good to point
out that the chicken scratches on some chalkboard aren't requirements.
Some organizations fundamentally don't understand how to write requirements,
or how to analyze requirements.

If we could cheaply get a very large switch that really could run 120
hosts at wire speed simultaneously, then we wouldn't need to work out
the details of the actual throughput on individual workstations for
purposes of the network design. To the extent that the network scales
to some usage X on all ports simultaneously, you are covered by
network capacity if you actually use 25% of X. Intentional overdesign
is not a bad thing if it does not substantially change cost. When I
see a 10-slot Extreme BlackDiamond switch (which claims 384 Gbps
backplane speed) selling for next to nothing, it at least makes you
wonder whether overdesign might cost very little. After reading your
post, the first thought I have is that the
Extreme product probably has a dozen board-level bottlenecks that they do
not disclose. I would be interested in any feedback from anyone who has
tried to stress that product, particularly using its layer 3 capabilities.

As far as file servers go, the thought was that they would end up using
either PCI Express bus servers with quad gigE cards, or just increase the
number of servers using multiple single port cards. The number of server
ports they end up needing obviously does in turn affect the number of
network ports you need. And if you need to now cross connect two large
switches, how you do that without buying expensive new 10 Gbps technology
becomes a problem.

--
Will


"Walter Roberson" <roberson@ibd.nrc-cnrc.gc.ca> wrote in message
news:cvrq7e$bkj$1@canopus.cc.umanitoba.ca...
> If your setup is such that there could be N simultaneous connections
> to M servers, and N > M and you are asking us for a design in which
> "the network itself is not a bottleneck", then you have an implicit
> requirement that the server port must be able to operate at
> somewhere between (ceiling(N/M) * 1 Gbps) and (N * 1 Gbps), depending
> on the traffic patterns. We have to know what that peak rate is
> in order to advise you on the correct switch. Current off-the-shelf
> technologies get you 1 Gbps interfaces on a wide range of
> devices, 10 Gbps XENPAK interfaces on a much lesser range of devices;
> 2 Gbps interfaces are also available in some models -- but if that's
> your spec then we need to know so that we rule out devices that
> can't handle that load.
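A quick sketch of that implicit requirement (assuming every client can actually fill a full 1 Gbps; `server_port_gbps` is an illustrative helper):

```python
from math import ceil

def server_port_gbps(n_clients: int, m_servers: int) -> tuple[float, float]:
    """Best- and worst-case per-server bandwidth (Gbps) when N clients
    each run a 1 Gbps stream against M servers."""
    best = ceil(n_clients / m_servers) * 1.0   # load perfectly balanced
    worst = n_clients * 1.0                    # everyone hits one server
    return best, worst

# 120 clients against 4 servers: each server port needs at least
# 30 Gbps even in the balanced case -- far beyond one GigE NIC.
best, worst = server_port_gbps(120, 4)
print(best, worst)   # 30.0 120.0
```

That's why the server-side port count drives the whole design: the switch fabric is only half of the wirespeed question.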
Anonymous
February 28, 2005 8:21:54 PM

In article <1126gtv5bra9lf3@news.supernews.com>, strap@hanh-ct.org
says...
> Randy Howard wrote:
>
> > You can achieve the same thing with multicast load
> > on IGMP switches, they'll work for a brief period
> > sending the stream only to subscribed ports, then
> > suddenly start flooding the traffic to all the ports.
>
> I can vouch for GSM7312's (and gsm7324's) doing this. Their layer 2
> stuff does as well - gsm712, fsm726s, etc. Do NOT attempt to push out a GHOST
> image over multicast from a gig host if you use these - the switch WILL
> melt down.

MOST IGMP switches will do this. It's very hard to find one that
will not.
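One structural reason even a switch that filters on MAC addresses can't be precise about multicast: 32 different IPv4 groups map onto each Ethernet multicast MAC, so only real IGMP snooping can tell subscribers apart. A sketch of the RFC 1112 mapping (illustrative helper name):

```python
import ipaddress

def multicast_mac(group: str) -> str:
    """Map an IPv4 multicast group to its Ethernet MAC: 01:00:5e
    followed by the low 23 bits of the group address (RFC 1112)."""
    addr = int(ipaddress.IPv4Address(group))
    low23 = addr & 0x7FFFFF          # top 5 group bits are discarded
    return "01:00:5e:%02x:%02x:%02x" % (
        (low23 >> 16) & 0x7F, (low23 >> 8) & 0xFF, low23 & 0xFF)

# Two distinct groups, one MAC -- a MAC-only filter can't separate them.
print(multicast_mac("224.1.1.1"))    # 01:00:5e:01:01:01
print(multicast_mac("239.129.1.1"))  # 01:00:5e:01:01:01
```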

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
February 28, 2005 9:11:10 PM

Randy Howard <randyhoward@FOOverizonBAR.net> wrote:
>How do you "just cable 120 machines together" without switches ???

Machine 1 to machine 2 with a crossover cable(*)
[...]
machine 119 to machine 120 with a crossover cable.

Agree ahead of time that it's not a useful scenario, though they will
get "wire-speed". 8*)

(*) IIRC, a 1000BaseT crossover is a regular straight-thru cable, but
that's a detail.
Anonymous
February 28, 2005 9:20:34 PM

Begin <MPG.1c8d0f2cb1d7b88f98a100@news.verizon.net>
On 2005-02-28, Randy Howard <randyhoward@FOOverizonBAR.net> wrote:
> In article <1126gtv5bra9lf3@news.supernews.com>, strap@hanh-ct.org
[give a switch a decent load and...]
>> [...] - the switch WILL
>> melt down.
>
> MOST IGMP switches will do this. It's very hard to find one that
> will not.

Ok, now I'm intrigued. Which ones are known to _not_ do this? I can make
a couple of guesses (cisco, extreme, hp, alphabetical order), but I
haven't actually tried or anything. Thoughts? Experiences?


--
j p d (at) d s b (dot) t u d e l f t (dot) n l .
Anonymous
February 28, 2005 9:20:35 PM

jpd wrote:

>
> Ok, now I'm intrigued. Which ones are known to _not_ do this? I can make
> a couple of guesses (cisco, extreme, hp, alphabetical order), but I
> haven't actually tried or anything. Thoughts? Experiences?
>
>

I would be annoyed if a Nortel Passport 8600 did this.
Anonymous
March 1, 2005 4:12:21 AM

> Most of the low-cost high port-count switches (24,
> 48) will not take kindly to you trying to run all
> the ports wide open simultaneously. This has nothing
> to do with the presence or absence of a published claim
> to be a non-blocking switch.

What do you call "low-cost" or better yet what's the cost floor at which
you'd not expect to melt down?
Anonymous
March 2, 2005 1:00:49 PM

In article <38h5niF5n56k2U1@individual.net>,
read_the_sig@do.not.spam.it.invalid says...
> Begin <MPG.1c8d0f2cb1d7b88f98a100@news.verizon.net>
> On 2005-02-28, Randy Howard <randyhoward@FOOverizonBAR.net> wrote:
> > In article <1126gtv5bra9lf3@news.supernews.com>, strap@hanh-ct.org
> [give a switch a decent load and...]
> >> [...] - the switch WILL
> >> melt down.
> >
> > MOST IGMP switches will do this. It's very hard to find one that
> > will not.
>
> Ok, now I'm intrigued. Which ones are known to _not_ do this? I can make
> a couple of guesses (cisco, extreme, hp, alphabetical order), but I
> haven't actually tried or anything. Thoughts? Experiences?

It's mostly the "low cost" ones that smaller companies buy because
they're afraid of all the zeros in the prices of the serious vendors.

By and large, you can tell just by looking, because they all have the
same chassis, just different colored paint and logos attached. The
bulk of these switches (and increasingly some of the name brand
switches) are all implemented and built by a few Taiwanese companies.

Once you've found one with the problem, you can find several more
just by cracking the cases open and looking at the internals when
the chassis looks the same.

In general, anything made by Accton (and OEM'd to a bunch of vendors)
is not worth the time it takes to plug in the power cord. You just
never know what will happen with their stuff. If you've ever had
access to their stuff before it has gone RTS, you'll understand;
otherwise you wouldn't believe it possible. Even the stuff that
does get shipped is often riddled with bugs, but in areas that they
hope the majority of their customers will never see in practice.

Some of the stuff made by Delta is good, some is not. In general,
they're a notch above Accton most of the time.

Don't be fooled into thinking that the name brands you know of
build their own gear. Almost none of them do. Many don't even
stick with a consistent vendor from one model to the next.

I've even seen some of the supposedly "good" gear, like HP Procurve
switches do flaky stuff, like not even handling WOL packets
correctly. In general, every switch (not just vendor) is different
and you have to look at each one individually to be sure.

The other aspect is that for the small businesses just using them
as modern "paper cup and string" links between PCs and to browse
the web, they usually will never notice the issues we're talking
about, so there is no real reason for them to buy the higher end
gear.


--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
March 2, 2005 1:08:10 PM

In article <-LadnU8WJ_RamrnfRVn-rg@portbridge.com>, news02
@raleighthings.com says...
> > Most of the low-cost high port-count switches (24,
> > 48) will not take kindly to you trying to run all
> > the ports wide open simultaneously. This has nothing
> > to do with the presence or absence of a published claim
> > to be a non-blocking switch.
>
> What do you call "low-cost" or better yet what's the cost floor at which
> you'd not expect to melt down?

I'm not sure you can simplify it that far. I've seen the same
EXACT switch sold from 2 or 3 different vendors, with price
differences greater than the sales price of the lowest one.

You can't go by just brand name, as the "brands" are really
nothing more than "chassis colors" in a lot of cases, with
the guts all coming from the same place.

The good news is if you buy one of the big, serious switches,
then you know you can at least get your salesman's attention
if/when you run into trouble. Unfortunately, there is often
a HUGE price difference between their stuff and 10 or 15
competing products at a much lower price point. Some of those
lower cost products are decent, some are not. You have to
look at each one.

So, what to do? If you plan on buying a dozen 24 port switches,
pick one you think does what you need at a decent price, buy
ONE of them from someplace with a reasonable return policy, and
put it through a trial by fire for a few weeks with everything
you can think to throw at it. If you do multicasting, then
send as much of that through it as you can. If you think you'll
be running the switch wide open, then do that. I've seen
switches reboot due to firmware bugs when performance counters
overflow internally. So send it long streams of data, for days
or weeks and make sure it doesn't roll over. If it makes it,
order 11 more. If not, return it and start over.
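The counter-rollover timing is easy to quantify (a back-of-envelope sketch; `wrap_seconds` is just an illustrative name):

```python
def wrap_seconds(counter_bits: int, gbps: float = 1.0) -> float:
    """Seconds until an octet counter wraps at a given line rate."""
    bytes_per_sec = gbps * 1e9 / 8
    return (2 ** counter_bits) / bytes_per_sec

# A 32-bit octet counter wraps in about 34 seconds at GigE wirespeed,
# so even a short soak test exercises the rollover path many times.
print(round(wrap_seconds(32), 1))       # 34.4
print(round(wrap_seconds(64) / 3.15e7)) # roughly 4700 years
```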

You cannot rely on third party testing and "certifications" to
guarantee they work. Many of those are technically only as
challenging as the vendor writing the check to the test house
so they can be listed on a website with some gold star next
to the model number. It's a joke, but not a funny one.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
March 3, 2005 7:48:24 PM

> Just a comment but Consumer Reports as a matter of policy buys everything
> they test through the normal purchasing channels, they don't rely on the
> manufacturers sending them samples that they believe might be altered to
> improve the test results.

Which creates a separate problem in that they really don't get a
representative sample of the goods they buy. I'm sure they've discussed
this and decided this is best for them but if they get the 1 in 1000
that's way worse or way better than average, the review is useless.

And if they buy rev 2 because where they bought is the last part of the
country to get the rev 3 units, it's worse than useless.

CU does a good job if you know how to use the results. But they are far
from perfect.
Anonymous
March 3, 2005 7:52:07 PM

> So, what to do? If you plan on buying a dozen 24 port switches,
> pick one you think does what you need at a decent price, buy
> ONE of them from someplace with a reasonable return policy, and
> put it through a trial by fire for a few weeks with everything
> you can think to throw at it. If you do multicasting, then
> send as much of that through it as you can. If you think you'll
> be running the switch wide open, then do that. I've seen
> switches reboot due to firmware bugs when performance counters
> overflow internally. So send it long streams of data, for days
> or weeks and make sure it doesn't roll over. If it makes it,
> order 11 more. If not, return it and start over.

HP is offering a 30-day return policy on up to 2 units of some models of their
switches just now. I need 4 for one office but am buying 2 just to run
them through their paces before I jump in with both feet.

I'm looking at about $10k total. I can just imagine the sale FUD if I
was looking to spend $250K or $2.5M. Almost like the old mainframe days
when IBM and others (some still do) made you sign licensing agreements
where you would never make public any benchmarks you might do.
Anonymous
March 4, 2005 1:00:11 AM

In article <ZJydnU463Jm1G7rfRVn-qQ@portbridge.com>,
David Ross <news02@raleighthings.com> wrote:
:> Just a comment but Consumer Reports as a matter of policy buys everything
:> they test through the normal purchasing channels,

:And if they buy rev 2 because where they bought is the last part of the
:country to get the rev 3 units, it's worse than useless.

CU deliberately spreads their buying across the country.
--
Will you ask your master if he wants to join my court at Camelot?!
Anonymous
March 4, 2005 1:00:12 AM

Walter Roberson wrote:

> In article <ZJydnU463Jm1G7rfRVn-qQ@portbridge.com>,
> David Ross <news02@raleighthings.com> wrote:
> :> Just a comment but Consumer Reports as a matter of policy buys everything
> :> they test through the normal purchasing channels,
>
> :And if they buy rev 2 because where they bought is the last part of the
> :country to get the rev 3 units, it's worse than useless.
>
> CU deliberately spreads their buying across the country.

But their dependence on a single data point per model or even per
manufacturer means a good shopper treats CU the same way: as a single
point of data in the buying experience.

And let's not even get into the "political bias" they bring into their
"unbiased" testing. What you test FOR and how you weight the tests is a
major bias they don't discuss.
Anonymous
March 4, 2005 1:01:21 AM

In article <XfOdndLTNv6RGrrfRVn-og@portbridge.com>,
David Ross <news02@raleighthings.com> wrote:
:Almost like the old mainframe days
:when IBM and others (some still do) made you sign licensing agreements
:where you would never make public any benchmarks you might do.

Microsoft's .NET EULA has a clause to that effect.
--
Sub-millibarn resolution bio-hyperdimensional plasmatic space
polyimaging is just around the corner. -- Corry Lee Smith
Anonymous
March 4, 2005 1:01:22 AM

> :Almost like the old mainframe days
> :when IBM and others (some still do) made you sign licensing agreements
> :where you would never make public any benchmarks you might do.
>
> Microsoft's .NET EULA has a clause to that effect.

For those old folks around here, remember when the weekly rag,
(ComputerWorld?), had an ad every week showing SyncSort wiping out IBM's
sort. After a few years, IBM posted some ads showing how they had bested
SyncSort. Then it came out they had rigged the results. Not just a tad,
but by a huge disparity in how the tests were run. Several folks at IBM
got spanked over that one.

For those who don't know these references, it's from the late 70s, early
80s.
Anonymous
March 4, 2005 1:42:12 AM

In article <XfOdndLTNv6RGrrfRVn-og@portbridge.com>, news02
@raleighthings.com says...
> > So, what to do? If you plan on buying a dozen 24 port switches,
> > pick one you think does what you need at a decent price, buy
> > ONE of them from someplace with a reasonable return policy, and
> > put it through a trial by fire for a few weeks with everything
> > you can think to throw at it. If you do multicasting, then
> > send as much of that through it as you can. If you think you'll
> > be running the switch wide open, then do that. I've seen
> > switches reboot due to firmware bugs when performance counters
> > overflow internally. So send it long streams of data, for days
> > or weeks and make sure it doesn't roll over. If it makes it,
> > order 11 more. If not, return it and start over.
>
> HP is having a 30 day return policy for up to 2 of some models of their
> switches just now. I need 4 for one office but am buying 2 just to run
> them through their paces before I jump in with both feet.

An excellent plan. Be sure to run all ports in parallel, not just
test a few of them at a time. Obviously, for this to work best, you
need stress and performance measurement software that can achieve
wirespeed (and hardware fast enough to get there). To verify this,
take two fast systems. Put them on any pair of ports by themselves.
If you cannot achieve 125MBytes/s one way (ram to ram) between
the machines, or something in the neighborhood of 200-220 MBytes/s
FDX (ram to ram) between those two systems, then you don't have the
right test software, the right hardware, or both. (BTW, you should
be able to get the same exact numbers just using a direct connection
between the two systems if you suspect the switch is a problem).
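As a sanity check on those targets: the raw 125 MBytes/s figure is the line rate, but once Ethernet framing and TCP/IP headers are counted, the achievable one-way TCP payload tops out a bit lower. A rough sketch (illustrative helper, standard frame overheads assumed):

```python
def gige_payload_mbytes(mtu: int = 1500) -> float:
    """Max one-way TCP payload rate on GigE in MBytes/s, counting
    Ethernet framing (preamble 8, header 14, FCS 4, interframe gap 12)
    and 40 bytes of TCP/IP headers per packet (no options)."""
    wire_bytes = mtu + 8 + 14 + 4 + 12   # bytes on the wire per frame
    payload = mtu - 40                    # TCP payload per frame
    return 125e6 * payload / wire_bytes / 1e6

# About 118.7 MBytes/s of payload -- so sustained numbers in the
# 110-120 MBytes/s range really do mean wirespeed.
print(round(gige_payload_mbytes(), 1))   # 118.7
```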

If you get that working, then try these types of tests:
[ I am going to pretend like they are only 4 port switches just to
make this easier to explain. Extending it to 16, 24, 48, etc. is
trivial ]

Switch
P1 P2 P3 P4

S C C C (S=server, C= client)

BTW, when I say client, I mean in the TCP sense, not in the "slow
desktop" sense.

Put a server on P1, and have p2, p3 and p4 all send and receive (in
parallel if possible) simultaneously to P1 and make sure it doesn't
start clamping down as clients are added, all being funnelled into 1
port. For larger switches, they are often split logically
internally, with two "half switches" that communicate through a
higher speed interconnect. For example, a 24-port switch may be two
12-port switches ganged together. Usually this is done logically, so
that 1-12 are on one "side", and 13-24 are on the other. Make sure
that you measure throughput with all clients on the same "half" of
the switch, and on the opposite "half" and you get equivalent
results. You also may want to have multiple servers, say one or two
on each half, with clients funneling into them from the same, or
opposite sides of the switch and verify that performance doesn't
drop off.

Next, set up your switch like this:

Switch
P1 P2 P3 P4
Ca Cb Cc Cd

Run traffic (FDX if possible) between pairs of ports. Ca <-> Cb,
Cc <-> Cd, etc. By doing this, if the switch is truly non-blocking,
you should not see the individual pairs slow down, even with all of
them running wide open.
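The pairing plans above can be generated mechanically (an illustrative sketch; it assumes the two logical "halves" are ports 1..N/2 and N/2+1..N, as described above):

```python
def port_pairs(n_ports: int, cross_halves: bool) -> list[tuple[int, int]]:
    """Pair up ports for full-duplex soak tests. With cross_halves
    set, every pair straddles the switch's two logical halves, so all
    traffic must traverse the internal interconnect."""
    half = n_ports // 2
    if cross_halves:
        return [(p, p + half) for p in range(1, half + 1)]
    return [(p, p + 1) for p in range(1, n_ports + 1, 2)]

print(port_pairs(8, cross_halves=False))  # [(1, 2), (3, 4), (5, 6), (7, 8)]
print(port_pairs(8, cross_halves=True))   # [(1, 5), (2, 6), (3, 7), (4, 8)]
```

If the same-half and cross-half runs give different numbers, you've found the internal interconnect bottleneck.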

If you care about multicast, then you want to try this as well, where
you generate on one port, and subscribe from some small subset of
the other ports. Verify (if it supports IGMP) that *only* the ports
that are subscribed are getting the traffic. Do this by having the
non-subscribed ports sit idle so you can easily tell if the switch
starts flooding by looking at the traffic lights.

Also, during all of the above scenarios, when the switch is under
high load, attempt to manage the switch (assuming it is a managed
switch) via an ethernet connection, or via the serial port. A
management interface that doesn't essentially cease to respond during
wide open tests, particularly multicast tests, is rare.

If it is a stackable switch, you want to extend the above to put
as much traffic across the stack links as possible.

If it supports VLANs, and all the other goodies, there are endless
variations and additional items to test. The ones above are
important, I have seen some less than wonderful switches actually
start randomly dropping links when the traffic levels on all
the ports are near wirespeed.

Also, make sure you throw in some 100mbit and 10mbit (if that's
a possibility) into the mix to make sure it handles that properly.

This is by NO means an exhaustive test as described above, but it
does represent a very important (and often omitted) set of test
cases for a much larger full test of a switch. Many of the
switches on the market today would never have made it out the
door in their present form if these tests had been run on them
before their RTS date.

> I'm looking at about $10k total. I can just imagine the sale FUD if I
> was looking to spend $250K or $2.5M. Almost like the old mainframe days
> when IBM and others (some still do) made you sign licensing agreements
> where you would never make public any benchmarks you might do.

Because they didn't want to take the risk that their next customer
might find out that their hardware was slower than the next guy's.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
March 4, 2005 1:42:13 AM

>>HP is having a 30 day return policy for up to 2 of some models of their
>>switches just now. I need 4 for one office but am buying 2 just to run
>>them through their paces before I jump in with both feet.
>
> An excellent plan. Be sure to run all ports in parallel, not just
> test a few of them at a time. Obviously, for this to work best, you
> need stress and performance measurement software than can achieve
> wirespeed (and hardware fast enough to get there). To verify this,

I don't know that I have time (or the client's dimes) for all you
suggest. To some degree we're future buying. These things will be 1/2
the size of the HP 4000s they'll replace, and due to an office
rearrangement we now have 4 clusters of "stuff" vs 3 in the past. If
nothing else I plan to do a server-to-server duplication of the 40,000
Word, Excel, CAD, Photoshop, etc. files, run a backup to tape, and
have half of the remaining machines watching videos from the internet
and the rest watching movies off another server. Just to see if the
switches will crash. This should cover our "normal" load for a while.
Each of the things I mentioned will move data across the network, and
I'll do it so that one run pounds the switch-to-switch setup while the
other stays within a single switch. Then I'll move on to some of the
other setups you mentioned.

It's hard to find the time to test VLANs beyond what we plan to do,
as you noted.

:) 
Anonymous
March 4, 2005 3:31:38 AM

David Ross wrote:

> Walter Roberson wrote:
>
>> In article <ZJydnU463Jm1G7rfRVn-qQ@portbridge.com>,
>> David Ross <news02@raleighthings.com> wrote:
>> :> Just a comment but Consumer Reports as a matter of policy buys
>> :> everything they test through the normal purchasing channels,
>>
>> :And if they buy rev 2 because where they bought is the last part of the
>> :country to get the rev 3 units, it's worse than useless.
>>
>> CU deliberately spreads their buying across the country.
>
> But their dependence on single point per model or even manufacturer
> means a good shopper treats CU the same. As a single point of data in
> the buying experience.

They seldom provide a "single point per manufacturer". In most tests there
are multiple models by a given manufacturer and they also have owner-survey
results.

> And let's not even get into the "political bias" they bring into their
> "unbiased" testing. What you test FOR and how you weight the tests is a
> major bias they don't discuss.

Their methodology is another story. Sometimes it's good, sometimes it's
not.

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
March 4, 2005 4:32:13 PM

In article <d08rrp01915@news1.newsguy.com>,
jclarke.usenet@snet.net.invalid says...

> > And let's not even get into the "political bias" they bring into their
> > "unbiased" testing. What you test FOR and how you weight the tests is a
> > major bias they don't discuss.
>
> Their methodology is another story. Sometimes it's good, sometimes it's
> not.

One only has to look as far as their legal loss in court to Bose over
them actually having the temerity to post a legitimate evaluation of
the horrific frequency response characteristics of a particular model
of Bose speakers, and then the subsequent 100% laudatory remarks upon
Bose products after that legal disaster to realize they are not worth
reading.

"If you sue us, we'll give you a good review, even though your
product is abysmal."

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
March 4, 2005 4:32:14 PM

Randy Howard wrote:

> In article <d08rrp01915@news1.newsguy.com>,
> jclarke.usenet@snet.net.invalid says...
>
>> > And let's not even get into the "political bias" they bring into their
>> > "unbiased" testing. What you test FOR and how you weight the tests is a
>> > major bias they don't discuss.
>>
>> Their methodology is another story. Sometimes it's good, sometimes it's
>> not.
>
> One only has to look as far as their legal loss in court to Bose

What "legal loss in court to Bose" is this? If you are referring to the
1983 case then Consumer Reports took it to the Supreme Court and whupped
Bose's ass.

> over
> them actually having the temerity tp post a legitimate evaluation of
> the horrific frequency response characteristics of a particular model
> of Bose speakers, and then the subsequent 100% laudatory remarks upon
> Bose products after that legal disaster to realize they are not worth
> reading.
>
> "If you sue us, we'll give you a good review, even though your
> product is abysmal."

Now why would they want to give Bose good reviews after they spent all that
money securing the right to give them bad ones?


--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
Anonymous
March 4, 2005 4:33:34 PM

In article <9JudnSSdI-LHLLrfRVn-gQ@portbridge.com>, news02
@raleighthings.com says...
> > :Almost like the old mainframe days
> > :when IBM and others (some still do) made you sign licensing agreements
> > :where you would never make public any benchmarks you might do.
> >
> > Microsoft's .NET EULA has a clause to that effect.
>
> For those old folks around here, remember when the weekly rag,
> (ComputerWorld?), had an ad every week showing SyncSort wiping out IBM's
> sort. After a few years, IBM posted some ads showing how they had bested
> SyncSort. Then it came out they had rigged the results. Not just a tad,
> but by a huge disparity in how the tests were run. Several folks at IBM
> got spanked over that one.
>
> For those who don't know these references, it's from the late 70s, early
> 80s.

You don't have to go back that far. Remember Apple cooking the
results for G5 performance by using different compiler optimization
settings and hoping that nobody would verify their results? That
was just a couple years ago, IIRC.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
Anonymous
March 4, 2005 4:33:35 PM

>>For those old folks around here, remember when the weekly rag,
>>(ComputerWorld?), had an ad every week showing SyncSort wiping out IBM's
>>sort. After a few years, IBM posted some ads showing how they had bested
>>SyncSort. Then it came out they had rigged the results. Not just a tad,
>>but by a huge disparity in how the tests were run. Several folks at IBM
>>got spanked over that one.
>>
>>For those who don't know these references, it's from the late 70s, early
>>80s.
>
>
> You don't have to go back that far. Remember Apple cooking the
> results for G5 performance by using different compiler optimization
> settings and hoping that nobody would verify their results? That
> was just a couple years ago, IIRC.
>
The interesting thing about the SyncSort episode was that ComputerWorld
(?) was read by nearly EVERYONE at the time and there seemed to be a
different ad each week showing a different trouncing of IBM sort by sync
sort. And this went on for YEARS. Literally years. That ad series ran
longer than the life span of most software these days. :) 
Anonymous
March 5, 2005 8:01:57 AM

Randy Howard wrote:
> You don't have to go back that far. Remember Apple cooking the
> results for G5 performance by using different compiler optimization
> settings and hoping that nobody would verify their results? That
> was just a couple years ago, IIRC.

Or the unnamed video card vendor who put the PC Magazine benchmark into
hardware. That was pretty funny. "WOW..this card is out of this
world!" "Wait...why, then, does it blow chunks in real world apps.."

--

hsb


"Somehow I imagined this experience would be more rewarding" Calvin
**************************ROT13 MY ADDRESS*************************
Due to the volume of email that I receive, I may not not be able to
reply to emails sent to my account. Please post a followup instead.
********************************************************************
Anonymous
March 5, 2005 5:09:28 PM

Hansang Bae wrote:

> Or the unamed video card vendor who put the PC Magazine benchmark into
> hardware.  That was pretty funny.  "WOW..this card is out of this
> world!"  "Wait...why, then, does it blow chunks in real world apps.."

That also used to be a favourite trick with motherboards. Another was
"Winmarks". Someone would compare their 386 system to a 6 MHz IBM AT with
a 286 CPU and find it ran (for instance) 20 times faster. They'd then
claim they had a 120 MHz CPU!