120 Hosts Running GigE at Wire Speed Minimum Cost

Archived from groups: comp.dcom.lans.ethernet (More info?)

What is the minimum cost solution for running 120 hosts at wire speed on
GigE? I am thinking that something like two used Foundry or Extreme
switches would do this at lowest cost.

--
Will
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

"Will" <DELETE_westes@earthbroadcast.com> wrote:
>What is the minimum cost solution for running 120 hosts at wire speed on
>GigE?

What are you going to _do_ with 50 terabytes per hour?
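(For the curious, a rough back-of-the-envelope behind that figure, assuming
all 120 ports stream in one direction at line rate:

    120 hosts x 1 Gbit/s  = 120 Gbit/s = 15 GB/s
    15 GB/s x 3600 s/hour = 54,000 GB  = ~54 TB per hour

so "50 terabytes per hour" is the right order of magnitude -- and double it
again if you count full-duplex traffic in both directions.)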
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com>,
<William P.N. Smith> wrote:
>"Will" <DELETE_westes@earthbroadcast.com> wrote:
>>What is the minimum cost solution for running 120 hosts at wire speed on
>>GigE?
>
>What are you going to _do_ with 50 terabytes per hour?
>


Your first cost may be buying new hosts. A good desktop PC can't fill
a GbE pipe. Or so I'm told.


--

a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

adykes@panix.com (Al Dykes) wrote:
>To be fair, the OP didn't say "desktop", he didn't say anything.

True, we're getting off the original subject. I doubt there's a
machine in existence that'll do "wire speed" and do anything useful
with it, though, so now we're left wondering how far off "wire speed"
we can be and still meet the OP's requirements. My 913 megabits was
regular desktop machines talking thru two D-Link DGS-1005D switches,
but the OP wants 120 machines. If there are no other criteria and
this is a homework assignment, then 60 of those at $60 each will
satisfy the criteria. Of course, in that case you don't even need the
switches, so just cabling the machines together will work... 8*}
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <cvpuj2$98u$1@panix5.panix.com>, adykes@panix.com says...
> In article <s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com>,
> <William P.N. Smith> wrote:
> >"Will" <DELETE_westes@earthbroadcast.com> wrote:
> >>What is the minimum cost solution for running 120 hosts at wire speed on
> >>GigE?
> >
> >What are you going to _do_ with 50 terabytes per hour?
> >
>
>
> You first cost may be buying new hosts. A good desktop PC can't fill
> [sic: "Your first cost"]
> a GbE pipe. Or so I'm told.

Many can, but not with conventional "off the shelf" applications.
Disk I/O is usually a major factor, unless you're just beaming
data to/from RAM for fun.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

William P.N. Smith wrote:

>>What is the minimum cost solution for running 120 hosts at wire speed on
>>GigE?
>
> What are you going to do with 50 terabytes per hour?

Attempt to keep up with the Windows viruses ;-)
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

The end user is building a major animation film. Each of 120 workstations
brings a 100GB file to its local file system, processes it for whatever
reason, and then uploads it back to a common server.

Rather than methodically isolate every bottleneck in the application, I
would like to focus this conversation on one of the many bottlenecks, and
that is the network itself. Personally I think the biggest bottleneck is
disk I/O on the server, but that's a different thread. I just want to make
sure that the network itself doesn't become a bottleneck.
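(Back-of-the-envelope, assuming for the sake of argument a single file
server with one 1 Gbit/s port and purely serial transfers:

    one 100 GB copy over a dedicated GigE link:
        100 GB x 8 bits/byte / 1 Gbit/s = 800 s, or roughly 13 minutes
    120 workstations each pulling 100 GB down and pushing 100 GB back
    through that single server port:
        120 x 200 GB x 8 / 1 Gbit/s = 192,000 s, or roughly 53 hours

So the edge ports are unlikely to be the limit unless the server side is
scaled well beyond one gigabit interface and one set of spindles.)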

--
Will


<William P.N. Smith> wrote in message
news:s4t021dld1n0kmj1ojp4pl2t94spv5imbs@4ax.com...
> "Will" <DELETE_westes@earthbroadcast.com> wrote:
> >What is the minimum cost solution for running 120 hosts at wire speed on
> >GigE?
>
> What are you going to _do_ with 50 terabytes per hour?
>
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

Randy Howard <randyhoward@fooverizonbar.net> wrote:
>> Your first cost may be buying new hosts. A good desktop
>> PC can't fill a GbE pipe. Or so I'm told.

> Many can, but not with conventional "off the shelf"
> applications. Disk I/O is usually a major factor,
> unless you're just beaming data to/from RAM for fun.

RAM-to-RAM is a big application for compute clusters.

AFAIK, most desktops cannot get GbE wirespeed, unless
their controller is on something faster than a PCI bus.
The usual limit there is around 300 Mbit/s, mostly
caused by limited PCI burst length and long setup.
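(Rough numbers, for illustration only: classic 32-bit/33 MHz PCI peaks at
32 bits x 33 MHz ~= 1 Gbit/s (about 133 MB/s), shared by every device on
the bus, and that's before arbitration, setup cycles and short bursts eat
into it -- which is how real GigE throughput ends up in the 300 Mbit/s
range on plain PCI. A NIC on PCI-X or another faster interconnect doesn't
hit that ceiling.)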

-- Robert
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <p9idnZri3cUevL3fRVn-2w@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:What is the minimum cost solution for running 120 hosts at wire speed on
:GigE? I am thinking that something like two used Foundry or Extreme
:switches would do this at lowest cost.

Amazing coincidence that the Foundry FastIron II just -happens-
to be rated for exactly 120 wire speed gigabit ports.

Somehow, in my network, we never happen to have nice round multiples
of 12 -- we end up with (e.g.) 79 hosts in a wiring closet,
plus a couple of uplinks.

Odd too that one would have 120 gigabit wirespeed hosts in one place
and not be interested in adding a WAN connection, and not be interested
in redundancy...

======
One must be careful with modular architectures, in that often the
switching speed available between modules is not the same as the
switching speed within the same module.
--
IMT made the sky
Fall.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

FastIron II doesn't support 120 optical ports, so the backplane speed isn't
all that interesting. Sure you could have a tree of switches, but in this
case the 120 hosts happen to all be in racks in the same room, and that's
why I thought an Extreme BlackDiamond or Foundry BigIron might give plenty
of horsepower at negligible cost (assuming you buy used).

--
Will


"Walter Roberson" <roberson@ibd.nrc-cnrc.gc.ca> wrote in message
news:cvq7qk$75t$1@canopus.cc.umanitoba.ca...
> In article <p9idnZri3cUevL3fRVn-2w@giganews.com>,
> Will <DELETE_westes@earthbroadcast.com> wrote:
> :What is the minimum cost solution for running 120 hosts at wire speed on
> :GigE? I am thinking that something like two used Foundry or Extreme
> :switches would do this at lowest cost.
>
> Amazing coincidence that the Foundry FastIron II just -happens-
> to be rated for exactly 120 wire speed gigabit ports.
>
> Somehow, in my network, we never happen to have nice round multiples
> of 12 -- we end up with (e.g.) 79 hosts in a wiring closet,
> plus a couple of uplinks.
>
> Odd too that one would have 120 gigabit wirespeed hosts in one place
> and not be interested in adding a WAN connection, and not be interested
> in redundancy...
>
> ======
> One must be careful with modular architectures, in that often the
> switching speed available between modules is not the same as the
> switching speed within the same module.
> --
> IMT made the sky
> Fall.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

"Will" <DELETE_westes@earthbroadcast.com> wrote:
>The end user is building a major animation film. Each of 120 workstations
>brings a 100GB file to its local file system, processes it for whatever
>reason, and then uploads it back to a common server.

So you need a server with a 120 gigabit NIC, and a server port on your
switch of the same speed?

Again, if 90% is good enough, then SOHO unmanaged switches are good
enough. If the network is faster than your disks, why spend any brain
cycles on how many nines you can get out of your network?

You are talking millions of dollars worth of hardware, why ask this
kind of question on Usenet? [FWIW, the upload-process-download thing
sounds really inefficient...]
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

You are assuming that there is one file server. That would be the worst
possible design, right?

Regarding USENET, you are assuming that this is the only input to the design
process? You are assuming that no one on USENET could possibly have even one
microscopically significant idea that might improve any aspect of the
design? Pretty pessimistic assessment of the medium in which you are
participating.... Considering that the cost is next to zero, if you get
nothing you have lost nothing. And if you get even one good idea, you got
the idea at an excellent cost-benefit ratio. The fact that others
benefit from the exchange, now and in the future, creates benefits for the
larger audience with access to USENET.

Your point that the workstations have local disks that are slower than the
network is well taken. But the disks are capable of better than
100BaseT (100 Mbit/s) speeds, so gigE just happens to be the next step up that
bypasses that particular bottleneck. And these days gigE is cheap.

--
Will


<William P.N. Smith> wrote in message
news:r67221do9o7pe3nj24oina3om2ssfimopb@4ax.com...
> So you need a server with a 120 gigabit NIC, and a server port on your
> switch of the same speed?
>
> Again, if 90% is good enough, then SOHO unmanaged switches are good
> enough. If the network is faster than your disks, why spend any brain
> cycles on how many nines you can get out of your network?
>
> You are talking millions of dollars worth of hardware, why ask this
> kind of question on Usenet? [FWIW, the upload-process-download thing
> sounds really inefficient...]
>
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

Will wrote:
> The end user is building a major animation film. Each of 120
> workstations brings a 100GB file to its local file system, processes
> it for whatever reason, and then uploads it back to a common server.
>
> Rather than methodically isolate every bottleneck in the application,
> I would like to focus this conversation on one of the many
> bottlenecks, and that is the network itself. Personally I think
> the biggest bottleneck is disk I/O on the server, but that's a
> different thread. I just want to make sure that the network itself
> doesn't become a bottleneck.

In that case, Force10 and Extreme would be worth a look. But if you're
comfortable with Cisco hardware, you may want to look there as well.
From a *pure* performance standpoint, Cisco may come in 2nd or 3rd, but
they have a large support infrastructure. But of course, they won't be
cheap.

--

hsb


"Somehow I imagined this experience would be more rewarding" Calvin
**************************ROT13 MY ADDRESS*************************
Due to the volume of email that I receive, I may not be able to
reply to emails sent to my account. Please post a followup instead.
********************************************************************
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <R46dneq6oYqejLzfRVn-tw@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:FastIron II doesn't support 120 optical ports,

Your posting asked for the 'minimum cost solution'. Optical is not
going to be the minimum cost solution if the hosts are within 100m
of the server.

If you have constraints such as "optical" then you should state
them upfront -- and even then you should be specific about whether,
e.g., you are looking for 100 FX connectors or GBIC or SFP.


:Sure you could have a tree of switches, but in this
:case the 120 hosts happen to all be in racks in the same room,

So you don't need 120 ports, you need 120 ports plus 1 per server
plus enough for interconnects plus some number more for connections
to the Internet (or to some other equipment used to create copies
of the data to deliver it to customers); possibly plus more for
backup hosts.
--
We don't need no side effect-ing
We don't need no scope control
No global variables for execution
Hey! Did you leave those args alone? -- decvax!utzoo!utcsrgv!roderick
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <qLmdnSIR05P2v7zfRVn-uw@giganews.com>,
Will <DELETE_westes@earthbroadcast.com> wrote:
:Clearly the disk and network I/O bottlenecks at the file servers are big.
:But that's another thread.

Excuse me, but that *isn't* "another thread". The process you
describe involves negligible communications between the hosts. This
makes a big difference in the choice of equipment.

If your setup is such that there could be N simultaneous connections
to M servers, and N > M and you are asking us for a design in which
"the network itself is not a bottleneck", then you have an implicit
requirement that the server port must be able to operate at
somewhere between (ceiling(N/M) * 1 Gbps) and (N * 1 Gbps), depending
on the traffic patterns. We have to know what that peak rate is
in order to advise you on the correct switch. Current off-the-shelf
technologies get you 1 Gbps interfaces on a wide range of
devices, 10 Gbps XENPAK interfaces on a much lesser range of devices;
2 Gbps interfaces are also available in some models -- but if that's
your spec then we need to know so that we rule out devices that
can't handle that load.
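(A worked example, purely to illustrate the formula: with N = 120 clients
and M = 4 servers, ceiling(120/4) * 1 Gbps = 30 Gbps per server port in the
best case of a perfectly even spread, and up to 120 Gbps if everyone piles
onto one server -- far beyond a single GigE or even 10 GigE server
interface, which is why the traffic pattern matters so much here.)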

But perhaps you are planning to get past 1 Gbps by using IEEE 802.3ad
linking of multiple gigabit ports on the server. If that's the case,
then we need to know that so that we know to constrain to 802.3ad
compliant devices. For example, for several years Cisco has had
its EtherChannel / Gigabit EtherChannel technology out that allowed
multiple channels to be bonded together, but that technology predates
the 802.3ad standard. Cisco supports 802.3ad in modern IOS versions,
but the cost of upgrading IOS versions on used devices with the
oomph you need is very high -- high enough that it can end up being
less expensive to buy -new- switches than "relicensing" and upgrading
software on used ones. Whereas if you don't need 802.3ad, then
used Cisco equipment could potentially be "relicensed" without
software upgrade.


:The only thing I'm concerned about in the
:current thread is how to cheaply guarantee that the network itself is not a
:bottleneck for the servers processing information that they bring down from
:the file servers.

If your server interfaces are going to run at only 1 Gbps, then
in order to "guarantee" that the network is not the bottleneck
in the circumstance that the devices really will run at "wire speed"
you are going to need 120 servers -- an increase which is going to
seriously skew the switch requirements.


The alternative to all of this, if you are content with your
users sharing 1 Gbps to each server, is to recognize that you do
not, in such a case, need to run all the ports at 1 Gbps wire speed
*simultaneously*. That makes a substantial difference in your choices!!

Your initial stated requirement of 120 hosts at gigabit wire speed
implied to us that the switches had to have an (M * 2 Gbps) switching
fabric per module, where M is the number of ports per switching
module, *and* that the backplane fabric speed had to be at least
240 Gbps (in order to handle the worst-case scenario in which
every port is communicating at wire rate full duplex with a port on
a different module.) That's a tough requirement to meet for
the backplane -- a requirement that is very much incompatible with
"minimum cost".

If, though, your requirement is really just that one device at a time
must be able to run gigabit wire rate unidirectionally with one of
the servers -- that the link must have full gigabit available
upon demand but the demands will be infrequent and non-overlapping --
then your backplane only has to be (S * 1 Gbps) where S is the
maximum number of simultaneously active servers you need. If S is,
say, 5, then the equipment you need to fill the requirement is
considerably down-scale from a 240 Gbps backplane.

If the real requirement is indeed that wire speed point to point must
be available but that few such transfers will need to be done
simultaneously, then you could potentially be working with something
as low end as a single Cisco 6509 [9 slot chassis] with Supervisor
Engine 1A [lowest available speed] and 8 x WS-X6416-GBIC [each offering
16 GBIC ports]. The module backplane interconnect for the 1A is 8 Gbps,
and the maximum forwarding rate of the modules is 32 Gbps [i.e.,
connections on the same module] when using the 1A, with a shared 32
Gbps bus as the backplane in this configuration. [Note: if such a
configuration was satisfactory and you needed at most 6 Gbps, you could
probably do much the same configuration in a single Cisco 4506 switch.]

But if you were not quite as concerned with minimum cost, then you
could use a Cisco 6506 [6 slot chassis] with Supervisor 720 [fastest
available for the 6500 series] and 3 x WS-X6748-SFP [each offering 48
SFP]. The 6748-SFP has a dual 20 Gbps module interconnect; in
conjunction with the Supervisor 720, you can get up to 720 Gbps in some
configurations. If I read the literature correctly, the base
configuration would get you up to about 240 Gbps and you would add a
WS-F6700-DFC3 distributed switching card to go beyond that, up to 384
Gbps per slot. The 6748-SFP supports frames up to 9216 bytes long.

If you were able to go copper instead of fibre, then you could
use a Cisco 6506 with one of the 48-port 10/100/1000 modules:
- WS-X6148-GE-TX for Supervisor 1A, 2, 32, or 720 (32 Gbps shared bus)
- WS-X6548-GE-TX for Supervisor 1A or 2 (1518 bytes/frame max) (8 Gbps
backplane interconnect)
- WS-X6748-GE-TX for Supervisor 720 (9216 bytes/frame max) [speeds as
noted in above paragraph]

An important point to note about the 16, 24, or 48 port gigabit
Cisco interface cards is that they are all oversubscribed relative to
the backplane interconnect [details about exactly how they share the
bandwidth vary with the card]. That makes these cards totally unsuitable
for the situation where you require that all ports -simultaneously-
be capable of running 1 Gbps to arbitrary other ports, but with
some judicious placement of the server connections can make them
just fine for the situation where you need gigabit wire rate for
any one link but do not need very many such connections simultaneously.
[And if so, then the Cisco 4506 with anything other than the entry-point
Supervisor might be a contender as well; the entry-point Supervisor is,
if I recall correctly, only usable in the 3-slot chassis, the 4503.]
--
I wrote a hack in microcode,
with a goto on each line,
it runs as fast as Superman,
but not quite every time! -- Don Libes et al.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

"Will" <DELETE_westes@earthbroadcast.com> top-posted:
>You are assuming that there is one file server. That would be the worst
>possible design, right?

Well, it's not inconsistent with the design details you've given us.
8*)

>You are assuming that no one on USENET could possibly have one
>even microscopically significant idea that might improve any aspect of the
>design?

Not at all, there are some really clever people here, including those
who helped design Ethernet. I'm more thinking along the lines of "Ask
not of Usenet, for it will tell you Yes, and No, and everything in
between."

>And if you get even one good idea, you got
>the idea at an excellent cost-benefit ratio.

True, if your time is worth nothing. 8*)

>Your point that the workstations have local disks that are slower than the
>network is a point well-taken. But the disks are capable of better than
>10/100 100BaseT speeds, so gigE just happens to be the next step up that
>bypasses that particular bottleneck. And these days gigE is cheap.

Sure, but my point is that _any_ GigE hardware will meet your
criteria, and every time I hear someone ask for "wire-speed" I know at
least that they don't understand their problem. Present company
excluded, of course.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <p0a421d23c2e97hfi1n2sh8142oe9v9ck3@4ax.com>,
<William P.N. Smith> wrote:
:Sure, but my point is that _any_ GigE hardware will meet your
:criteria,

Not if you oversubscribe a Cisco 4500, 5000, 6000, or 6500...

--
Contents: 100% recycled post-consumer statements.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

Will wrote:
> Personally I think the biggest bottleneck is
> disk I/O on the server,

Personally, I think you are right about that! :)
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <Td1Ud.57537$iC4.28684@newssvr30.news.prodigy.com>,
redelm@ev1.net.invalid says...
> Randy Howard <randyhoward@fooverizonbar.net> wrote:
> >> Your first cost may be buying new hosts. A good desktop
> >> PC can't fill a GbE pipe. Or so I'm told.
>
> > Many can, but not with conventional "off the shelf"
> > applications. Disk I/O is usually a major factor,
> > unless you're just beaming data to/from RAM for fun.
>
> RAM-to-RAM is a big application for compute clusters.
>
> AFAIK, most desktops cannot get GbE wirespeed, unless
> their controller is on something faster than a PCI bus.

Some have gigabit down on the motherboard, wired in as PCI-X. What
percentage of desktops have good gigE implementations I
can't answer.

> The usual limit there is around 300 Mbit/s, mostly
> caused by limited PCI burst length and long setup.

I've seen in the neighborhood of 1800 Mbit/s (FDX) on
a variety of PCI-X gigE implementations. The trick
is to open multiple connections and use overlapped/
threading techniques to keep the pipe full.
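For what it's worth, a minimal sketch of that idea in Python (the host
address, port and stream count below are made up for illustration, and it
assumes something on the far end is already listening and discarding the
data -- this is not a benchmark tool, just the multiple-connection trick
in its simplest form):

    import socket
    import threading
    import time

    HOST = "192.0.2.10"    # hypothetical receiver address
    PORT = 5001            # hypothetical port with a data sink listening
    STREAMS = 4            # number of parallel TCP connections
    SECONDS = 10           # how long to transmit
    CHUNK = b"\0" * 65536  # 64 KB per send

    sent = [0] * STREAMS

    def blast(i):
        # Each thread gets its own TCP connection and sends flat out.
        s = socket.create_connection((HOST, PORT))
        deadline = time.time() + SECONDS
        while time.time() < deadline:
            s.sendall(CHUNK)
            sent[i] += len(CHUNK)
        s.close()

    threads = [threading.Thread(target=blast, args=(i,)) for i in range(STREAMS)]
    start = time.time()
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    elapsed = time.time() - start
    print("aggregate: %.0f Mbit/s over %d streams"
          % (sum(sent) * 8 / elapsed / 1e6, STREAMS))

With a single stream you see whatever one connection's window and the
driver allow; adding streams is what keeps the pipe full.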

When you do this on a gig switch with a series of
"pairs", or with a fanout test with multiple clients
all going into one fast server on one of the ports,
each beaming data back and forth wide open, you can
watch the switch melt down in a lot of cases.

This is of course the fallacy of believing snake oil
like Tolly reports. I've seen 8-port gig-E switches
that have passed Tolly testing start dropping link
randomly in minutes under this type of test. I've also
seen cheap $89 5-port gig switches run the same test,
at slightly better throughput for a week solid without
hiccup.

Most of the low-cost high port-count switches (24,
48) will not take kindly to you trying to run all
the ports wide open simultaneously. This has nothing
to do with the presence or absence of a published claim
to be a non-blocking switch.

Further, those that are managed switches will have
their management interfaces cease being responsive
at all under this type of load.

You can achieve the same thing with multicast load
on IGMP switches, they'll work for a brief period
sending the stream only to subscribed ports, then
suddenly start flooding the traffic to all the ports.

Apparently there isn't any money in vendors publishing
REAL stress tests on switches, because far too many
of them would fail.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

Randy Howard wrote:

> You can achieve the same thing with multicast load
> on IGMP switches, they'll work for a brief period
> sending the stream only to subscribed ports, then
> suddenly start flooding the traffic to all the ports.

I can vouch for GSM7312's (and gsm7324's) doing this. Their layer 2
stuff as well - gsm712, fsm726s, etc. Do NOT attempt to push out a GHOST
image over multicast from a gig host if you use these - the switch WILL
melt down.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <1126gtv5bra9lf3@news.supernews.com>,
T. Sean Weintz <strap@hanh-ct.org> wrote:
>Randy Howard wrote:
>
>> You can achieve the same thing with multicast load
>> on IGMP switches, they'll work for a brief period
>> sending the stream only to subscribed ports, then
>> suddenly start flooding the traffic to all the ports.
>
>I can vouch for GSM7312's (and gsm7324's) doing this. Their layer 2
>stuff as well - gsm712, fsm726s, etc. Do NOT attempt to push out a GHOST
>image over multicast from a gig host if you use these - the switch WILL
>melt down.


What's the symptom? Lots of dropped packets? Lockup?

(I'm assuming the references to smoke in this thread are
metaphorical.)

--

a d y k e s @ p a n i x . c o m

Don't blame me. I voted for Gore.
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <nl7121hgb3e8jcp6n0n3chhfoij8f7q9tf@4ax.com>, William P.N.
Smith says...
> adykes@panix.com (Al Dykes) wrote:
> >To be fair, the OP didn't say "desktop", he didn't say anything.
>
> True, we're getting off the original subject. I doubt there's a
> machine in existence that'll do "wire speed" and do anything useful
> with it, though,

If you define "do something useful with it" as processing it all and
then writing it all to disk, then if it won't work right now, it's not
far off. There are several varieties of storage controller/drive
combos that can achieve r/w throughput in excess of 125MB/s. They're
not cheap, and they're not on desktops typically, but it can be done.

Depending upon CPU horsepower and the quality of the network driver,
it can be done. There are some systems that can handle "wire speed"
in both directions, i.e. FDX (2 Gbps).
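(For reference: 1 Gbit/s divided by 8 bits per byte is 125 MB/s in each
direction, so a link saturated both ways moves about 250 MB/s -- which is
where the 2 Gbps full-duplex figure comes from, and why ~125 MB/s of
sustained disk throughput is the bar for keeping up with one direction.)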

> so now we're left wondering how far off "wire speed" we can be and
> still meet the OP's requirements.

Odds are the link is faster than any app likely to be used already.
There are some specific gig lan drivers which generate insanely
high CPU loads for a given throughput, so it's not generically
answerable.

> My 913 megabits was regular desktop machines talking thru two
> D-Link DGS-1005D switches, but the OP wants 120 machines.

IOW, he's serious about it.

> If there are no other criteria and this is a homework assignment,
> then 60 of those at $60 each will satisfy the criteria.

Unlikely. When you start daisy chaining switches, the numbers
don't stand up.

> Of course, in that case you don't even need the switches, so
> just cabling the machines together will work... 8*}

How do you "just cable 120 machines together" without switches ???

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

Randy Howard wrote:

> In article <nl7121hgb3e8jcp6n0n3chhfoij8f7q9tf@4ax.com>, William P.N.
> Smith says...
>> adykes@panix.com (Al Dykes) wrote:
>> >To be fair, the OP didn't say "desktop", he didn't say anything.
>>
>> True, we're getting off the original subject. I doubt there's a
>> machine in existence that'll do "wire speed" and do anything useful
>> with it, though,
>
> If you define "do something useful with it" as processing it all and
> then writing it all to disk, then if it won't work right now, it's not
> far off. There are several varieties of storage controller/drive
> combos that can achieve r/w throughput in excess of 125MB/s. They're
> not cheap, and they're not on desktops typically, but it can be done.
>
> Depending upon CPU horsepower and the quality of the network driver,
> it can be done. There are some systems that can handle "wire speed"
> in both directions, i.e. FDX (2 Gbps).
>
>> so now we're left wondering how far off "wire speed" we can be and
>> still meet the OP's requirements.
>
> Odds are the link is faster than any app likely to be used already.
> There are some specific gig lan drivers which generate insanely
> high CPU loads for a given throughput, so it's not generically
> answerable.
>
>> My 913 megabits was regular desktop machines talking thru two
>> D-Link DGS-1005D switches, but the OP wants 120 machines.
>
> IOW, he's serious about it.
>
>> If there are no other criteria and this is a homework assignment,
>> then 60 of those at $60 each will satisfy the criteria.
>
> Unlikely. When you start daisy chaining switches, the numbers
> don't stand up.
>
>> Of course, in that case you don't even need the switches, so
>> just cabling the machines together will work... 8*}
>
> How do you "just cable 120 machines together" without switches ???

Two NICs each and configure each as a bridge <eg>.

>

--
--John
to email, dial "usenet" and validate
(was jclarke at eye bee em dot net)
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <qLmdnSIR05P2v7zfRVn-uw@giganews.com>,
DELETE_westes@earthbroadcast.com says...
> Clearly the disk and network I/O bottlenecks at the file servers are big.
> But that's another thread.

Right, start with dedicated storage controllers and lots and lots of
spindles. HP SmartArray hardware is a good place to start looking.

> The only thing I'm concerned about in the current thread is how to
> cheaply guarantee that the network itself is not a bottleneck for the
> servers processing information that they bring down from the file servers.

Alacritech seems to have the best CPU load per (whatever unit of
transfer you like) of the current Gigabit ethernet adapters. The
last I looked, they only supported Windows platforms though, which
may or may not be an issue for you. That will help keep the network
I/O from getting in the way of system work being done.

You are probably correct that a large, high-end non-blocking switch
is what you need *IF* everybody is sending and receiving in parallel
all the time. If the workstations are randomly hitting the server,
at intervals, it might not be such a problem. Odds are in such
a scenario that the network will be far less than fully utilized
while the disk controllers (on both the server and workstation
sides) will be firewalled fairly often.

Why not build a small 30-node test bed, using a couple of 16-port
Netgear gig-E switches (about $800 in hardware) and running some
simulated load testing to see where the bottleneck is before buying
an expensive switch only to find out that you should be spending
money on storage hardware instead?
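A crude harness for that kind of test can be a page of Python -- run one
copy in "sink" mode on the test server and one copy per workstation pointed
at it, all at once (the port number and transfer size below are arbitrary;
real testing with your actual 100 GB assets and file copies would be more
representative):

    import socket
    import sys
    import threading
    import time

    PORT = 9000            # arbitrary test port
    CHUNK = 1 << 20        # 1 MB buffer
    NBYTES = 2 * 10**9     # ~2 GB per client per run; scale to taste

    def sink():
        # Run once on the test "file server": accept connections, discard
        # the data, and report per-client throughput.
        srv = socket.socket()
        srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        srv.bind(("", PORT))
        srv.listen(64)

        def drain(conn, addr):
            total, t0 = 0, time.time()
            while True:
                data = conn.recv(CHUNK)
                if not data:
                    break
                total += len(data)
            conn.close()
            print("%s: %.0f Mbit/s"
                  % (addr[0], total * 8 / (time.time() - t0) / 1e6))

        while True:
            conn, addr = srv.accept()
            threading.Thread(target=drain, args=(conn, addr)).start()

    def source(server_ip):
        # Run on each test workstation simultaneously to emulate the
        # "upload the result back to the server" phase.
        buf = b"\0" * CHUNK
        s = socket.create_connection((server_ip, PORT))
        t0, left = time.time(), NBYTES
        while left > 0:
            s.sendall(buf)
            left -= CHUNK
        s.close()
        print("pushed %.0f Mbit/s" % (NBYTES * 8 / (time.time() - t0) / 1e6))

    if __name__ == "__main__":
        sink() if len(sys.argv) == 1 else source(sys.argv[1])

Watching where the per-client numbers collapse as you add senders shows
whether the switch or the server NIC gives out first; swap the data sink
for real file writes to bring the server disks into the picture.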

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig
 
Archived from groups: comp.dcom.lans.ethernet (More info?)

In article <cvvdgl0dr0@news2.newsguy.com>,
jclarke.usenet@snet.net.invalid says...
> Randy Howard wrote:

> >> Of course, in that case you don't even need the switches, so
> >> just cabling the machines together will work... 8*}
> >
> > How do you "just cable 120 machines together" without switches ???
>
> Two NICs each and configure each as a bridge <eg>.

Ugh.

--
Randy Howard (2reply remove FOOBAR)
"Making it hard to do stupid things often makes it hard
to do smart ones too." -- Andrew Koenig