Archived from groups: comp.dcom.lans.ethernet
In article <cslu1j$f1g$1@news-int2.gatech.edu>,
Bob <bob.public@mindspring.com> wrote:
>
> I think there is a lot of misunderstanding (or maybe different
> understandings) about what is really meant by real-time. Most seem to
> think that real-time means high speed or high performance. What it
> really means is *guaranteed* performance.
If by "guaranteed" you mean 100.0%, then no communications system in the
real world can meet the criterion. If you are willing to live with less
than "guaranteed," then the question becomes what level of imperfection
is acceptable (i.e., "how many nines?").
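To put a rough number on the "how many nines" question, here is a small sketch; the frame rate and the delivery targets are assumed purely for illustration, not taken from any particular network:

```python
# Illustrative only: translate a "nines" delivery target into an
# expected number of lost frames per day, for an assumed frame rate.
def lost_frames_per_day(nines: int, frames_per_second: float) -> float:
    loss_fraction = 10.0 ** -nines      # e.g. five nines -> 1e-5 loss
    return loss_fraction * frames_per_second * 86_400

# A station sending an assumed 1000 frames/s at "five nines"
# delivery still expects on the order of 800+ lost frames per day.
print(lost_frames_per_day(5, 1000.0))
```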
> Most are soft real-time systems that can (or should) easily tolerate an
> occasional dropped packet.
Not "can" or "should", but *must*. Since communication errors are
inevitable (ultimately, it comes down to thermal noise), there will
always be some residual level of packet loss.
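The residual-loss point can be sketched numerically: even an excellent bit error rate leaves a nonzero frame loss rate. The BER and frame size below are assumed values for illustration only:

```python
# Sketch: residual frame error rate from an assumed bit error rate.
# A frame is discarded if any one of its bits is corrupted.
def frame_error_rate(ber: float, frame_bits: int) -> float:
    return 1.0 - (1.0 - ber) ** frame_bits

# Even at an assumed BER of 1e-10, a 12,000-bit (1500-byte) frame
# has roughly a 1-in-a-million chance of being discarded; the loss
# rate is small, but it is never exactly zero.
fer = frame_error_rate(1e-10, 12_000)
print(f"{fer:.3e}")
```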
> I, like a lot of developers, successfully
> use ethernet for real-time applications every day.
Which was exactly my point. The claim that Ethernet and CSMA/CD are not
suitable for real-time communications is belied by the fact that we use
them for such purposes all the time.
> Most of the time
> using something like token ring for deterministic performance just ain't
> worth the hassle.
Ignoring for the moment the implied assumption that somehow
"deterministic performance" is a requirement of a real-time application,
there is a common misconception that Token Ring is in fact
deterministic, in that one can predict its delay and/or throughput in
advance of frame transmission. This is pure propaganda once used by IBM
(and others) to try to dissuade users from deploying Ethernet networks,
under the claim that somehow you would get more reliable and predictable
macro-performance from Token Ring (i.e., performance as seen by the user
of an application).
Token Ring is "deterministic" only in a theoretical, abstract sense,
using "textbook-style" assumptions. In reality, what an application
cares about is the time delay between queueing a frame for transmission
on the LAN, and reception of that frame by the intended recipient. Just
as with Ethernet, Token Ring cannot put any "guaranteed bound" on this
time, due to a number of factors:
(1) As discussed above, frames will always be discarded due to
communications errors. The error rate on a Token Ring LAN is essentially
the same as for an Ethernet LAN, as they use similar encoding, cables,
etc.
(2) While textbooks all discuss the "bounded time" for circulation of a
token, they rarely discuss the disruption caused by insertion/removal of
stations on the ring. Since any practical network will have workstations
regularly powering up/down during operation, there will be numerous
disruptions of the ring during application execution. Each such
disruption will normally require a "ring recovery", whereby a token must
be regenerated, a new Active Monitor perhaps selected, etc.
This will incur a delay orders of magnitude greater than the "expected"
token rotation time.
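The "orders of magnitude" comparison can be made concrete with rough arithmetic; every figure below (station count, per-station latency, recovery time) is a hypothetical assumption for the sketch, not a measured value:

```python
# Rough illustration with assumed numbers: a nominal token rotation
# versus a single ring-recovery event. All figures are hypothetical.
stations = 50
per_station_latency_s = 2.5e-6      # assumed station + propagation delay
token_rotation_s = stations * per_station_latency_s   # ~125 microseconds

ring_recovery_s = 0.5               # assumed: token regeneration,
                                    # Active Monitor contention, etc.

# The recovery event dwarfs the "expected" rotation by a factor
# in the thousands under these assumptions.
print(ring_recovery_s / token_rotation_s)
```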
(3) Token Ring advocates like to tout the availability of low-level
priority access mechanisms, whereby certain stations or applications can
assert a higher claim to use a token than others. Of course, any
application that is not running at the highest priority level can never
ensure that it will *ever* see a free token, much less one within some
bounded time. That is, "deterministic" behavior exists (if it exists at
all) only for the highest priority level. Since *everyone* wants high
priority (who ever says that their network application is less important
than someone else's?), the result is that all applications always run at
the highest priority, making the entire priority scheme moot.
(4) Finally (and perhaps most important), it is rarely the case that
a given station has only *one* active application using the network at
any time. All network applications running on the station are placing
frames into the transmit queue for the network interface, independently
from each other. Thus, even if: (a) there are no errors, (b) the ring is
not disrupted, and (c) the application is running at the highest
priority, putting a frame on the transmit queue does NOT ensure that it
will be transmitted (much less received) within the "bounded" token
rotation time. All of the frames ahead of it in the queue (placed there
by other applications) will be transmitted first.
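Point (4) amounts to simple queueing arithmetic. A sketch, with the queue depth, frame size, ring speed, and rotation time all assumed for illustration:

```python
# Sketch of point (4): delay seen by a frame that is queued behind
# others in the transmit queue. All parameters assumed for illustration.
def queueing_delay_s(frames_ahead: int, frame_bits: int,
                     rate_bps: float, token_rotation_s: float) -> float:
    # Each frame ahead must wait for a token and then be transmitted.
    per_frame = token_rotation_s + frame_bits / rate_bps
    return frames_ahead * per_frame

# An assumed 20 queued 12,000-bit frames on a 16 Mb/s ring with an
# assumed 125-microsecond token rotation: the "bounded" rotation time
# says nothing about this multi-millisecond wait.
d = queueing_delay_s(20, 12_000, 16e6, 125e-6)
print(f"{d * 1000:.2f} ms")
```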
It becomes clear that any "determinism" theoretically present in Token
Ring is useless in a practical sense, since applications cannot use such
characteristics to their advantage. It may make for nice classroom
discussions, but the effects are lost in real-world systems.
--
Rich Seifert Networks and Communications Consulting
21885 Bear Creek Way
(408) 395-5700 Los Gatos, CA 95033
(408) 228-0803 FAX
Send replies to: usenet at richseifert dot com