Maximum size of Ethernet frame

mark

Distinguished
Mar 30, 2004
2,613
0
20,780
Archived from groups: comp.dcom.lans.ethernet (More info?)

I am wondering why the maximum size of the Ethernet data payload was
restricted to 1500 bytes by the standard even though the length field
is 2 bytes? (1500 is not even a power of 2!).

thanks,
Steve
 

In article <c8e879a1.0411110640.6dd76c20@posting.google.com>,
steven_mark_99@yahoo.com (Mark) wrote:

> I am wondering why the maximum size of the Ethernet data payload was
> restricted to 1500 bytes by the standard even though the length field
> is 2 bytes? (1500 is not even a power of 2!).
>

The 1500 byte payload limit was somewhat arbitrary. *Some* upper limit
is needed for a number of reasons:

-The longer the maximum frame allowed, the longer the maximum delay on a
shared medium. All stations must wait for a frame-in-progress to
complete before attempting their own transmission; longer frames mean
longer wait times.

-Longer frames increase the probability that one or more bits in the
frame will be received in error, necessitating retransmission of the
frame. (In the extreme case, an infinitely-long frame is *guaranteed* to
contain bit errors, ensuring that it would *never* be correctly
received!)

-A longer maximum frame increases the memory requirement for a NIC using
a simple, fixed buffer design. This is the *real* reason for the 1500
byte limit; at the time we designed it (1979), buffer memory was much
more expensive than it is now, and DMA controllers were too complex to
be implemented in anything less than a full-custom chip.
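The second point above, the error-rate argument, can be sketched with a little arithmetic. This is only an illustration; the BER value is deliberately pessimistic and does not come from the original post:

```python
# Sketch of the frame-length / error-rate trade-off: assuming independent
# bit errors at a fixed bit error rate (BER), the chance that a frame
# arrives completely error-free shrinks as the frame grows.

def p_error_free(frame_bytes, ber):
    """Probability that every bit of an n-byte frame is received correctly."""
    return (1.0 - ber) ** (frame_bytes * 8)

# Illustrative BER of 1e-6 (far worse than real Ethernet, to make the
# effect visible at these sizes):
for size in (64, 1500, 9000, 65535):
    print(f"{size:6d} bytes: {p_error_free(size, 1e-6):.4f}")
```

As the frame length goes to infinity, this probability goes to zero, which is the "infinitely-long frame" extreme case.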


--
Rich Seifert Networks and Communications Consulting
21885 Bear Creek Way
(408) 395-5700 Los Gatos, CA 95033
(408) 228-0803 FAX

Send replies to: usenet at richseifert dot com
 

Mark wrote:

> I am wondering why the maximum size of the Ethernet data payload was
> restricted to 1500 bytes by the standard even though the length field
> is 2 bytes? (1500 is not even a power of 2!).
>

Beyond 1500, it is no longer a length field (1518, IIRC, is the maximum
total frame size, including headers and FCS). If 1536 (0x0600) or greater,
it's a type field; the values in between are left undefined.
 

Rich Seifert wrote:

(snip regarding ethernet 1500 byte limit)

> The 1500 byte payload limit was somewhat arbitrary. *Some* upper limit
> is needed for a number of reasons:

> -The longer the maximum frame allowed, the longer the maximum delay on a
> shared medium. All stations must wait for a frame-in-progress to
> complete before attempting their own transmission; longer frames mean
> longer wait times.

It would seem, then, that 1024 or 2048 would have been more convenient for
some systems, but that wouldn't leave any space for the headers of
whatever protocol is in use.

Also, assuming hardware buffers made of commercial SRAM, which tends to
come in powers of two, there is controller overhead. A 2048-byte payload
would require a RAM buffer of at least 4096 bytes.

1500 is convenient in allowing 1024 bytes of data plus some layers of
other headers.
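The power-of-two buffer point can be sketched numerically (1518 here assumes an untagged frame: 14-byte header + 1500-byte payload + 4-byte FCS):

```python
# With SRAM buffers sized in powers of two, the buffer must hold the
# largest complete frame. A 1500-byte payload (1518-byte frame) still
# fits in 2 KB, while a 2048-byte payload would push the frame past
# 2 KB and force a 4 KB buffer.

def smallest_pow2_buffer(frame_bytes):
    """Smallest power-of-two buffer that holds a frame of this size."""
    size = 1
    while size < frame_bytes:
        size *= 2
    return size

print(smallest_pow2_buffer(1500 + 18))  # 2048
print(smallest_pow2_buffer(2048 + 18))  # 4096
```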

-- glen
 

Bonjour Rich,

Interesting reply; just one comment on it.

> -Longer frames increase the probability that one or more bits in the
> frame will be received in error, necessitating retransmission of the
> frame. (In the extreme case, an infinitely-long frame is *guaranteed* to
> contain bit errors, ensuring that it would *never* be correctly
> received!)

There is no difference between one error in the payload of one Ethernet
frame and one error in the payloads of 50 successive Ethernet frames
carrying the same application-layer message. At the application layer, the
reassembled message could be discarded either way.

The problem with the global Ethernet stack (SNAP > LLC > MAC > xxBasex) is
that we don't have a complete adaptation layer. The Ethernet frame limit is
imposed on layer 3 because there is no way to detect a floating layer-3
header in the Ethernet payload (each SNAP payload always begins with a
level-3 header).

Regards,
Michelot
 

Michelot wrote:

> There is no difference between one error in the payload of one Ethernet
> frame and one error in the payloads of 50 successive Ethernet frames
> carrying the same application-layer message. At the application layer,
> the reassembled message could be discarded either way.
>

I thought the TCP sliding window acknowledged the last point of contiguous
successful transmission, so any retransmission could start from that point.
If only one or a few packets were lost, then as soon as they're received
correctly, the ACKs could jump to the end of the successfully received
data.

i.e. If only packet 6 of 10 were lost, the ACKs would initially show 5;
then, as soon as 6 is successfully received, the ACK could jump to show
that all segments up to 10 were received correctly.
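That cumulative-acknowledgement behaviour can be sketched roughly like this (whole segment numbers stand in for TCP's byte-based sequence numbers):

```python
# Rough sketch of TCP's cumulative acknowledgement: the ACK names the
# highest point of contiguous data received, so filling a single hole
# advances it past everything already buffered beyond the gap.

def cumulative_ack(received):
    """Highest n such that segments 1..n have all been received."""
    n = 0
    while n + 1 in received:
        n += 1
    return n

received = {1, 2, 3, 4, 5, 7, 8, 9, 10}  # segment 6 was lost
print(cumulative_ack(received))           # 5
received.add(6)                           # retransmission of 6 arrives
print(cumulative_ack(received))           # 10 -- the ACK jumps
```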
 

In article <41948af1$0$10649$8fcfb975@news.wanadoo.fr>,
"Michelot" <mhostettler@wanadooNOSPAM.fr> wrote:

>
> There is no difference between one error in the payload of one Ethernet
> frame and one error in the payloads of 50 successive Ethernet frames
> carrying the same application-layer message. At the application layer,
> the reassembled message could be discarded either way.
>

There is a BIG difference. Let's say that I want to send 1 million bytes
of application data. If I send it as 500 blocks of 2K bytes each, and
there is an error in one of the frames, a single 2K block must be
retransmitted (typically, at the Transport layer).

If instead I send the data as a single message of 1 million bytes and
there is an error in the frame (which I agree has the same probability
of containing an error as the 500 blocks of 2K bytes), then I must
retransmit the ENTIRE 1 million bytes over again. That's a severe
penalty for a single bit error.
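The penalty Rich describes, as back-of-the-envelope arithmetic (the 2K block size is the one from his example):

```python
# Same total data, same single corrupted frame: the retransmission cost
# scales with the unit of retransmission.

TOTAL = 1_000_000   # application message, bytes
BLOCK = 2_048       # transport-layer block size, as in the example

resend_small = BLOCK   # blocked transfer: resend only the damaged block
resend_large = TOTAL   # single giant message: resend everything

print(resend_large // resend_small)  # 488 -- times more data resent
```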

> The problem with the global Ethernet stack (SNAP > LLC > MAC > xxBasex)
> is that we don't have a complete adaptation layer. The Ethernet frame
> limit is imposed on layer 3 because there is no way to detect a floating
> layer-3 header in the Ethernet payload (each SNAP payload always begins
> with a level-3 header).
>

I am not sure what you are talking about here. SNAP and LLC are rarely
used, particularly not in a TCP/IP context. (They are used for things
like AppleTalk, and NetBIOS/NetBEUI, but these are minor players today,
relative to TCP/IP.)

In the more typical IP-over-Ethernet scheme, we use Type encapsulation,
and the IP header appears immediately following the Protocol Type field;
it does not "float".


--
Rich Seifert Networks and Communications Consulting
21885 Bear Creek Way
(408) 395-5700 Los Gatos, CA 95033
(408) 228-0803 FAX

Send replies to: usenet at richseifert dot com
 

"Rich Seifert" <usenet@richseifert.com.invalid> wrote:

> "Michelot" <mhostettler@wanadooNOSPAM.fr> wrote:
>
>> The problem with the global Ethernet stack (SNAP > LLC > MAC > xxBasex)
>> is that we don't have a complete adaptation layer. The Ethernet frame
>> limit is imposed on layer 3 because there is no way to detect a floating
>> layer-3 header in the Ethernet payload (each SNAP payload always begins
>> with a level-3 header).
>>
>
> I am not sure what you are talking about here. SNAP and LLC are rarely
> used, particularly not in a TCP/IP context. (They are used for things
> like AppleTalk and NetBIOS/NetBEUI, but these are minor players today,
> relative to TCP/IP.)
>
> In the more typical IP-over-Ethernet scheme, we use Type encapsulation,
> and the IP header appears immediately following the Protocol Type field;
> it does not "float".

I think what Michelot is saying does not really change whether you use
type or length formats. I think he's saying that when a network-layer
protocol is layered over Ethernet (or, for that matter, any other IEEE
802.x LAN or FDDI), the beginning of the network-layer packet must occur
immediately following the Ethernet overhead. Unlike, for example, SONET,
where the next upper-layer payload can begin anywhere in the SONET frame
and can end anywhere in that same frame or any subsequent frame.

But this is because Ethernet and other 802 LANs are not meant to provide
an isochronous train of frames, as SONET does. Link layer transfers are
asynchronous in IEEE 802 LANs. So an Ethernet frame is only created when
there's a network layer packet ready to go.

Even so, it still helps, on slower links, to keep the network-layer
packets somewhat short, to detect and recover from corrupted packets more
quickly. It also helps in reducing latency in live media streams. This is
true even if the IP or other layer-3 packets are sent over a slow version
of SONET.

Bert
 

Bonjour James,

I agree with you in the case of TCP protocol and lost IP datagrams.

> i.e. If only packet 6 of 10 were lost, the ACKs would initially show 5;
> then, as soon as 6 is successfully received, the ACK could jump to show
> that all segments up to 10 were received correctly.

Just one point. You could also say: "as soon as a segment including the
data of the original 6 (it will perhaps be the original 6 and 7 and part
of 8, depending on the MSS length) is successfully received, the receiver
extracts the missing part and acknowledges up to the original 10". The
retransmitted segment could have a size different from the original one:
the buffer has more data by then. We assume we are not limited by the
window.

I will also agree with you in the case of TCP and a wrong checksum. You
remind me that the TCP checksum indeed covers the whole TCP segment plus
some IP elements (IP addresses, protocol number, segment length). And the
case of a wrong checksum is similar to the case of lost datagram(s).

But we can have an application that simply uses UDP. The UDP datagram
length is not restricted by the MSS parameter, so we can have e.g. a UDP
payload of 65527 bytes. At layer 3, the 65535-byte SDU gives 45 IP
fragments. Each IP fragment except the last carries 1480 bytes of IP
payload, giving a MAC payload of 1500 bytes.

So an application block or message of, theoretically, 65527 bytes can be
carried in 45 successive Ethernet frames. That is what I wanted to say.
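That arithmetic, sketched (a 20-byte IPv4 header with no options is assumed; the first fragment's 1480 bytes of IP payload include the 8-byte UDP header):

```python
# A maximum-size UDP datagram fragmented for a 1500-byte Ethernet MTU.

import math

MTU = 1500
IP_HDR = 20       # IPv4 header, no options
UDP_HDR = 8

udp_payload = 65527                  # max UDP data (16-bit length field)
ip_payload = udp_payload + UDP_HDR   # 65535 bytes for IP to fragment
per_fragment = MTU - IP_HDR          # 1480 bytes of IP payload per frame

print(math.ceil(ip_payload / per_fragment))  # 45 Ethernet frames
```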

Regards,
Michelot
 

"Michelot" <mhostettler@wanadooNOSPAM.fr> wrote:

> But we can have an application that simply uses UDP. The UDP datagram
> length is not restricted by the MSS parameter, so we can have e.g. a UDP
> payload of 65527 bytes. At layer 3, the 65535-byte SDU gives 45 IP
> fragments. Each IP fragment except the last carries 1480 bytes of IP
> payload, giving a MAC payload of 1500 bytes.
>
> So an application block or message of, theoretically, 65527 bytes can be
> carried in 45 successive Ethernet frames. That is what I wanted to say.

Yes, this can be done, but it's not a good idea.

The IP layer will fragment packets that exceed the MTU (maximum
transmission unit) of the link layer. Since TCP figures out its own MTU,
the IP fragmentation feature would normally be applied only to UDP.

But then at the destination side, the fragments have to be recombined to
reform the original layer 3 packet. If one of the fragments is
corrupted, all those other 44 fragments will have to be discarded.

If the transfer is streaming media using UDP/IP, this means that what
might have been a small glitch in the video or audio now becomes a big
glitch.

If the transfer is a periodic series of data updates, then you'll lose a
lot of good information when just one of the 45 frames is corrupted. A
better idea is to design UDP datagrams that fit within the MTU of the
link layer, and organize the data you want to send in logical sets that
can fit within the MTU. That way, you can still make use of the good
packets received.
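Bert's sizing rule, sketched (IPv4 with no options is assumed):

```python
# Largest UDP payload that avoids IP fragmentation on standard Ethernet.

MTU = 1500      # Ethernet payload limit
IPV4_HDR = 20   # IPv4 header, no options
UDP_HDR = 8

max_safe_payload = MTU - IPV4_HDR - UDP_HDR
print(max_safe_payload)  # 1472 bytes of application data per datagram
```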

Bert
 

Bonsoir Rich,

> If instead I send the data as a single message of 1 million bytes and
> there is an error in the frame (which I agree has the same probability
> of containing an error as the 500 blocks of 2K bytes), then I must
> retransmit the ENTIRE 1 million bytes over again. That's a severe
> penalty for a single bit error.

Please see my reply to James. Today, many different applications exist,
and they don't necessarily segment into small data blocks. When two
computers communicate on the same LAN, using UDP is simpler and quicker
than TCP, and I think the BER is better than 10^-9. At 10^-10, we have one
error per 1250 million bytes, around one error per 850,000 Ethernet
frames. On a 17,500 km path, the problem would be different.
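Those numbers work out at a BER of 10^-10, i.e. one errored bit per 10^10 bits (the 1472-byte figure assumes full-size UDP data per frame):

```python
# Mean data between bit errors at BER = 1e-10, and how many full-size
# UDP-over-Ethernet frames (1472 data bytes each) that represents.

ber = 1e-10
bytes_per_error = 1 / (ber * 8)            # 1.25e9 bytes = 1250 million
frames_per_error = bytes_per_error / 1472  # roughly 850,000 frames

print(int(bytes_per_error), round(frames_per_error))
```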

> I am not sure what you are talking about here. SNAP and LLC are rarely
> used, particularly not in a TCP/IP context. (They are used for things
> like AppleTalk, and NetBIOS/NetBEUI, but these are minor players today,
> relative to TCP/IP.)

You can see RFC 1042 (SNAP encapsulation), which obsoletes RFC 948
(without SNAP).

> In the more typica, IP-over-Ethernet scheme, we use Type encapsulation,
> and the IP header appears immediately following the Protocol Type field;
> it does not "float".

Yes; and it's not the case in ATM, owing to the AAL (but we have
redundancy there, of course).

Regards,
Michelot
 

In article <4194f35a$0$22095$8fcfb975@news.wanadoo.fr>,
"Michelot" <mhostettler@wanadooNOSPAM.fr> wrote:

> > I am not sure what you are talking about here. SNAP and LLC are rarely
> > used, particularly not in a TCP/IP context. (They are used for things
> > like AppleTalk, and NetBIOS/NetBEUI, but these are minor players today,
> > relative to TCP/IP.)
>
> You can see RFC1042 (SNAP encapsulation) that obsoletes RFC948 (without
> SNAP).
>

I am quite familiar with RFC 1042. What I am saying is, on Ethernet, we
don't use it. The standards *allow* it, but in practice it is not done.
RFC 1042 was developed to allow the use of Ethernet-style "type" fields
on systems that do not directly support it (e.g., IEEE 802.5 Token Ring).

"In theory, theory and practice are the same.
In practice, they are not."


--
Rich Seifert Networks and Communications Consulting
21885 Bear Creek Way
(408) 395-5700 Los Gatos, CA 95033
(408) 228-0803 FAX

Send replies to: usenet at richseifert dot com
 

Rich,

> I am quite familiar with RFC 1042. What I am saying is, on Ethernet, we
> don't use it. The standards *allow* it, but in practice it is not done.
> RFC 1042 was developed to allow the use of Ethernet-style "type" fields
> on systems that do not directly support it (e.g., IEEE 802.5 Token Ring).

I just went back to look at 2 old Ethereal captures, and indeed, they use
type fields. Four years ago, I worked on the Alcatel LMDS product (WLL),
and I was used to seeing LLC/SNAP for the commissioning through an
Ethernet link. My belief that it was done like that everywhere was
reinforced by RFC 1042.

> "In theory, theory and practice are the same.
> In practice, they are not."

If we cannot completely trust the standards, where are we going...?

Regards,
Michelot
 

Bonsoir Alfred,

> The IP layer will fragment packets that exceed the MTU (maximum
> transmission unit) of the link layer. Since TCP figures out its own MTU,
> the IP fragmentation feature would normally be applied only to UDP.

Ok

> But then at the destination side, the fragments have to be recombined to
> reform the original layer 3 packet. If one of the fragments is
> corrupted, all those other 44 fragments will have to be discarded.

Yes, and we keep in mind that this discarding happens at layer 4, because
the IP checksum covers only the header. So, you're right.

> If the transfer is streaming media using UDP/IP, this means that what
> might have been a small glitch in the video or audio now becomes a big
> glitch.

Terrible if it occurs during a picture of a beautiful girl!
We see the same in ATM with AAL-5: when a cell has to be discarded, due to
queue congestion, the loss is extended to the end of the AAL-5 frame (the
PPD mechanism) or to the complete next AAL-5 frame (EPD).

> If the transfer is a periodic series data updates, then you'll lose a
> lot of good information when just one of 45 frames was corrupted. A
> better idea is to design UDP datagrams that fit within the MTU of the
> link layer, and organize the data you want to send in logical sets that
> can fit within the MTU. That way, you can still make use of the good
> packets received.

Yes, it depends on the application service.
I believe that it is not UDP itself that fits to the MTU or MSS; that
control is done by its clients: either the transport sublayer RTP or the
application layer directly.

Regards,
Michelot
 

There is one common use of LLC/SNAP, but it is for ATM encapsulation.

> I am quite familiar with RFC 1042. What I am saying is, on Ethernet, we
> don't use it. The standards *allow* it, but in practice it is not done.
> RFC 1042 was developed to allow the use of Ethernet-style "type" fields
> on systems that do not directly support it (e.g., IEEE 802.5 Token Ring).

With ADSL, some operators here are using LLC/SNAP encapsulation. It is
done using an RFC 2684 bridge:

(1) in the LLC encapsulation structure: IP > LLC/SNAP with OUI=0x00-00-00 >
MAC > LLC/SNAP with OUI=0x00-80-C2 > AAL-5
(2) or in the VCC multiplexing structure: the same without the last
LLC/SNAP.

Regards,
Michelot
 

Michelot wrote:

>
> There is a common use of LLC/SNAP, but for the ATM encapsulation.
>
>> I am quite familiar with RFC 1042. What I am saying is, on Ethernet, we
>> don't use it. The standards *allow* it, but in practice it is not done.
>> RFC 1042 was developed to allow the use of Ethernet-style "type" fields
>> on systems that do not directly support it (e.g., IEEE 802.5 Token Ring).
>
> With ADSL, some operators here are using LLC/SNAP encapsulation. It is
> done by using RFC2684 bridge:

Read what he said, "On Ethernet". ADSL is not Ethernet. You might want to
look up Rich's posting history before you argue with him. While it's
possible for Rich to be wrong, if he tells you something about Ethernet
that is different from what you believe it's a good idea to examine that
belief very carefully before you go on.

> (1) in the LLC encapsulation structure: IP > LLC/SNAP with OUI=0x00-00-00 >
> MAC > LLC/SNAP with OUI=0x00-80-C2 > AAL-5
> (2) or in the VCC multiplexing structure: without the last LLC/SNAP.
>
> Regards,
> Michelot

--
--John
Reply to jclarke at ae tee tee global dot net
(was jclarke at eye bee em dot net)
 

Bonjour James,

> Read what he said, "On Ethernet". ADSL is not Ethernet. You might want to
> look up Rich's posting history before you argue with him.

Sorry, but understand the context: it's the ADSL technology, not the ADSL
physical layer. Some ADSL technologies carry LLC/MAC frames standardized
in IEEE 802.3. Perhaps you are confusing things: Ethernet is not only the
physical layer, like 10Base-T or 100Base-FX. The Ethernet layer 2 can be
encapsulated in other protocols, like ATM or GFP.

Are you convinced, or do you need further elaboration?

> While it's
> possible for Rich to be wrong, if he tells you something about Ethernet
> that is different from what you believe

My objective is understanding: helping the participants sometimes, and
trying to make progress myself and, perhaps, help others progress too. As
you can see, English is not my native language; I do what I can with my
words. And, sorry for your misinterpretation, but it's not a competitive
sport with a winner and a loser.

Thanks for your statement.
Regards,
Michelot
 


Rich Seifert <usenet@richseifert.com.invalid> wrote in message news:<usenet-AC25AD.17163211112004@news.isp.giganews.com>...
> In article <c8e879a1.0411110640.6dd76c20@posting.google.com>,
> steven_mark_99@yahoo.com (Mark) wrote:
>
> > I am wondering why the maximum size of the Ethernet data payload was
> > restricted to 1500 bytes by the standard even though the length field
> > is 2 bytes? (1500 is not even a power of 2!).
> >
>
> The 1500 byte payload limit was somewhat arbitrary. *Some* upper limit
> is needed for a number of reasons:
>
> -The longer the maximum frame allowed, the longer the maximum delay on a
> shared medium. All stations must wait for a frame-in-progress to
> complete before attempting their own transmission; longer frames mean
> longer wait times.
>
> -Longer frames increase the probability that one or more bits in the
> frame will be received in error, necessitating retransmission of the
> frame. (In the extreme case, an infinitely-long frame is *guaranteed* to
> contain bit errors, ensuring that it would *never* be correctly
> received!)
>
> -A longer maximum frame increases the memory requirement for a NIC using
> a simple, fixed buffer design. This is the *real* reason for the 1500
> byte limit; at the time we designed it (1979), buffer memory was much
> more expensive than it is now, and DMA controllers were too complex to
> be implemented in anything less than a full-custom chip.

I understand the reasons for having to define a maximum-size Ethernet
frame (as we do for a minimum size, which actually doesn't make sense
now that Ethernet is no longer CSMA/CD). But I was curious to know why
1500. Well, if it was arbitrary, then probably that's it!

thanks
Mark