detecting end/length of Ethernet II frame?

Archived from groups: comp.dcom.lans.ethernet

Hello,

How is the end of an Ethernet II frame detected?
The frame has no length field in it, and I'm
wondering how its end is detected? Does the
ethernet adapter detect when voltage transitions
stop? Is there an idle state for the differential
transmitter which is then detected? Something simpler
than that even? The specification appears to include
an interframe gap of 96 bit times -- is part of
the reason for this gap to allow the ethernet adapter
to detect the end of the frame?

Thanks very much!

Jim
jpartan [at] gmail.com
  1. Jim Partan wrote:

    > How is the end of an Ethernet II frame detected?
    > The frame has no length field in it, and I'm
    > wondering how its end is detected? Does the
    > ethernet adapter detect when voltage transitions
    > stop? Is there an idle state for the differential
    > transmitter which is then detected? Something simpler
    > than that even? The specification appears to include
    > an interframe gap of 96 bit times -- is part of
    > the reason for this gap to allow the ethernet adapter
    > to detect the end of the frame?
    >

    The transmitter simply stops, at the end of the frame. The gap allows the
    receivers to recognize that the transmitter has stopped.
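
    To make that concrete, here is a rough Python sketch of the idea
    (purely illustrative -- a real receiver does this in hardware, and
    the function name and idle threshold here are invented):

        BIT_TIME_NS = 100  # one bit time at 10 Mbps

        def deframe(transitions, idle_threshold_ns=2 * BIT_TIME_NS):
            """Split (timestamp_ns, level) line transitions into frames,
            cutting wherever the line stays quiet past the threshold."""
            frames, current, last_ts = [], [], None
            for ts, level in transitions:
                if last_ts is not None and ts - last_ts > idle_threshold_ns:
                    frames.append(current)  # gap seen: previous frame is done
                    current = []
                current.append((ts, level))
                last_ts = ts
            if current:
                frames.append(current)
            return frames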
  2. In article <683821bb.0504281003.18e300cc@posting.google.com>,
    jpartan@gmail.com (Jim Partan) wrote:

    > Hello,
    >
    > How is the end of an Ethernet II frame detected?
    > The frame has no length field in it, and I'm
    > wondering how its end is detected? Does the
    > ethernet adapter detect when voltage transitions
    > stop? Is there an idle state for the differential
    > transmitter which is then detected? Something simpler
    > than that even? The specification appears to include
    > an interframe gap of 96 bit times -- is part of
    > the reason for this gap to allow the ethernet adapter
    > to detect the end of the frame?
    >
    > Thanks very much!
    >

    The encoding is such that there *must* be a low-to-high or high-to-low
    transition in the middle of each bit period. This is how the system
    differentiates between 0 and 1 bits. The receiver clocks in a bit
    whenever a transition occurs. After the transmitter has sent its last
    bit, the line returns to its idle state, there are no more state
    transitions and hence the receiver clocks no more bits into its input
    buffer. The interframe gap is to allow other stations a chance to
    access the network: without it, a station could send back-to-back
    packets indefinitely, and no-one else would get a look-in....
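
    As a rough Python sketch of that per-bit transition scheme
    (illustrative only -- a real PHY works on analogue voltages, these
    helper names are invented, and the 0 = high-to-low, 1 = low-to-high
    convention is the IEEE 802.3 Manchester one):

        def manchester_encode(bits):
            """Return half-bit line levels with a guaranteed mid-bit
            transition: 0 is high-to-low, 1 is low-to-high."""
            levels = []
            for b in bits:
                levels += [1, 0] if b == 0 else [0, 1]
            return levels

        def manchester_decode(levels):
            """Recover bits from half-bit levels; stop as soon as a
            mid-bit transition is missing, i.e. the line has gone idle --
            which is exactly how the receiver sees the end of the frame."""
            bits = []
            for i in range(0, len(levels) - 1, 2):
                first, second = levels[i], levels[i + 1]
                if first == second:  # no transition: transmitter stopped
                    break
                bits.append(1 if (first, second) == (0, 1) else 0)
            return bits

        assert manchester_decode(manchester_encode([1, 0, 1, 1])) == [1, 0, 1, 1]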

    -P.

    --
    Peter Saward
    Centre for Applied Research in Education
    School of Education & Lifelong Learning
    University of East Anglia, Norwich NR4 7TJ, UK.
  3. "Peter Saward" <p.saward@uea.ac.uk> wrote in message
    news:p.saward-12755C.14012806052005@cpca14.uea.ac.uk...
    > In article <683821bb.0504281003.18e300cc@posting.google.com>,
    > jpartan@gmail.com (Jim Partan) wrote:
    >
    > > Hello,
    > >
    > > How is the end of an Ethernet II frame detected?
    > > The frame has no length field in it, and I'm
    > > wondering how its end is detected? Does the
    > > ethernet adapter detect when voltage transitions
    > > stop? Is there an idle state for the differential
    > > transmitter which is then detected? Something simpler
    > > than that even? The specification appears to include
    > > an interframe gap of 96 bit times -- is part of
    > > the reason for this gap to allow the ethernet adapter
    > > to detect the end of the frame?
    > >
    > > Thanks very much!
    > >
    >
    > The encoding is such that there *must* be a low-to-high or high-to-low
    > transition in the middle of each bit period. This is how the system
    > differentiates between 0 and 1 bits. The receiver clocks in a bit
    > whenever a transition occurs.

    this can't be the complete story - more complex encodings are used on
    higher-speed links, such as gigabit ethernet over UTP.

    > After the transmitter has sent its last
    > bit, the line returns to its idle state, there are no more state
    > transitions and hence the receiver clocks no more bits into its input
    > buffer.

    maybe you need to look at this from the perspective of Ethernet "layer 2" -
    the bit transport mechanism has to give an indication of the end of each
    packet - because otherwise the end of a packet doesn't get identified. Peter
    describes the mechanism used on co-ax at 10 Mbps (and probably others).

    > The interframe gap is to allow other stations a chance to
    > access the network: without it, a station could send back-to-back
    > packets indefinitely, and no-one else would get a look-in....

    the other thing to remember is that layer 1 delineates the end of any
    packet - the length field isn't used for that purpose, since only some
    packets have that field.
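
    To illustrate that last point, a small Python sketch (the helper is
    hypothetical, and the offsets assume an untagged frame): once layer 1
    has handed up a complete frame, software tells the two formats apart
    by the 16-bit field after the source address -- values of 0x0600
    (1536) and above are an Ethernet II EtherType, values up to 1500 are
    an 802.3 length.

        import struct

        def classify(frame: bytes):
            """Return ('ethernet2', ethertype) or ('802.3', length)."""
            (type_or_len,) = struct.unpack_from("!H", frame, 12)  # after the two MACs
            if type_or_len >= 0x0600:
                return ("ethernet2", type_or_len)
            return ("802.3", type_or_len)

        # e.g. a made-up IPv4 frame: 6-byte dst + 6-byte src + EtherType 0x0800
        hdr = bytes(6) + bytes(6) + b"\x08\x00"
        print(classify(hdr + b"payload"))  # ('ethernet2', 2048)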
    >
    > -P.
    >
    > --
    > Peter Saward
    > Centre for Applied Research in Education
    > School of Education & Lifelong Learning
    > University of East Anglia, Norwich NR4 7TJ, UK.
    --
    Regards

    Stephen Hope - return address needs fewer xxs
  4. stephen wrote:

    (someone wrote)

    >>The encoding is such that there *must* be a low-to-high or high-to-low
    >>transition in the middle of each bit period. This is how the system
    >>differentiates between 0 and 1 bits. The receiver clocks in a bit
    >>whenever a transition occurs.

    > this can't be the complete story - more complex encodings are used
    > on higher-speed links, such as gigabit ethernet over UTP.

    >>bit, the line returns to its idle state, there are no more state
    >>transitions and hence the receiver clocks no more bits into its input
    >>buffer.

    > maybe you need to look at this from the perspective of Ethernet "layer 2" -
    > the bit transport mechanism has to give an indication of the end of each
    > packet - because otherwise the end of a packet doesn't get identified. Peter
    > describes the mechanism used on co-ax at 10 Mbps (and probably others).

    Well, 10 Mbps is all that Ethernet II allowed.

    It includes the preamble so that clock recovery logic (such
    as PLLs) has time to synchronize to the data stream.
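
    As a small illustrative Python sketch of that preamble (the helper
    name is invented): seven bytes of 0x55 followed by the start-frame
    delimiter 0xD5, transmitted least-significant bit first, give the
    PLL a steady 1010... pattern that ends in '11' just before the real
    data begins.

        def preamble_bits():
            wire = []
            for byte in [0x55] * 7 + [0xD5]:
                for i in range(8):  # Ethernet transmits LSB first
                    wire.append((byte >> i) & 1)
            return wire

        bits = preamble_bits()
        assert bits[:-2] == [1, 0] * 31  # steady alternation for clock lock
        assert bits[-2:] == [1, 1]       # the SFD's closing '11'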

    Yes, later standards use synchronous data streams which identify
    the end of the packet in a different way, but that isn't Ethernet II.

    -- glen
  5. Jim Partan wrote:
    > Hello,
    >
    > How is the end of an Ethernet II frame detected?
    > The frame has no length field in it, and I'm
    > wondering how its end is detected? Does the
    > ethernet adapter detect when voltage transitions
    > stop? Is there an idle state for the differential
    > transmitter which is then detected? Something simpler
    > than that even? The specification appears to include
    > an interframe gap of 96 bit times -- is part of
    > the reason for this gap to allow the ethernet adapter
    > to detect the end of the frame?
    >
    > Thanks very much!
    >
    > Jim
    > jpartan [at] gmail.com

    Doesn't the voltage on the cable drop to zero (no carrier, nothing at
    all) when the frame is done? At least for Ethernet II? Somewhere along
    the line I got it into my head that that is how the end of frame is
    signalled in 10 Mbps Ethernet.
  6. Quote:


    Yes, later standards use synchronous data streams which identify
    the end of the packet in a different way, but that isn't Ethernet II.

    -- glen


    What is the mechanism in Fast Ethernet?