PCI-Express over Cat6

Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532

Yousuf Khan

--
Humans: contact me at ykhan at rogers dot com
Spambots: just reply to this email address ;-)
  1. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote in message
    news:0m8mc.409548$2oI1.297846@twister01.bloor.is.net.cable.rogers.com...
    >
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    >
    > Yousuf Khan
    >

    Nice... too bad all of our cabling is Cat5.
  2. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    <news.tally.bbbl67@spamgourmet.com> wrote:

    >http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    >
    > Yousuf Khan

    <yawn>

    While the world pushes tighter integration, who does TI think is going to pile
    on to a proprietary way to split a system into chunks?

    I love the bit about remoting HID devices.
    Yeah, there's a high-throughput market to exploit...

    /daytripper (everything dumb is new again in Texas ;-)
  3. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    Judd wrote:
    > "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote in message
    >>
    >
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    >
    > Nice... too bad all of our cabling is Cat5.

    Well, I'm sure the motherboard makers will provide you with some Cat6 to
    connect your PCI-E devices remotely with. :-)

    Yousuf Khan
  4. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    daytripper wrote:
    > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    >
    >>
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    >
    > <yawn>
    >
    > I love the bit about remoting HID devices.
    > Yeah, there's a high-throughput market to exploit...

    Maybe it's for the really, really, really fast typers? :-)

    I guess in their haste to get a press release out they forgot that this sort
    of job is already done by USB?

    I'm sure they have much more important ideas in mind behind it, but none of
    which really excite nor matter to typical home users. Things like clustering
    interconnects or remote storage devices.

    > While the world pushes tighter integration, who does TI think is
    > going to pile on to a proprietary way to split a system into chunks?

    As I said, maybe they have some really big ideas, they just weren't smart
    enough to make it sound exciting on a press release. :-)

    Yousuf Khan
  5. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    X-No-Archive: yes


    "Yousuf Khan" <news.20.bbbl67@spamgourmet.com> wrote in message
    news:jjkmc.415829$2oI1.158440@twister01.bloor.is.net.cable.rogers.com...
    > daytripper wrote:
    > > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    > >
    > >>
    >
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    > >
    > > <yawn>
    > >
    > > I love the bit about remoting HID devices.
    > > Yeah, there's a high-throughput market to exploit...
    >
    > Maybe it's for the really, really, really fast typers? :-)
    >
    > I guess in their haste to get a press release out they forgot that this
    sort
    > of job is already done by USB?
    >
    > I'm sure they have much more important ideas in mind behind it, but none
    of
    > which really excite nor matter to typical home users. Things like
    clustering
    > interconnects or remote storage devices.
    >
    > > While the world pushes tighter integration, who does TI think is
    > > going to pile on to a proprietary way to split a system into chunks?
    >
    > As I said, maybe they have some really big ideas, they just weren't smart
    > enough to make it sound exciting on a press release. :-)
    >
    > Yousuf Khan
    >
    >

    Maybe we could one day get little modules with just the CPU and Gig-Ethernet
    port to add extra processing power.
  6. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    daytripper wrote:
    > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    > <news.tally.bbbl67@spamgourmet.com> wrote:
    >
    >
    >>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    >>
    >
    >
    > <yawn>
    >
    > While the world pushes tighter integration, who does TI think is going to pile
    > on to a proprietary way to split a system into chunks?
    >

    What can be used to take apart can also be used to put together. What
    TI has done seems like some version of I/O that Intel was pushing...only
    it's not Intel silicon, just like Infiniband isn't Intel silicon. How
    will Intel react to this one: cut loose PCI-Express?

    I've crossposted to comp.arch to see if I can't attract comments about
    how real this is and what effects it might have outside the Intel/PC
    marketplace.

    RM
  7. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    "Robert Myers" <rmyers@rustuck.com> wrote in message
    news:RIrmc.32533$_41.2657354@attbi_s02...
    > daytripper wrote:
    > > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    > > <news.tally.bbbl67@spamgourmet.com> wrote:
    > >
    > >
    >
    >>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1
    K0000532
    > >>
    > >
    > >
    > > <yawn>
    > >
    > > While the world pushes tighter integration, who does TI think is going
    to pile
    > > on to a proprietary way to split a system into chunks?
    > >
    >
    > What can be used to take apart can also be used to put together. What
    > TI has done seems like some version of I/O that Intel was pushing...only
    > it's not Intel silicon, just like Infiniband isn't Intel silicon. How
    > will Intel react to this one: cut loose PCI-Express?
    >
    > I've crossposted to comp.arch to see if I can't attract comments about
    > how real this is and what effects if might have outside the Intel/PC
    > marketplace.
    >
    > RM

    They are sending 1X PCI Express over 4 pairs of Cat6, which is better than
    the Cat5 that 1000baseT uses.
    They don't say how long the cable is. Ethernet goes 50 to 100 meters; 2-5
    meters is a lot easier.
    PCI Express is 2.5 Gb/s on the wire; GigE is 1.25 Gb/s on the wire.
    PCI Express is working on cabling extensions. Intel is big on it. Why
    would this make them upset?
    1X PCI Express is equivalent, roughly, to a 66MHz 32-bit PCI slot. Or
    maybe to a 66 by 64 due to being duplex.
    Many folks don't like to open the box to add stuff to their computer. This
    is an alternative to things like FireWire and USB2 as a way to add stuff.

    del cecchi
    >
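    A quick back-of-the-envelope check of the numbers above (a Python sketch;
    it assumes PCI Express 1.x 8b/10b line coding and ignores packet/protocol
    overhead on all the links):

        # Peak payload bandwidth per direction, before protocol overhead.
        pcie_1x   = 2.5e9 * 8/10 / 8 / 1e6   # 2.5 Gb/s raw, 8b/10b coded -> ~250 MB/s each way
        gige      = 1e9 / 8 / 1e6            # Gigabit Ethernet payload   -> 125 MB/s
        pci_66_32 = 66e6 * 4 / 1e6           # 66 MHz x 32-bit PCI -> ~264 MB/s, shared, half-duplex
        pci_66_64 = 66e6 * 8 / 1e6           # 66 MHz x 64-bit PCI -> ~528 MB/s, shared, half-duplex

        print(f"1x PCIe   ~{pcie_1x:.0f} MB/s each direction")
        print(f"GigE      ~{gige:.0f} MB/s")
        print(f"PCI 66/32 ~{pci_66_32:.0f} MB/s")
        print(f"PCI 66/64 ~{pci_66_64:.0f} MB/s")

    Which is why a 1X lane lines up roughly with a 66/32 slot, or with 66/64
    if you count both directions.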
  8. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    "Yousuf Khan" <news.20.bbbl67@spamgourmet.com> wrote in message
    news:U9kmc.415731$2oI1.408879@twister01.bloor.is.net.cable.rogers.com...
    > Judd wrote:
    > > "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote in message
    > >>
    > >
    >
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    > >
    > > Nice... too bad all of our cabling is Cat5.
    >
    > Well, I'm sure the motherboard makers will provide you with some Cat6 to
    > connect your PCI-E devices remotely with. :-)
    >

    I'm thinking office... not so much motherboard. Its application could be
    far-reaching from an office standpoint, but cabling would need to be
    upgraded infrastructure-wise.
  9. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Distributed systems, distributed redundant systems, or perhaps there
    are some really hot noisy graphics cards planned that cook in BTX
    environment and thus need to be relocated to a different... building :-)

    USB is a nice idea - but its implementation seems somewhat variable,
    with reliability issues from chipsets to firmware. HDs can vanish on you,
    scanners can stop working, printers can sometimes refuse to be seen.
    Self power seems particularly marginal with blown or pico fuse resets.

    Latency could be interesting tho - Myrinet isn't exactly cheap.

    The IT industry seems to be creating a lot of Beta vs VHS right now.
    --
    Dorothy Bradbury
    www.stores.ebay.co.uk/panaflofan for fans, books & other items
    http://homepage.ntlworld.com/dorothy.bradbury/panaflo.htm (Direct)
  10. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    "Dorothy Bradbury" <dorothy.bradbury@ntlworld.com> wrote in message
    news:mozmc.195$wA1.29@newsfe2-gui.server.ntli.net...
    > Distributed systems, distributed redundant systems, or perhaps there
    > are some really hot noisy graphics cards planned that cook in BTX
    > environment and thus need to be relocated to a different... building
    :-)
    >
    > USB is a nice idea - but it's implementation seems somewhat variable,
    > with reliability issues from chipsets to firmware. HDs can vanish on
    you,
    > scanners can stop working, printers can sometimes refuse to be seen.
    > Self power seems particularly marginal with blown or pico fuse resets.
    >
    > Latency could be interesting tho - Myrinet isn't exactly cheap.
    >
    > IT industry seems to be creating a lot of Beta v VHS right now.
    > --
    > Dorothy Bradbury
    > www.stores.ebay.co.uk/panaflofan for fans, books & other items
    > http://homepage.ntlworld.com/dorothy.bradbury/panaflo.htm (Direct)
    >
    Unless they pretty radically change (extend) the PCI Express physical
    layer, and probably some stuff about the architecture, across the room is
    about what you can hope for. And the room better not be too big.

    del cecchi

    PS Implementations are always variable, unless there is only one.
  11. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    On Thu, 6 May 2004 20:57:07 -0500, "del cecchi" <dcecchi.nojunk@att.net>
    wrote:

    >
    >"Dorothy Bradbury" <dorothy.bradbury@ntlworld.com> wrote in message
    >news:mozmc.195$wA1.29@newsfe2-gui.server.ntli.net...
    >> Distributed systems, distributed redundant systems, or perhaps there
    >> are some really hot noisy graphics cards planned that cook in BTX
    >> environment and thus need to be relocated to a different... building
    >:-)
    >>
    >> USB is a nice idea - but it's implementation seems somewhat variable,
    >> with reliability issues from chipsets to firmware. HDs can vanish on
    >you,
    >> scanners can stop working, printers can sometimes refuse to be seen.
    >> Self power seems particularly marginal with blown or pico fuse resets.
    >>
    >> Latency could be interesting tho - Myrinet isn't exactly cheap.
    >>
    >> IT industry seems to be creating a lot of Beta v VHS right now.
    >> --
    >> Dorothy Bradbury
    >> www.stores.ebay.co.uk/panaflofan for fans, books & other items
    >> http://homepage.ntlworld.com/dorothy.bradbury/panaflo.htm (Direct)
    >>
    >Unless they pretty radically change (extend) the pci-express physical
    >layer and probably some stuff about the architecture across the room is
    >about what you can hope for. And the room better not be too big.

    Well, hell, even I will give them more credit than that. There's no real need
    to change the PCI Express architecture to do what TI's (probably) doing: just
    send an n-bit wide link to a bridge device and you're good to go nuts bolting
    on devices until you've squeezed that link to the last bps.

    Physical layer changes are likely quite modest - just enough to get them a
    patent of some kind (the article did imply it was somehow proprietary). otoh,
    "proprietary" is unlikely to fly far as an io interconnect. Nobody likes
    paying tribute, and afaict there's no obvious need to stray from the
    soon-to-be-well-trod path (PCI-X Mode 2 is an utter non-starter now - Intel is
    likely going to quietly let it die without ever selling a product with it -
    sending the hordes directly to PCI Express) to build rather large systems full
    of IO devices.

    As for using TI's little scheme for desktop/HID devices instead of USB: it is
    to laugh. USB 2.0 fast mode is way overkill for HIDs as it is, it's open and
    cheap to implement, brings (modest) power to the devices (not mentioned in
    this Cat6 scheme) and from a fair size (but admittedly not huge) sample of
    diverse USB 1 & 2 devices in our labs, appears quite mature (finally, yes ;-)

    >PS Implementations are always variable, unless there is only one.

    lol

    Still, not quite as humorous as using "lower latency" in the same sentence
    with "HID devices"...

    /daytripper
  12. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Dorothy Bradbury wrote:
    > Distributed systems, distributed redundant systems, or perhaps there
    > are some really hot noisy graphics cards planned that cook in BTX
    > environment and thus need to be relocated to a different... building
    > :-)

    I can also see fairly interesting home use for PCI-E over Cat6: home theatre
    applications. Sending the Dolby/DTS sound *and* the HDTV video over the same
    wire basically. :-)

    > USB is a nice idea - but it's implementation seems somewhat variable,
    > with reliability issues from chipsets to firmware. HDs can vanish on
    > you, scanners can stop working, printers can sometimes refuse to be
    > seen. Self power seems particularly marginal with blown or pico fuse
    > resets.

    But more than good enough for an HID interface.

    > IT industry seems to be creating a lot of Beta v VHS right now.

    Intel was even bellowing about trying to combine USB and WiFi together to
    form Wireless USB which it expects will take on Bluetooth, except be faster
    and work over greater distances.

    Yousuf Khan
  13. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    "Judd" <IhateSpam@stopspam.com> wrote in message
    news:109m6p15srctua2@corp.supernews.com...
    > > Well, I'm sure the motherboard makers will provide you with some Cat6 to
    > > connect your PCI-E devices remotely with. :-)
    > >
    >
    > I'm thinking office... not so much motherboard. It's application could be
    > far reaching from an office standpoint, but cabling would need to be
    > upgraded infrastructure-wise.

    Despite the fact that it's Cat6 wire, I seriously doubt it will go the
    distances that you can typically take Ethernet out to. It's likely only
    using the cabling of Ethernet without the actual Ethernet protocol. They're
    likely going to limit the distances that the cable can travel in this
    application.

    Yousuf Khan
  14. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    In comp.sys.ibm.pc.hardware.chips Yousuf Khan <news.tally.bbbl67@spamgourmet.com> wrote:
    > Despite the fact that it's Cat6 wire, I seriously doubt it will go the
    > distances that you can typically take Ethernet out to.

    Bingo! Ethernet twisted-pair wire isn't so special; it's more
    the balanced signalling used. Signal+ paired with Signal-.

    Ethernet 100baseTX was running 100 MHz across 100m of wild
    country at a time when motherboard designers had trouble with
    running 50 MHz across 20 cm of multi-layer PCB. But mobo
    signals aren't balanced, and that gives all sorts of problems.
    Balancing the signals would double the AC pincount.

    AFAIK, no PCI variant uses balanced signalling, so it really
    won't benefit from Cat6. IIRC, there was an oddball SCSI
    that used balanced signals.

    -- Robert
  15. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Robert Myers wrote:

    > I dunno. I know even less about expansion bus protocols than I do about
    > most other things. Is there anything you can do with any available
    > out-of-the box interconnect that you can't do with lower latency using
    > PCI-Express? Limited bandwidth and distance, to be sure, but how could
    > you beat the latency?

    Most of the end-to-end latency these days comes from PCI and the
    link (SerDes + distance), so PCI-Express-only would not save much. As
    Del noted, Intel is working on switching extensions to PCI-Express,
    but the PCI-Express protocol is not really designed for that: flow
    control is very tight, as you would expect on a very short
    point-to-point connection. If PCI-Express wants to go outside the box,
    it will have to deal with some tough problems with flow control.

    Patrick
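    To put a rough number on that: with credit-based flow control, the receive
    buffering a device must advertise to keep a link streaming scales with the
    bandwidth-delay product. A small Python sketch (the propagation and
    turnaround figures below are illustrative assumptions, not spec values):

        def buffering_needed(payload_MBps, length_m, prop_ns_per_m=5.0, turnaround_ns=500.0):
            """Bytes in flight before a credit update can come back."""
            rtt_ns = 2 * length_m * prop_ns_per_m + turnaround_ns
            return payload_MBps * 1e6 * rtt_ns * 1e-9

        for lanes in (1, 16):
            for metres in (0.2, 5, 100):   # board trace, desk-side cable, Ethernet-style run
                need = buffering_needed(250 * lanes, metres)
                print(f"x{lanes:<2} link, {metres:>5} m: ~{need:,.0f} bytes of receive buffering")

    Buffers sized for a few tens of centimetres of board trace simply stall once
    the round trip grows, which is roughly the problem Patrick is pointing at.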
  16. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    > AFAIK, no PCI variant uses balanced signalling, so it really
    > won't benefit from Cat6. IIRC, there was an oddball SCSI
    > that used balanced signals.

    HVD did I think, which could run very long distances.

    Characteristics of Cat5/6 cable soon change if you abuse it,
    so distance aside I don't think it's for typical office environments.
    --
    Dorothy Bradbury
  17. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel (More info?)

    <127.0.0.1@127.0.0.1> wrote in message news:2fupk1F2fhrkU1@uni-berlin.de...
    > X-No-Archive: yes
    >
    >
    > "Yousuf Khan" <news.20.bbbl67@spamgourmet.com> wrote in message
    > news:jjkmc.415829$2oI1.158440@twister01.bloor.is.net.cable.rogers.com...
    > > daytripper wrote:
    > > > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
    > > >
    > > >>
    > >
    >
    http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
    > > >
    > > > <yawn>
    > > >
    > > > I love the bit about remoting HID devices.
    > > > Yeah, there's a high-throughput market to exploit...
    > >
    > > Maybe it's for the really, really, really fast typers? :-)
    > >
    > > I guess in their haste to get a press release out they forgot that this
    > sort
    > > of job is already done by USB?
    > >
    > > I'm sure they have much more important ideas in mind behind it, but none
    > of
    > > which really excite nor matter to typical home users. Things like
    > clustering
    > > interconnects or remote storage devices.
    > >
    > > > While the world pushes tighter integration, who does TI think is
    > > > going to pile on to a proprietary way to split a system into chunks?
    > >
    > > As I said, maybe they have some really big ideas, they just weren't
    smart
    > > enough to make it sound exciting on a press release. :-)
    > >
    > > Yousuf Khan
    > >
    > >
    >
    > Maybe we could one day get little modules with just the CPU and
    Gig-Ethernet
    > port to Add extra processing power.
    >
    >

    Maybe something like this:
    http://www.adlogic-pc104.com/products/cpu/pc104/datasheets/msm855.pdf
  18. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    On Fri, 07 May 2004 13:29:51 -0400, Patrick Geoffray <patrick@myri.com> wrote:

    >Robert Myers wrote:
    >
    >> I dunno. I know even less about expansion bus protocols than I do about
    >> most other things. Is there anything you can do with any available
    >> out-of-the box interconnect that you can't do with lower latency using
    >> PCI-Express? Limited bandwidth and distance, to be sure, but how could
    >> you beat the latency?
    >
    >Most of the end-to-end latency these days comes from the PCI and the
    >link (SerDes + distance). So PCI-Express-only would not save much. As
    >Del noted, Intel is working on a switching extensions to PCI-Express,
    >but the PCI-Express protocol is not really designed for that: flow
    >control is very tight, as you would expect on a very short
    >point-to-point connection. If PCI-Express wants to go outside the box,
    >it will have to deal with some tough problems with flow control.

    Respectfully, I disagree with that last sentence. Unless the mission is
    redefined, PCI Express can certainly go outside the crate.

    /daytripper (the question remains why one would do that...)
  19. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >
    > IB and PCI-Express should be pretty comparable. (PCI express isn't out of
    > the box yet)
    > Ethernet with RDMA and hardware offload is in the same ballpark.
    > Rapid I/O, Fibre Channel, are contenders depending on task.
    >
    > Is latency a big deal writing to a disk or graphics card?
    >

    It can easily be for a graphics card.

    --
    Sander

    +++ Out of cheese error +++
  20. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Patrick Geoffray <patrick@myri.com> wrote:
    +---------------
    | Most of the end-to-end latency these days comes from the PCI and the
    | link (SerDes + distance). So PCI-Express-only would not save much. As
    | Del noted, Intel is working on a switching extensions to PCI-Express,
    | but the PCI-Express protocol is not really designed for that: flow
    | control is very tight, as you would expect on a very short
    | point-to-point connection. If PCI-Express wants to go outside the box,
    | it will have to deal with some tough problems with flow control.
    +---------------

    Indeed. Quite a bit of the difference between "GSN" (a.k.a. the ANSI
    HIPPI-6400 standard) and the SGI "XIO" (switched-fabric I/O to multiple
    PCI busses) it was based on was the need to increase the low-level
    retransmission buffers and sequence space to allow a potential 1km
    distance at full bandwidth[1], compared to the ~10m permitted by XIO.
    This added considerably to the die area of the PHY/PMD part.

    Also note that at "only" 10m range, XIO devices *already* needed
    rather large retransmission buffers and sequence space...


    -Rob

    [1] Though note that un-repeatered GSN can still only go 30m in copper.
    This is for electrical reasons, not flow-control window size.

    -----
    Rob Warnock <rpw3@rpw3.org>
    627 26th Avenue <URL:http://rpw3.org/>
    San Mateo, CA 94403 (650)572-2607
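    For scale, the figures Rob quotes map onto the data in flight on the link
    (Python sketch; the ~5 ns/m copper propagation number is an assumption, and
    only cable delay is counted -- real windows also cover protocol turnaround,
    which is part of why even 10m links already need sizeable buffers):

        rate_Bps = 6400e6 / 8                # HIPPI-6400: 6400 Mbit/s, ~800 MB/s
        for metres in (10, 1000):            # XIO-style reach vs GSN's 1 km target
            rtt_s = 2 * metres * 5e-9        # round-trip cable delay at ~5 ns/m
            print(f"{metres:>5} m: ~{rate_Bps * rtt_s:,.0f} bytes in flight at full bandwidth")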
  21. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:

    > In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >>
    >> IB and PCI-Express should be pretty comparable. (PCI express isn't out of
    >> the box yet)
    >> Ethernet with RDMA and hardware offload is in the same ballpark.
    >> Rapid I/O, Fibre Channel, are contenders depending on task.
    >>
    >> Is latency a big deal writing to a disk or graphics card?
    >>
    >
    > It can easily be for a graphics card.

    Why? Aren't they write-only devices? Surely any latency
    limitation is the 10ms or so of human perception, and anything in the
    circuitry is neither here nor there at that scale.

    --
    Andrew
  22. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Andrew Reilly wrote:
    > On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    >
    >
    >>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >>
    >>>IB and PCI-Express should be pretty comparable. (PCI express isn't out of
    >>>the box yet)
    >>>Ethernet with RDMA and hardware offload is in the same ballpark.
    >>>Rapid I/O, Fibre Channel, are contenders depending on task.
    >>>
    >>>Is latency a big deal writing to a disk or graphics card?
    >>>
    >>
    >>It can easily be for a graphics card.
    >
    >
    > Why? Aren't they write-only devices? Surely any latency

    Off the top of my head, at least two requirements exist, namely
    screenshots and flyback synchronisation...

    Cheers,
    Rupert
  23. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:

    > Andrew Reilly wrote:
    >> On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    >>
    >>
    >>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >>>>Is latency a big deal writing to a disk or graphics card?
    >>>>
    >>>
    >>>It can easily be for a graphics card.
    >>
    >>
    >> Why? Aren't they write-only devices? Surely any latency
    >
    > Off the top of my head, at least two requirements exist, namely
    > screenshots and flyback sychronisation...

    Both of which appear, on the surface, to be frame-rate type events: i.e.,
    in the ballpark of the 10ms event time that I mentioned in the part that
    you snipped. Not a latency issue on the order of memory access or
    processor cycle times...

    [Don't graphics cards generate interrupts for flyback synchronization?]

    --
    Andrew
  24. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    Andrew Reilly wrote:
    > On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
    >
    >
    >>Andrew Reilly wrote:
    >>
    >>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    >>>
    >>>
    >>>
    >>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >>>>
    >>>>>Is latency a big deal writing to a disk or graphics card?
    >>>>>
    >>>>
    >>>>It can easily be for a graphics card.
    >>>
    >>>
    >>>Why? Aren't they write-only devices? Surely any latency
    >>
    >>Off the top of my head, at least two requirements exist, namely
    >>screenshots and flyback sychronisation...
    >
    >
    > Both of which appear, on the surface, to be frame-rate type events: i.e.,
    > in the ballpark of the 10ms event time that I mentioned in the part that
    > you snipped. Not a latency issue on the order of memory access or

    Hmmm, how about querying the state of an OpenGL rendering pipeline
    that happens to be sitting on the graphics card ? I don't think that
    it's ever been true to say GFX cards are write only, and I'm not sure
    I'd ever want that. :)

    Cheers,
    Rupert
  25. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <1084204646.576285@teapot.planet.gong>, roo@try-
    removing-this.darkboong.demon.co.uk says...
    > Andrew Reilly wrote:
    > > On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
    > >
    > >
    > >>Andrew Reilly wrote:
    > >>
    > >>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    > >>>
    > >>>
    > >>>
    > >>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    > >>>>
    > >>>>>Is latency a big deal writing to a disk or graphics card?
    > >>>>>
    > >>>>
    > >>>>It can easily be for a graphics card.
    > >>>
    > >>>
    > >>>Why? Aren't they write-only devices? Surely any latency
    > >>
    > >>Off the top of my head, at least two requirements exist, namely
    > >>screenshots and flyback sychronisation...
    > >
    > >
    > > Both of which appear, on the surface, to be frame-rate type events: i.e.,
    > > in the ballpark of the 10ms event time that I mentioned in the part that
    > > you snipped. Not a latency issue on the order of memory access or
    >
    > Hmmm, how about querying the state of an OpenGL rendering pipeline
    > that happens to be sitting on the graphics card ? I don't think that
    > it's ever been true to say GFX cards are write only, and I'm not sure
    > I'd ever want that. :)

    Why wouldn't things be rendered in memory and then DMA'd to the
    graphics card? Why would the processor *ever* care what's been
    sent to the graphics subsystem? I'm from (close enough to)
    Missouri, and you're going to have to show us, Rupert.

    --
    Keith
  26. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:

    > Why wouldn't things be rendered in memory and then DMA'd to the
    > graphics card?

    Because then the rendering process would be eating system memory
    bandwidth.

    > Why would the processor *ever* care what's been
    > sent to the graphics subsystem?

    Because it may have to make decisions based upon that information. I
    don't know enough about modern graphics hardware to know if it actually does
    this, but it has been at times logical to use the graphics hardware to help
    you make decisions about other issues. For example, a game may want to
    display some information about an object if and only if that object is
    visible to you. That may be the graphics card's decision, since it has to
    decide that anyway.

    DS
  27. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:
    > Rupert Pigott wrote:
    >
    > > Hmmm, how about querying the state of an OpenGL rendering pipeline
    > > that happens to be sitting on the graphics card ? I don't think
    > > that it's ever been true to say GFX cards are write only, and I'm
    > > not sure I'd ever want that. :)
    >
    > Why wouldn't things be rendered in memory and then DMA'd to the
    > graphics card? Why would the processor *ever* care what's been sent
    > to the graphics subsystem?

    Why would you have your main processor(s) render a scene when you have a
    dedicated graphics processor to do it? In the case of the graphics cores
    I've used, reading from the framebuffer is needed to i) make sure the
    FIFO has enough spaces for the register writes you're about to issue,
    and ii) to synchronize the graphics core and host's usage of video
    memory (e.g. so you can reuse a memory buffer once the graphics
    operation that was using it has completed). These wouldn't be too
    difficult to overcome if interconnect latency becomes a problem, but as
    graphics cards become increasingly flexible there's more useful
    information they can provide to the host. Hardware accelerated collision
    detection for example.

    --
    Wishing you good fortune,
    --Robin Kay-- (komadori)
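    A minimal sketch of the poll-then-post pattern Robin describes (Python; the
    register names and the FIFO model are hypothetical stand-ins, not any real
    driver API). The point is that the polling read is a full round trip over
    the interconnect, which is where added link latency would hurt:

        import random

        def read_fifo_free_slots():
            # Stand-in for reading a hypothetical "free FIFO entries" register;
            # each call would cost one read round trip across the bus/link.
            return random.randint(0, 16)

        def post_gpu_commands(commands):
            # Stall until the command FIFO has room, then issue the writes.
            while read_fifo_free_slots() < len(commands):
                pass                         # every retry pays another round trip
            for reg, value in commands:
                pass                         # register writes are posted (fire and forget)

        post_gpu_commands([(0x2200, 1), (0x2204, 0xFF)])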
  28. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
    > Why would you have your main processor(s) render a scene
    > when you have a dedicated graphics processor to do it?

    I think you're talking 3-D while Keith is talking 2-D.

    In 3-D there's simply too much drudge work (shading,
    perspective) and not enough interaction back to the control
    program to need or want the CPU. 2-D is much simpler and often
    requires considerable CPU interactivity (CAD) with the display.

    -- Robert
  29. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:
    > In article <1084204646.576285@teapot.planet.gong>, roo@try-
    > removing-this.darkboong.demon.co.uk says...
    >
    >>Andrew Reilly wrote:
    >>
    >>>On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
    >>>
    >>>
    >>>
    >>>>Andrew Reilly wrote:
    >>>>
    >>>>
    >>>>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    >>>>>
    >>>>>
    >>>>>
    >>>>>
    >>>>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    >>>>>>
    >>>>>>
    >>>>>>>Is latency a big deal writing to a disk or graphics card?
    >>>>>>>
    >>>>>>
    >>>>>>It can easily be for a graphics card.
    >>>>>
    >>>>>
    >>>>>Why? Aren't they write-only devices? Surely any latency
    >>>>
    >>>>Off the top of my head, at least two requirements exist, namely
    >>>>screenshots and flyback sychronisation...
    >>>
    >>>
    >>>Both of which appear, on the surface, to be frame-rate type events: i.e.,
    >>>in the ballpark of the 10ms event time that I mentioned in the part that
    >>>you snipped. Not a latency issue on the order of memory access or
    >>
    >>Hmmm, how about querying the state of an OpenGL rendering pipeline
    >>that happens to be sitting on the graphics card ? I don't think that
    >>it's ever been true to say GFX cards are write only, and I'm not sure
    >>I'd ever want that. :)
    >
    >
    > Why wouldn't things be rendered in memory and then DMA'd to the
    > graphics card? Why would the processor *ever* care what's been
    > sent to the graphics subsystem? I'm from (close enough to)
    > Missouri, and you're going to have to show us, Rupert.

    Try starting here :

    http://www.opengl.org

    Take a look at the spec. There are numerous papers on OpenGL
    acceleration hardware too. FWIW I have been quite impressed by
    the OpenGL spec, seems to give a lot of freedom to both the
    application and the hardware.

    For a more generic non-OpenGL biased look at 3D hardware you
    might want to check out the following :

    "Computer Graphics Principles and Practice",
    2nd Edition by Foley/van Dam/Feiner/Hughes,
    published by Addison Wesley.

    Specifically chapter 18 "Advanced Raster Graphics Architecture"
    for a discussion on various (rather nifty) 3D graphics hardware
    and chapter 16 "Illumination and Shading" for a heavy hint as to
    why it's necessary.

    I can also recommend Jim Blinn's articles in IEEE CG&A, last
    time I read them was 1995. The articles I read by Blinn were
    focussed on software rendering using approximations that were
    "good enough" but still allowed him to get his rendering done
    before hell froze over. IIRC Blinn had access to machinery that
    would *still* eat a modern PC for breakfast in terms of FP and
    memory throughput in those apps.

    However I think times are a-changing. Life might well get a lot
    more interesting when CPU designers start looking for new things
    to do because they can't get any decent speed ups on single
    thread execution speed. ;)

    Cheers,
    Rupert
  30. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <tf4oc.7727$ZX2.6238@newssvr24.news.prodigy.com>,
    redelm@ev1.net.invalid says...
    > In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
    > > Why would you have your main processor(s) render a scene
    > > when you have a dedicated graphics processor to do it?
    >
    > I think you're talking 3-D while Keith is talking 2-D.

    Nope. 3-D is no different. AGP wuz supposed to make the
    graphics channel two-way so the graphics card could access main
    memory. Do you know anyone that actually does this? Please!
    With 32MB (or 128MB) on the graphics card, who cares?
    >
    > In 3-D there's simply too much drudge work (shading,
    > perspective) and not enough interaction back to the control
    > program to need or want the CPU. 2-D is much simpler and often
    > requires considerable CPU interactivity (CAD) with the display.

    Sure, so why does the 3-D card want to go back to main memory,
    again? The graphics pipe is amazingly one-directional. ...and
    thus not sensitive to latency, any more than in human terms.

    --
    Keith
  31. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <c7pron$bvf$1@nntp.webmaster.com>,
    davids@webmaster.com says...
    > KR Williams wrote:
    >
    > > Why wouldn't things be rendered in memory and then DMA'd to the
    > > graphics card?
    >
    > Because then the rendering process would be eating system memory
    > bandwidth.

    Nope. You're thinking of AGP (no one uses it, or ever has).
    >
    > > Why would the processor *ever* care what's been
    > > sent to the graphics subsystem?
    >
    > Because it may have to make decisions based upon that information. I
    > don't know enough about modern graphics hardware to know if it actually does
    > this, but it has been at times logical to use the graphics hardware to help
    > you make decisions about other issues. For example, a game may want to
    > display some information about an object if and only if that object is
    > visible to you. That may be the graphics card's decision, since it has to
    > decide that anyway.

    Nope. Any feedback is certainly within human response. Low
    latency (in CPU terms) is irrelevant to graphics subsystems.

    --
    Keith
  32. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <1084278577.759843@teapot.planet.gong>, roo@try-
    removing-this.darkboong.demon.co.uk says...
    > KR Williams wrote:
    > > In article <1084204646.576285@teapot.planet.gong>, roo@try-
    > > removing-this.darkboong.demon.co.uk says...
    > >
    > >>Andrew Reilly wrote:
    > >>
    > >>>On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
    > >>>
    > >>>
    > >>>
    > >>>>Andrew Reilly wrote:
    > >>>>
    > >>>>
    > >>>>>On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
    > >>>>>
    > >>>>>
    > >>>>>
    > >>>>>
    > >>>>>>In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
    > >>>>>>
    > >>>>>>
    > >>>>>>>Is latency a big deal writing to a disk or graphics card?
    > >>>>>>>
    > >>>>>>
    > >>>>>>It can easily be for a graphics card.
    > >>>>>
    > >>>>>
    > >>>>>Why? Aren't they write-only devices? Surely any latency
    > >>>>
    > >>>>Off the top of my head, at least two requirements exist, namely
    > >>>>screenshots and flyback sychronisation...
    > >>>
    > >>>
    > >>>Both of which appear, on the surface, to be frame-rate type events: i.e.,
    > >>>in the ballpark of the 10ms event time that I mentioned in the part that
    > >>>you snipped. Not a latency issue on the order of memory access or
    > >>
    > >>Hmmm, how about querying the state of an OpenGL rendering pipeline
    > >>that happens to be sitting on the graphics card ? I don't think that
    > >>it's ever been true to say GFX cards are write only, and I'm not sure
    > >>I'd ever want that. :)
    > >
    > >
    > > Why wouldn't things be rendered in memory and then DMA'd to the
    > > graphics card? Why would the processor *ever* care what's been
    > > sent to the graphics subsystem? I'm from (close enough to)
    > > Missouri, and you're going to have to show us, Rupert.
    >
    > Try starting here :
    >
    > http://www.opengl.org
    >
    > Take a look at the spec. There are numerous papers on OpenGL
    > acceleration hardware too. FWIW I have been quite impressed by
    > the OpenGL spec, seems to give a lot of freedom to both the
    > application and the hardware.
    >
    > For a more generic non-OpenGL biased look at 3D hardware you
    > might want to check out the following :
    >
    > "Computer Graphics Principles and Practice",
    > 2nd Edition by Foley/van Dam/Feiner/Hughes,
    > published by Addison Wesley.
    >
    > Specifically chapter 18 "Advanced Raster Graphics Architecture"
    > for a discussion on various (rather nifty) 3D graphics hardware
    > and chapter 16 "Illumination and Shading" for a heavy hint as to
    > why it's necessary.

    Why don't you tell us why it's necessary, rather than spewing
    some irrelevant web sites. The fact is that the graphics channel
    is amazingly unidirectional. The processor sends the commands to
    the graphics card and it does its thing in its own memory. AGP
    was a wunnerful idea, ten years before it was available.
    >
    > I can also recommend Jim Blinn's articles in IEEE CG&A, last
    > time I read them was 1995. The articles I read by Blinn were
    > focussed on software rendering using approximations that were
    > "good enough" but still allowed him to get his rendering done
    > before hell froze over. IIRC Blinn had access to machinary that
    > would *still* eat a modern PC for breakfast in terms of FP and
    > memory throughput in those apps.

    Oh, my. 1995, there's a recent article. I don't remember. Did
    graphics cards have 128MB of their own then? Did systems even
    have 128MB? Come on! Get with the times.

    > However I think times are a-changing. Life might well get a lot
    > more interesting when CPU designers start looking for new things
    > to do because they can't get any decent speed ups on single
    > thread execution speed. ;)

    "They" are. ;-) Though you're still wrong about the graphics
    pipe. It really isn't latency sensitive, any more than humans
    are.

    --
    Keith
  33. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    "KR Williams" <krw@att.biz> wrote in message
    news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...

    > In article <c7pron$bvf$1@nntp.webmaster.com>,
    > davids@webmaster.com says...

    >> KR Williams wrote:
    >>
    >> > Why wouldn't things be rendered in memory and then DMA'd to the
    >> > graphics card?
    >>
    >> Because then the rendering process would be eating system memory
    >> bandwidth.

    > Nope. YOu're thinking so AGP (no one uses it, or ever has).

    Okay, then you tell me why things aren't rendered in memory and then
    DMA'd to the graphics card.

    DS
  34. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
    > Nope. 3-D is no different. AGP wuz supposed to make the
    > graphics channel two-way so the graphics card could access main
    > memory. DO you know anyone that actually does this? PLease!
    > With 32MB (or 128MB) on the graphics card, who cares?

    I'm not at all sure what point you're trying to make here.
    Forgive me if I flounder around a bit. The graphics card
    _does_ access main memory. AFAIK, for both 2D & 3D after
    rendering in system RAM the CPU programs the GPU to do BM
    DMA to load the framebuffer vram.

    No-one in their right mind tries to get the CPU to read
    the framebuffer. It is dead slow because vram is very busy
    being read to satisfy the refresh rate. It is hard enough for
    the GPU to access synchronously, and this is what the multiple
    planes and the MBs of vram are used for.

    > Sure, so why does the 3-D card want to go back to main memory,
    > again? The graphics pipe is amazingly one-directional. ...and
    > thus not sensitive to latency, any more than in human terms.

    My understanding is that in 3-D the advanced functions in
    the GPU (perspective & shading) can handle quite a number of
    intermediate frames before requiring a reload from system ram.
    But it does require a reload. How's the graphics card gonna
    know what's behind Door Number Three?

    -- Robert

    >
  35. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    "Robert Redelmeier" <redelm@ev1.net.invalid> wrote in message
    news:ihLoc.26799$Rm2.21523@newssvr22.news.prodigy.com...

    > In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:

    >> Nope. 3-D is no different. AGP wuz supposed to make the
    >> graphics channel two-way so the graphics card could access main
    >> memory. DO you know anyone that actually does this? PLease!
    >> With 32MB (or 128MB) on the graphics card, who cares?

    > I'm not at all sure what point you're trying to make here.
    > Forgive me if I flounder around a bit. The graphics card
    > _does_ access main memory. AFAIK, for both 2D & 3D after
    > rendering in system RAM the CPU programs the GPU to do BM
    > DMA to load the framebuffer vram.

    Most current graphics cards render in RAM on the graphics card.
    Therefore there is no need to DMA the data into the framebuffer; it's as
    simple as changing a pointer for where the framebuffer is located in the
    graphics card's RAM. This is true for all but the very cheapest graphics
    systems today.

    > No-one in their right mind tries to get the CPU to read
    > the framebuffer. It is dead slow because vram is very busy
    > being read to satisfy the refresh rate. It is hard enough for
    > the GPU to access synchonously and this is what the multiple
    > planes and the MBs of vram are used for.

    Right. Typically the CPU doesn't read the texture memory either, and the
    textures only cross the system memory or AGP bus once, to get loaded into
    the graphics card's RAM. From there they are applied and rendered wholly on
    the graphics card's internal bus.

    >> Sure, so why does the 3-D card want to go back to main memory,
    >> again? The graphics pipe is amazingly one-directional. ...and
    >> thus not sensitive to latency, any more than in human terms.

    > My understanding is that in 3-D the advanced functions in
    > the GPU (perspective & shading) can handle quite a number of
    > intermediate frames before requiring a reload from system ram.
    > But it does require a reload. How's the graphics card gonna
    > know what's behind Door Number Three?

    That I don't know the answer to. Can the graphics card say, "this item
    is visible, I need more details about it"? I don't think so. I think the
    decision of what might be visible is made by the main processor and it must
    tell the graphics card about every object or that object will not be
    rendered.

    DS
  36. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:

    [SNIP]

    > WHy don't you tell us why it's necessary, rather than spewing
    > some irrelevant web sites. THe fact is that the graphics channel

    OpenGL.org is hardly irrelevant with respect to 3D apps and
    hardware. :/

    > is amazingly unidirectional. THe processor sends the commands to
    > the graphics card and it does it's thing in its own memory. AGP

    No, the fact is: it isn't. I've given you some broad reasons
    and I've given you some hints on where to start finding some
    specifics.

    [SNIP]

    > "They" are. ;-) Though you're still wrong about the graphics
    > pipe. It really isn't latency sensitive, any more than humans
    > are.

    As long as you consider sites like opengl.org to be irrelevant
    you will continue to think that way regardless of what the
    reality is.

    Cheers,
    Rupert
  37. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <c807u2$5ri$1@nntp.webmaster.com>,
    davids@webmaster.com says...
    >
    > "Robert Redelmeier" <redelm@ev1.net.invalid> wrote in message
    > news:ihLoc.26799$Rm2.21523@newssvr22.news.prodigy.com...
    >
    > > In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
    >
    > >> Nope. 3-D is no different. AGP wuz supposed to make the
    > >> graphics channel two-way so the graphics card could access main
    > >> memory. DO you know anyone that actually does this? PLease!
    > >> With 32MB (or 128MB) on the graphics card, who cares?
    >
    > > I'm not at all sure what point you're trying to make here.
    > > Forgive me if I flounder around a bit. The graphics card
    > > _does_ access main memory. AFAIK, for both 2D & 3D after
    > > rendering in system RAM the CPU programs the GPU to do BM
    > > DMA to load the framebuffer vram.
    >
    > Most current graphics cards render in ram on the graphics card.
    > Therefore there is no need to DMA the data into the framebuffer, it's as
    > simple as changing a pointer for where the framebuffer is located in the
    > graphics card's RAM. This is true for all but the very cheapest graphics
    > systems today.

    Exactly. AGP was an idea that was obsolete by the time it was
    implemented. Memory is *cheap*.

    --
    Keith
  38. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <c7veef$n51$1@nntp.webmaster.com>,
    davids@webmaster.com says...
    >
    > "KR Williams" <krw@att.biz> wrote in message
    > news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...
    >
    > > In article <c7pron$bvf$1@nntp.webmaster.com>,
    > > davids@webmaster.com says...
    >
    > >> KR Williams wrote:
    > >>
    > >> > Why wouldn't things be rendered in memory and then DMA'd to the
    > >> > graphics card?
    > >>
    > >> Because then the rendering process would be eating system memory
    > >> bandwidth.
    >
    > > Nope. YOu're thinking so AGP (no one uses it, or ever has).
    >
    > Okay, then you tell me why things aren't rendered in memory and then
    > DMA'd to the graphics card.

    Are you slow? They're "rendered" IN THE GRAPHICS CARD'S MEMORY.
    Sheesh!

    --
    Keith
  39. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
    > Exactly. AGP was an idea that was obsolete by the time it was
    > implemented. Memory is *cheap*.

    OK, so stick the graphics card on PCI and free up that AGP
    for a gigabit adapter. They normally saturate PCI around
    35 MByte/s. Limited burst length prevents achieving the
    theoretical PCI 33/32 throughput of 133 MB/s. Gigabit needs
    125 MB/s each way.

    -- Robert

    >
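    The gap between the 133 MB/s peak and the ~35 MB/s observed is mostly the
    burst-length point Robert makes: every PCI transaction pays arbitration,
    address and turnaround cycles. A rough Python sketch (the overhead cycle
    count is an illustrative guess, not a measured figure):

        clock_hz, bytes_per_data_cycle, overhead_cycles = 33e6, 4, 8
        for burst_words in (4, 16, 64):
            eff = clock_hz * bytes_per_data_cycle * burst_words / (burst_words + overhead_cycles)
            print(f"{burst_words:>3}-word bursts: ~{eff / 1e6:.0f} MB/s effective")

    Short bursts plus per-transaction overhead land you in the few-tens-of-MB/s
    range, well short of both the 133 MB/s peak and gigabit's 125 MB/s each way.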
  40. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <vgWoc.8474$4V7.5557@newssvr24.news.prodigy.com>,
    redelm@ev1.net.invalid says...
    > In comp.sys.ibm.pc.hardware.chips KR Williams <krw@att.biz> wrote:
    > > Exactly. AGP was an idea that was obsolete by the time it was
    > > implemented. Memory is *cheap*.
    >
    > OK, so stick the graphics card on PCI and free up that AGP
    > for a gigabit adapter. They normally saturate PCI around
    > 35 MByte/s. Limited burst length prevents achieving the
    > theoretical PCI 33/32 throughput of 133 MB/s. Gigabit needs
    > 125 MB/s each way.

    Two reasons. Marketing: AGP is a tick-box for graphics. PCI is
    anti-tick-box.

    Why even bother? Put the GBE on the HT link (other side of the
    bridge)! PCI is just sooo, 90s! ;-)


    --
    Keith
  41. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    "KR Williams" <krw@att.biz> wrote in message
    news:MPG.1b0df998e8b661c7989886@news1.news.adelphia.net...
    > In article <c7veef$n51$1@nntp.webmaster.com>,
    > davids@webmaster.com says...
    >>
    >> "KR Williams" <krw@att.biz> wrote in message
    >> news:MPG.1b0cafe0e25be65798987e@news1.news.adelphia.net...
    >>
    >> > In article <c7pron$bvf$1@nntp.webmaster.com>,
    >> > davids@webmaster.com says...
    >>
    >> >> KR Williams wrote:
    >> >>
    >> >> > Why wouldn't things be rendered in memory and then DMA'd to the
    >> >> > graphics card?
    >> >>
    >> >> Because then the rendering process would be eating system memory
    >> >> bandwidth.
    >>
    >> > Nope. YOu're thinking so AGP (no one uses it, or ever has).
    >>
    >> Okay, then you tell me why things aren't rendered in memory and then
    >> DMA'd to the graphics card.
    >
    > Are you slow? They're "rendered" IN THE GRAPHICS CARD'S MEMORY.
    > Sheesh!

    Yes, but *WHY*? Do you have a reading comprehension problem?

    Let's start over. I was answering the question "Why wouldn't things be
    rendered in memory and then DMA'd to the graphics card?". My answer was
    "Because then the rendering process would be eating system mrmory
    bandwidth". You said "Nope. You're thinking of AGP." So I said, "Okay, then
    you tell me why things aren't rendered in memory and then DMA'd to the
    graphics card".

    So, if the answer "because then the rendering process would be eating
    system memory bandwidth" is wrong, then please tell me *WHY* things are
    rendered in the graphics card's memory. Why even have memory on the graphics
    card at all?

    Could it be because then the rendering process would be eating system
    memory bandwidth? Just like I've been saying all along?!

    DS
  42. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    On Thu, 13 May 2004 02:17:12 -0700, "David Schwartz"
    <davids@webmaster.com> wrote:

    >> Nope. YOu're thinking so AGP (no one uses it, or ever has).
    >
    > Okay, then you tell me why things aren't rendered in memory and then
    >DMA'd to the graphics card.

    Erm, I'm no expert on graphics cards... seeing that I have no need for
    the latest & greatest. But reading the usual webzines/sites on new
    stuff generally gives me the idea that the processor nowadays handles
    setting up each scene as objects in a 3D space and then shoots these
    to the GPU. The GPU then figures out how to put textures and other
    effects on the objects and renders the scene in a local buffer. Then it
    displays it.

    Used to be the CPU had to do a lot of this stuff, but then along came
    3D GPUs, which started with basic stuff, then went on to do
    Transform & Lighting effects, then pixel shading and such (the latest in
    thing seems to be Pixel Shader 3.0).

    Which I think makes much more sense than rendering the whole scene on
    the CPU, then storing it in main memory before shooting a chunk of
    some 24Mbits of data per frame, for some erm 720Mbps across the
    AGP/PCI bus to maintain a half-decent 30FPS at 1024x768x32? Or doesn't
    it?

    Of course, being the village idiot in CSIPHC, I could be talking about
    the wrong stuff in the wrong places altogether :PpPpPp

    --
    L.Angel: I'm looking for web design work.
    If you need basic to med complexity webpages at affordable rates, email me :)
    Standard HTML, SHTML, MySQL + PHP or ASP, Javascript.
    If you really want, FrontPage & DreamWeaver too.
    But keep in mind you pay extra bandwidth for their bloated code
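    The arithmetic in that last paragraph checks out (quick Python sketch,
    using binary megabits as the post does):

        bits_per_frame = 1024 * 768 * 32
        print(bits_per_frame / 2**20)             # ~24 Mbit per frame at 32 bpp
        print(bits_per_frame * 30 / 2**20)        # ~720 Mbit/s to push 30 frames/s
        print(bits_per_frame * 30 / 8 / 2**20)    # ~90 MB/s of bus bandwidth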
  43. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:

    > Instead of telling people how smart you are, why don't you tell
    > me what, in the graphics pipe, needs low-latency to the
    > processor. Or you could just say, "I'm right you're wrong, go
    > look for the needle in the hay-stack". Oh, you did.

    I gave you some examples, you ignored them. I gave you some
    references to look at, you ignored them. I don't really see
    the point of writing a 2000 word essay on OpenGL hardware,
    API and a specific algorithm when you're saying that OpenGL
    is irrelevant.

    You can lead a horse to water, but you can't make it drink.

    *shrug*
  44. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <1084754163.22749@teapot.planet.gong>, roo@try-
    removing-this.darkboong.demon.co.uk says...
    > KR Williams wrote:
    >
    > > Instead of telling people how smart you are, why don't you tell
    > > me what, in the graphics pipe, needs low-latency to the
    > > processor. Or you could just say, "I'm right you're wrong, go
    > > look for the needle in the hay-stack". Oh, you did.
    >
    > I gave you some examples, you ignored them. I gave you some
    > references to look at, you ignored them. I don't really see
    > the point of writing a 2000 word essay on OpenGL hardware,
    > API and a specific algorithm when you're saying that OpenGL
    > is irrelevant.

    No, you didn't. You keep referring to the APIs, yet don't point
    to anything specific. You don't teach anything with respect to
    how these things affect performance. You can be as smug as you
    wish, but...
    >
    > You can lead a horse to water, but you can't make it drink.

    ....you lie, Rupert.

    > *shrug*

    Indeed.

    --
    Keith
  45. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:
    > In article <1084558988.486952@teapot.planet.gong>, roo@try-
    > removing-this.darkboong.demon.co.uk says...

    [SNIP]

    >>I don't really get why you're grinding an axe against AGP to be
    >>honest, it's just a faster and fatter pipe than stock PCI. No
    >>big deal, and it does appear to make a difference, ask folks
    >>who have used identical spec cards in PCI and AGP flavours.
    >
    >
    > Oh, my! I've gone and insulted Rupert's sensibilities again.
    >
    > Your logic is impeccable. AGP is faster, and wider(?) than PCI,
    > so it's god's (or Intel, same thing I guess) gift to humanity.

    Not really. For me upping the framerate by ~20% made the difference
    between a game being playable and it being unplayable. Not a big
    deal in the world of rocket science, but that kind of thing matters
    to a lot of folks who play games.

    > Good grief, you compare a stripped point-to-point connection (PCI
    > cut to the bone, actually) to a cheap PCI 32/33 *BUS*
    > implementation and then proclaim how wonderful it is. Sure AGP
    > is faster than the cheapest PCI implementation. Was that your
    > whole point?

    In that case, yes. Where were the alternatives to AGP that would
    have provided the extra bandwidth, yet kept the characteristics
    required to maintain backward compatibility AND do all that at
    a minimal price point for both the vendor and customer? I didn't
    see PCI Express or PCI-X leaping into the chipsets at the time.

    As unclever or ugly as AGP may be, it has been an effective and
    inexpensive solution for its vendors and customers.

    Cheers,
    Rupert
  46. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <1084755234.917279@teapot.planet.gong>, roo@try-
    removing-this.darkboong.demon.co.uk says...
    > KR Williams wrote:
    > > In article <1084558988.486952@teapot.planet.gong>, roo@try-
    > > removing-this.darkboong.demon.co.uk says...
    >
    > [SNIP]
    >
    > >>I don't really get why you're grinding an axe against AGP to be
    > >>honest, it's just a faster and fatter pipe than stock PCI. No
    > >>big deal, and it does appear to make a difference, ask folks
    > >>who have used identical spec cards in PCI and AGP flavours.
    > >
    > >
    > > Oh, my! I've gone and insulted Rupert's sensibilities again.
    > >
    > > Your logic is impeccable. AGP is faster, and wider(?) than PCI,
    > > so it's god's (or Intel, same thing I guess) gift to humanity.
    >
    > Not really. For me upping the framerate by ~20% made the difference
    > between a game being playable and it being unplayable. Not a big
    > deal in the world of rocket science, but that kind of thing matters
    > to a lot of folks who play games.

    How much of that is due to the faster pipe? ...and how much to what
    AGP brings to the table? AGP brings nothing other than a faster
    pipe.
    >
    > > Good grief, you compare a stripped point-to-point connection (PCI
    > > cut to the bone, actually) to a cheap PCI 32/33 *BUS*
    > > implementation and then proclaim how wonderful it is. Sure AGP
    > > is faster than the cheapest PCI implementation. Was that your
    > > whole point?
    >
    > In that case, yes. Where were the alternatives to AGP that would
    > have provided the extra bandwidth, yet kept the characteristics
    > required to maintain backward compatibility AND do all that at
    > a minimal price point for both the vendor and customer? I didn't
    > see PCI Express or PCI-X leaping into the chipsets at the time.

    Backwards compatibility? AGP was compatible with exactly what?
    AGP was *designed* to simply allow the textures to be put in
    system memory. A *very* bad idea. Indeed, perhaps AGP put off
    better solutions many years.

    > As unclever or ugly as AGP may be, it has been an effective and
    > inexpensive solution for its vendors and customers.

    Bad ideas are often pushed on the consumer hard enough that there
    is no choice. I can think of many such bad ideas (some even
    worse than UMA and AGP). WinPrinters and WinModems come to mind.
    Intel was right in there on these too.

    I may have a tough spot in my soul for Intel, dreaming for what
    might have been (and technically possible), but you're a lackey
    for what is. I'm quite sure you don't treat M$ so kindly for
    *WHAT IS*.

    --
    Keith
  47. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:
    > In article <tf4oc.7727$ZX2.6238@newssvr24.news.prodigy.com>,
    > redelm@ev1.net.invalid says...
    >
    >>In comp.sys.ibm.pc.hardware.chips Robin KAY <komadori@myrealbox.com> wrote:
    >>
    >>>Why would you have your main processor(s) render a scene
    >>>when you have a dedicated graphics processor to do it?
    >>
    >>I think you're talking 3-D while Keith is talking 2-D.
    >
    >
    Nope. 3-D is no different. AGP wuz supposed to make the
    graphics channel two-way so the graphics card could access main
    memory. Do you know anyone that actually does this? Please!
    With 32MB (or 128MB) on the graphics card, who cares?

    Read some of the "optimising your game for a modern 3D card"
    presentations on the NVidia or ATI developer web sites. You want to
    decouple the CPU from the graphics card as much as possible, to
    eliminate "dead time" when the CPU waits for the card to finish
    something, or the card waits for more data. The card has lots of RAM on
    it, but the textures, vertex data, etc have to get into that RAM
    somehow... and some applications have more texture or vertex data than
    can efficiently fit into the card RAM. A 32MB card running at 1024x768,
    with 24-bit colour, 8-bit alpha, 24-bit Z, 8-bit stencil, double
    buffered, needs about 10MB of video RAM. Some games have more than 22MB
    of total textures these days, and some vertex data is dynamically
    generated for each frame. You need an efficient way to push the data up
    to the card without forcing either the CPU or the card to wait.
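
    For what it's worth, that ~10MB figure falls straight out of the
    arithmetic -- a minimal sketch, assuming double-buffered 32-bit
    colour+alpha plus a single shared 32-bit Z/stencil buffer:

        /* Rough framebuffer budget for 1024x768: RGBA8 colour (double
         * buffered) plus a 24-bit Z / 8-bit stencil buffer. */
        #include <stdio.h>

        int main(void)
        {
            const long pixels         = 1024L * 768L;
            const long color_bytes    = 4;  /* 24-bit colour + 8-bit alpha */
            const long zstencil_bytes = 4;  /* 24-bit Z + 8-bit stencil    */

            long total = pixels * (2 * color_bytes + zstencil_bytes);

            printf("framebuffer: %.1f MB\n", total / (1024.0 * 1024.0));
            return 0;   /* ~9MB, i.e. the "about 10MB" quoted above,
                           leaving ~22MB of a 32MB card for textures */
        }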

    Having the card do bus mastering allows the CPU to set up a big DMA ring
    buffer for commands, which the card slurps from in a decoupled way, and
    the card can then also slurp texture and vertex data from other memory
    areas which are set up in advance by the CPU. There are special
    primitives which allow the CPU to coordinate this bus mastering activity
    so that they don't step on each other's data, while maintaining as much
    concurrency as possible.
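
    A minimal sketch of that ring-buffer idea (the names and layout here
    are hypothetical rather than any particular vendor's interface --
    real drivers add doorbell registers, fences and memory barriers):

        #include <stdint.h>

        #define RING_ENTRIES 4096            /* power of two */

        struct cmd_ring {
            uint32_t          cmd[RING_ENTRIES];
            volatile uint32_t head;          /* written by the CPU        */
            volatile uint32_t tail;          /* advanced by the card's    */
        };                                   /* DMA engine as it consumes */

        /* CPU side: queue a command and carry on.  The only time the
         * CPU has to wait is when the ring is completely full. */
        static int ring_submit(struct cmd_ring *r, uint32_t c)
        {
            uint32_t next = (r->head + 1) & (RING_ENTRIES - 1);
            if (next == r->tail)
                return -1;                   /* full: caller may retry */
            r->cmd[r->head] = c;
            r->head = next;                  /* card slurps up to here */
            return 0;
        }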

    So that's the motivation for the card doing bus mastering. AGP brings
    two extra things to the picture: higher speed than commodity PCI, and a
    simple IOMMU, which gives the graphics card a nice contiguous DMA
    virtual address space that maps onto (potentially) scattered 4K blocks
    of memory.
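
    That IOMMU (the AGP "GART") is conceptually just a small page table.
    A purely illustrative sketch of the remapping:

        #include <stdint.h>

        #define PAGE_SHIFT 12                      /* 4K pages */
        #define PAGE_MASK  ((1u << PAGE_SHIFT) - 1)

        struct gart {
            uint64_t *pte;    /* pte[i] = physical base of aperture page i */
            uint32_t  npages;
        };

        /* Turn an offset into the contiguous AGP aperture (what the card
         * emits) into the scattered physical address behind it. */
        static uint64_t gart_translate(const struct gart *g, uint32_t off)
        {
            uint32_t page = off >> PAGE_SHIFT;
            if (page >= g->npages)
                return 0;                          /* fault in real hardware */
            return g->pte[page] | (off & PAGE_MASK);
        }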

    >>In 3-D there's simply too much drudge work (shading,
    >>perspective) and not enough interaction back to the control
    >>program to need or want the CPU. 2-D is much simpler and often
    >>requires considerable CPU interactivity (CAD) with the display.
    >
    >
    > Sure, so why does the 3-D card want to go back to main memory,
    > again? The graphics pipe is amazingly one-directional. ...and
    > thus not sensitive to latency, any more than in human terms.

    Exactly. By using bus mastering, you let the CPU and card work in
    parallel, at the expense of increased latency for certain operations.
    Reading back the frame buffer contents in a straightforward way (i.e.
    with core OpenGL calls) is a really great way to kill your frame rate in
    3D games, because you cause all the rendering hardware to grind to a
    halt while the frame buffer data is copied back. The graphics card
    vendors really, really want you to use their decoupled "give us a lump
    of memory and we'll DMA the frame buffer data back when it's finished
    baking, meanwhile keep feeding me data!" OpenGL extensions to do this.
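
    One widely supported way to do exactly that is the pixel-buffer-object
    extension -- a hedged sketch, assuming GL_ARB_pixel_buffer_object is
    available and the buffer object `pbo` has already been created and
    sized for the frame:

        #define GL_GLEXT_PROTOTYPES
        #include <GL/gl.h>
        #include <GL/glext.h>
        #include <string.h>

        /* Kick off an asynchronous readback: glReadPixels into a bound
         * pack buffer returns immediately, and the driver DMAs the pixels
         * back while the CPU keeps feeding the card. */
        void start_readback(GLuint pbo, int w, int h)
        {
            glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
            glReadPixels(0, 0, w, h, GL_BGRA, GL_UNSIGNED_BYTE, (void *)0);
            glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
        }

        /* A frame or two later, map the buffer and copy the data out --
         * by then the DMA has (hopefully) finished and nothing stalls. */
        void finish_readback(GLuint pbo, int w, int h, void *dst)
        {
            glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, pbo);
            void *src = glMapBufferARB(GL_PIXEL_PACK_BUFFER_ARB, GL_READ_ONLY_ARB);
            if (src) {
                memcpy(dst, src, (size_t)w * h * 4);
                glUnmapBufferARB(GL_PIXEL_PACK_BUFFER_ARB);
            }
            glBindBufferARB(GL_PIXEL_PACK_BUFFER_ARB, 0);
        }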

    -Jason
  48. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    KR Williams wrote:

    [SNIP]

    > Backwards compatibility? AGP was compatible with exactly what?

    Compatibility with pre-AGP software.

    > AGP was *designed* to simply allow the textures to be put in
    > system memory. A *very* bad idea. Indeed, perhaps AGP put off
    > better solutions many years.

    If you consider putting shitloads of RAM onto the graphics card
    a solution, I don't think it slowed that down at all. What it did
    enable was low-cost solutions *at the time it came out* -- the kind
    of solutions that would suit kiddies who would break their piggy
    bank to play a game.

    [SNIP]

    > I may have a tough spot in my soul for Intel, dreaming for what
    > might have been (and technically possible), but you're a lackey

    OK, I'll bite. What might have been when AGP was first mooted?

    > for what is. I'm quite sure you don't treat M$ so kindly for

    In the context of this discussion your assertion that I'm a "lackey
    for what is" is wrong anyway. It flatly ignores my preference, which
    is to render into main memory and DMA the framebuffer to the RAMDAC.
    Nice and simple, lots of control for the programmer. However, I do
    recognise this is not a good solution right now because of the way
    the hardware is structured and the design trade-offs.

    > *WHAT IS*.

    I never have liked MS stuff to be honest. Never liked x86s either,
    but on the other hand Intel contributed heavily to PCI and on
    balance I think that has been a valuable contribution to the
    industry as a whole.

    Cheers,
    Rupert
  49. Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch (More info?)

    In article <1084811783.515695@teapot.planet.gong>, roo@try-
    removing-this.darkboong.demon.co.uk says...
    > KR Williams wrote:
    >
    > [SNIP]
    >
    > > Backwards compatibility? AGP was compatible with exactly what?
    >
    > Compatibility with pre-AGP software.

    Come on. That's trivial for any port. Map the addresses in the
    same range and go for it.
    >
    > > AGP was *designed* to simply allow the textures to be put in
    > > system memory. A *very* bad idea. Indeed, perhaps AGP put off
    > > better solutions many years.
    >
    > If you consider putting shitloads of RAM onto the graphics card
    > a solution I don't think it slowed that down at all. What it did
    > enable was low-cost solutions *at the time it came out*, the kind
    > of solutions that would suit kiddies who would break their piggy
    > bank to play a game.

    That's *precisely* what I advocated at the time. Memory sizes
    grew (and costs fell) to where this was not only possible, but
    mandatory, at the same time AGP became available. Indeed, the only
    things that used AGP (system-memory-resident) textures were Intel
    demos. Impressive, but hardly useful. Graphics cards have
    outstripped AGP usage ever since.
    >
    > [SNIP]
    >
    > > I may have a tough spot in my soul for Intel, dreaming for what
    > > might have been (and technically possible), but you're a lackey
    >
    > OK, I'll bite. What might have been when AGP was first mooted ?

    The first day it was shipped in a product. Graphics cards were
    even then shipping with more (texture) memory than the games of
    the day were using. It was a *bad* idea, much like UMA. Memory
    is and was cheap. 32MB cards were normal then, and 128MB cheap
    now. There would be even more memory on graphics cards if there
    were a reason. Like I said earlier, even my 2D card has 32MB.

    > > for what is. I'm quite sure you don't treat M$ so kindly for
    >
    > In the context of this discussion your assertion of being a "lackey
    > for what is" is wrong anyway. It flatly ignores my preference which
    > is render into main memory and DMA the framebuffer to the RAMDAC.

    Oh, my! No wonder we disagree so much. I have *NO* interest in
    bottling up main memory with such trivia. I'm sure you liked UMA
    too. Let me ask you: do you have an integrated UMA graphics
    controller on your system?

    > Nice and simple, lots of control for the programmer. However I do
    > recognise this is not a good solution right now because of the way
    > the hardware is structured and the design trade-offs.

    It is a *horrible* idea. It puts too much stress on the exact
    wrong area of the system. UMA not only affects memory bandwidth,
    but latency. I'd rather not give up either and throw it all at
    the processor. Perhaps it's because I know what's possible in
    hardware and you're simply dreaming of a perfect world (again).

    > > *WHAT IS*.
    >
    > I never have liked MS stuff to be honest. Never liked x86s either,
    > but on the other hand Intel contributed heavily to PCI and on
    > balance I think that has been a valuable contribution to the
    > industry as a whole.

    I'm not a PCI fan either, but it is what is. I've gotten over my
    anger at stupid marketing and have learned to accept the
    inevitable (and have even designed to it, though it's
    unnecessarily ugly).

    I've never had an issue with x86. I even liked segmentation,
    unless I had to do huge data structures. :-( ...which was rare
    as a hardware type. :-)

    Amazing the difference in perspective between hardware wonks and
    software weenies. ;-)


    --
    Keith