PCI-Express over Cat6

Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
<news.tally.bbbl67@spamgourmet.com> wrote:

>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
>
> Yousuf Khan

<yawn>

While the world pushes tighter integration, who does TI think is going to pile
on to a proprietary way to split a system into chunks?

I love the bit about remoting HID devices.
Yeah, there's a high-throughput market to exploit...

/daytripper (everything dumb is new again in Texas ;-)
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

daytripper wrote:
> On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
>
>>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
>
> <yawn>
>
> I love the bit about remoting HID devices.
> Yeah, there's a high-throughput market to exploit...

Maybe it's for the really, really, really fast typists? :)

I guess in their haste to get a press release out, they forgot that this
sort of job is already done by USB?

I'm sure they have much more important ideas in mind behind it, but none
of which really excites or matters to typical home users. Things like
clustering interconnects or remote storage devices.

> While the world pushes tighter integration, who does TI think is
> going to pile on to a proprietary way to split a system into chunks?

As I said, maybe they have some really big ideas; they just weren't smart
enough to make them sound exciting in a press release. :)

Yousuf Khan
 

Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Yousuf Khan" <news.20.bbbl67@spamgourmet.com> wrote in message
news:jjkmc.415829$2oI1.158440@twister01.bloor.is.net.cable.rogers.com...
> daytripper wrote:
> > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
> >
> >>
>
http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
> >
> > <yawn>
> >
> > I love the bit about remoting HID devices.
> > Yeah, there's a high-throughput market to exploit...
>
> Maybe it's for the really, really, really fast typers? :)
>
> I guess in their haste to get a press release out they forgot that this
sort
> of job is already done by USB?
>
> I'm sure they have much more important ideas in mind behind it, but none
of
> which really excite nor matter to typical home users. Things like
clustering
> interconnects or remote storage devices.
>
> > While the world pushes tighter integration, who does TI think is
> > going to pile on to a proprietary way to split a system into chunks?
>
> As I said, maybe they have some really big ideas, they just weren't smart
> enough to make it sound exciting on a press release. :)
>
> Yousuf Khan
>
>

Maybe we could one day get little modules with just a CPU and a
Gig-Ethernet port to add extra processing power.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

daytripper wrote:
> On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
> <news.tally.bbbl67@spamgourmet.com> wrote:
>
>
>>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
>>
>
>
> <yawn>
>
> While the world pushes tighter integration, who does TI think is going to pile
> on to a proprietary way to split a system into chunks?
>

What can be used to take apart can also be used to put together. What
TI has done seems like some version of the I/O that Intel was pushing...
only it's not Intel silicon, just like InfiniBand isn't Intel silicon.
How will Intel react to this one: cut loose PCI-Express?

I've crossposted to comp.arch to see if I can't attract comments about
how real this is and what effects it might have outside the Intel/PC
marketplace.

RM
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

"Robert Myers" <rmyers@rustuck.com> wrote in message
news:RIrmc.32533$_41.2657354@attbi_s02...
> daytripper wrote:
> > On Wed, 05 May 2004 16:10:36 GMT, "Yousuf Khan"
> > <news.tally.bbbl67@spamgourmet.com> wrote:
> >
> >
>
>>http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1
K0000532
> >>
> >
> >
> > <yawn>
> >
> > While the world pushes tighter integration, who does TI think is going
to pile
> > on to a proprietary way to split a system into chunks?
> >
>
> What can be used to take apart can also be used to put together. What
> TI has done seems like some version of I/O that Intel was pushing...only
> it's not Intel silicon, just like Infiniband isn't Intel silicon. How
> will Intel react to this one: cut loose PCI-Express?
>
> I've crossposted to comp.arch to see if I can't attract comments about
> how real this is and what effects if might have outside the Intel/PC
> marketplace.
>
> RM

They are sending a 1X PCI Express link over 4 pairs of Cat6, which is
better cable than the Cat5 that 1000baseT uses.
They don't say how long the cable can be. Ethernet runs 50 to 100 meters;
2-5 meters is a lot easier.
PCI Express is 2.5 Gb/s on the wire; GigE is 1.25 Gb/s on the wire.
The PCI Express group is working on cabling extensions, and Intel is big
on it, so why would this make them upset?
A 1X PCI Express link is roughly equivalent to a 66 MHz 32-bit PCI slot,
or maybe to 66 MHz by 64 bits, since the link is full duplex.
Many folks don't like to open the box to add stuff to their computer. This
is an alternative to things like FireWire and USB2 as a way to add stuff.
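
A quick back-of-the-envelope check of those numbers - just a sketch,
assuming PCI-E 1.x's 8b/10b line coding and simple peak-rate arithmetic:

# Rough check: 1X PCI Express vs. a 66 MHz / 32-bit PCI slot.
pcie_raw = 2.5e9                  # bits/s on the wire, each direction
pcie_payload = pcie_raw * 8 / 10  # 8b/10b coding leaves ~2.0 Gb/s payload
pci_66_32 = 66e6 * 32             # ~2.1 Gb/s peak, shared and half-duplex
print(f"PCI-E 1X payload: {pcie_payload / 1e9:.2f} Gb/s each way")
print(f"PCI 66/32 peak:   {pci_66_32 / 1e9:.2f} Gb/s total")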

del cecchi
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Yousuf Khan" <news.20.bbbl67@spamgourmet.com> wrote in message
news:U9kmc.415731$2oI1.408879@twister01.bloor.is.net.cable.rogers.com...
> Judd wrote:
> > "Yousuf Khan" <news.tally.bbbl67@spamgourmet.com> wrote in message
> >>
> >
>
http://www.extremetech.com/article2/0,1558,1585024,00.asp?kc=ETRSS02129TX1K0000532
> >
> > Nice... too bad all of our cabling is Cat5.
>
> Well, I'm sure the motherboard makers will provide you with some Cat6 to
> connect your PCI-E devices remotely with. :)
>

I'm thinking office... not so much motherboard. Its application could be
far-reaching from an office standpoint, but the cabling would need to be
upgraded infrastructure-wise.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Distributed systems, distributed redundant systems, or perhaps there
are some really hot, noisy graphics cards planned that cook in a BTX
environment and thus need to be relocated to a different... building :)

USB is a nice idea - but its implementation seems somewhat variable,
with reliability issues from chipsets to firmware. HDs can vanish on you,
scanners can stop working, printers can sometimes refuse to be seen.
Self-power seems particularly marginal, with blown fuses or pico-fuse
resets.

Latency could be interesting tho - Myrinet isn't exactly cheap.

The IT industry seems to be creating a lot of Beta-vs-VHS right now.
--
Dorothy Bradbury
www.stores.ebay.co.uk/panaflofan for fans, books & other items
http://homepage.ntlworld.com/dorothy.bradbury/panaflo.htm (Direct)
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

"Dorothy Bradbury" <dorothy.bradbury@ntlworld.com> wrote in message
news:mozmc.195$wA1.29@newsfe2-gui.server.ntli.net...
> Distributed systems, distributed redundant systems, or perhaps there
> are some really hot noisy graphics cards planned that cook in BTX
> environment and thus need to be relocated to a different... building
:)
>
> USB is a nice idea - but it's implementation seems somewhat variable,
> with reliability issues from chipsets to firmware. HDs can vanish on
you,
> scanners can stop working, printers can sometimes refuse to be seen.
> Self power seems particularly marginal with blown or pico fuse resets.
>
> Latency could be interesting tho - Myrinet isn't exactly cheap.
>
> IT industry seems to be creating a lot of Beta v VHS right now.
> --
> Dorothy Bradbury
> www.stores.ebay.co.uk/panaflofan for fans, books & other items
> http://homepage.ntlworld.com/dorothy.bradbury/panaflo.htm (Direct)
>
Unless they pretty radically change (extend) the PCI Express physical
layer, and probably some of the architecture too, across the room is
about what you can hope for. And the room better not be too big.

del cecchi

PS Implementations are always variable, unless there is only one.
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

On Thu, 6 May 2004 20:57:07 -0500, "del cecchi" <dcecchi.nojunk@att.net>
wrote:

>[snip Dorothy's USB reliability points]
>
>Unless they pretty radically change (extend) the PCI Express physical
>layer, and probably some of the architecture too, across the room is
>about what you can hope for. And the room better not be too big.

Well, hell, even I will give them more credit than that. There's no real
need to change the PCI Express architecture to do what TI's (probably)
doing: just send an n-bit wide link to a bridge device and you're good to
go nuts bolting on devices until you've squeezed that link to the last bps.

Physical layer changes are likely quite modest - just enough to get them a
patent of some kind (the article did imply it was somehow proprietary).
otoh, "proprietary" is unlikely to fly far as an io interconnect. Nobody
likes paying tribute, and afaict there's no obvious need to stray from the
soon-to-be-well-trod path to build rather large systems full of IO devices.
(PCI-X Mode 2 is an utter non-starter now - Intel is likely going to
quietly let it die without ever selling a product with it - sending the
hordes directly to PCI Express.)

As for using TI's little scheme for desktop/HID devices instead of USB: it
is to laugh. USB 2.0 high-speed mode is way overkill for HIDs as it is,
it's open and cheap to implement, it brings (modest) power to the devices
(not mentioned in this Cat6 scheme), and from a fair-sized (but admittedly
not huge) sample of diverse USB 1 & 2 devices in our labs, it appears
quite mature (finally, yes ;-)

>PS Implementations are always variable, unless there is only one.

lol

Still, not quite as humorous as using "lower latency" in the same sentence
with "HID devices"...

/daytripper
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Dorothy Bradbury wrote:
> Distributed systems, distributed redundant systems, or perhaps there
> are some really hot, noisy graphics cards planned that cook in a BTX
> environment and thus need to be relocated to a different... building :)

I can also see a fairly interesting home use for PCI-E over Cat6: home
theatre applications. Sending the Dolby/DTS sound *and* the HDTV video
over the same wire, basically. :)
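
A quick sanity check that this would even fit in a 1X link's ~2 Gb/s of
payload. The rates are standard figures (uncompressed HD-SDI, Dolby
Digital's top bitrate), but the pairing is just an illustration:

# Would uncompressed HD video plus 5.1 audio fit on one 1X PCI-E link?
hd_sdi = 1.485e9   # bits/s, uncompressed 1080i HD-SDI (SMPTE 292M)
dolby = 640e3      # bits/s, Dolby Digital 5.1 at its maximum rate
print(hd_sdi + dolby < 2.0e9)   # True: fits, with headroom to spare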

> USB is a nice idea - but its implementation seems somewhat variable,
> with reliability issues from chipsets to firmware. HDs can vanish on
> you, scanners can stop working, printers can sometimes refuse to be
> seen. Self-power seems particularly marginal, with blown fuses or
> pico-fuse resets.

But more than good enough for an HID interface.

> The IT industry seems to be creating a lot of Beta-vs-VHS right now.

Intel was even bellowing about trying to combine USB and WiFi to form
Wireless USB, which it expects will take on Bluetooth, except faster and
over greater distances.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

"Judd" <IhateSpam@stopspam.com> wrote in message
news:109m6p15srctua2@corp.supernews.com...
> > Well, I'm sure the motherboard makers will provide you with some Cat6 to
> > connect your PCI-E devices remotely with. :)
> >
>
> I'm thinking office... not so much motherboard. Its application could be
> far-reaching from an office standpoint, but the cabling would need to be
> upgraded infrastructure-wise.

Despite the fact that it's Cat6 wire, I seriously doubt it will go the
distances that you can typically take Ethernet out to. It's likely only
using Ethernet's cabling without the actual Ethernet protocol, and they're
likely going to limit the distances the cable can run in this application.

Yousuf Khan
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

In comp.sys.ibm.pc.hardware.chips Yousuf Khan <news.tally.bbbl67@spamgourmet.com> wrote:
> Despite the fact that it's Cat6 wire, I seriously doubt it will go the
> distances that you can typically take Ethernet out to.

Bingo! Ethernet twisted-pair wire isn't so special; it's more
the balanced signalling used: Signal+ paired with Signal-.

Ethernet 100baseTX was running 100 MHz across 100m of wild
country at a time when motherboard designers had trouble with
running 50 MHz across 20 cm of multi-layer PCB. But mobo
signals aren't balanced, and that gives all sorts of problems.
Balancing the signals would double the AC pin count.

AFAIK, no parallel PCI variant uses balanced signalling, so it
really won't benefit from Cat6. IIRC, there was an oddball SCSI
that did use balanced signals.
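
A toy illustration of the balanced-pair point: noise that couples equally
onto both wires cancels at the receiver. The numbers are made up:

# Balanced signalling: the receiver takes the difference of the pair.
signal = 1.0
noise = 0.3              # same interference couples onto both wires
plus = +signal + noise   # Signal+ as seen at the far end
minus = -signal + noise  # Signal- as seen at the far end
print(plus - minus)      # 2.0: the common-mode noise cancels out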

-- Robert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Robert Myers wrote:

> I dunno. I know even less about expansion bus protocols than I do about
> most other things. Is there anything you can do with any available
> out-of-the-box interconnect that you can't do with lower latency using
> PCI-Express? Limited bandwidth and distance, to be sure, but how could
> you beat the latency?

Most of the end-to-end latency these days comes from the PCI bus and the
link (SerDes + distance), so PCI-Express-only would not save much. As
Del noted, Intel is working on switching extensions to PCI-Express,
but the PCI-Express protocol is not really designed for that: flow
control is very tight, as you would expect on a very short
point-to-point connection. If PCI-Express wants to go outside the box,
it will have to deal with some tough problems with flow control.
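
A rough sizing sketch of why distance stresses tight, credit-based flow
control. The 5 ns/m propagation figure and the credit size are assumptions,
and real links add SerDes and logic latency on top:

import math

def credits_needed(length_m, rate_bps=2.0e9, credit_bytes=64):
    # Outstanding credits must cover a round trip's worth of data,
    # or the sender stalls waiting for credit returns.
    rtt = 2 * length_m * 5e-9       # ~5 ns per meter in copper
    in_flight = rate_bps / 8 * rtt  # bytes in flight at full rate
    return max(1, math.ceil(in_flight / credit_bytes))

for m in (0.2, 10, 100, 1000):
    print(f"{m:>6} m -> {credits_needed(m)} credits")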

Patrick
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

> AFAIK, no parallel PCI variant uses balanced signalling, so it
> really won't benefit from Cat6. IIRC, there was an oddball SCSI
> that did use balanced signals.

HVD did, I think, which could run very long distances.

The characteristics of Cat5/6 cable soon change if you abuse it,
so distance aside, I don't think it's for typical office environments.
--
Dorothy Bradbury
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel

<127.0.0.1@127.0.0.1> wrote in message news:2fupk1F2fhrkU1@uni-berlin.de...
> Maybe we could one day get little modules with just a CPU and a
> Gig-Ethernet port to add extra processing power.

Maybe something like this:
http://www.adlogic-pc104.com/products/cpu/pc104/datasheets/msm855.pdf
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

On Fri, 07 May 2004 13:29:51 -0400, Patrick Geoffray <patrick@myri.com> wrote:

>[snip]
>If PCI-Express wants to go outside the box,
>it will have to deal with some tough problems with flow control.

Respectfully, I disagree with that last sentence. Unless the mission is
redefined, PCI Express can certainly go outside the crate.

/daytripper (the question remains why one would do that...)
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
>
> IB and PCI-Express should be pretty comparable. (PCI express isn't out of
> the box yet)
> Ethernet with RDMA and hardware offload is in the same ballpark.
> Rapid I/O, Fibre Channel, are contenders depending on task.
>
> Is latency a big deal writing to a disk or graphics card?
>

It can easily be for a graphics card.

--
Sander

+++ Out of cheese error +++
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Patrick Geoffray <patrick@myri.com> wrote:
+---------------
| If PCI-Express wants to go outside the box,
| it will have to deal with some tough problems with flow control.
+---------------

Indeed. Quite a bit of the difference between "GSN" (a.k.a. the ANSI
HIPPI-6400 standard) and the SGI "XIO" (switched-fabric I/O to multiple
PCI busses) it was based on was the need to increase the low-level
retransmission buffers and sequence space to allow a potential 1 km
distance at full bandwidth[1], compared to the ~10 m permitted by XIO.
This added considerably to the die area of the PHY/PMD part.

Also note that at "only" 10 m range, XIO devices *already* needed
rather large retransmission buffers and sequence space...


-Rob

[1] Though note that un-repeatered GSN can still only go 30m in copper.
This is for electrical reasons, not flow-control window size.
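
The buffer growth follows from the bandwidth-delay product. A sketch using
HIPPI-6400's 6.4 Gb/s rate and an assumed ~5 ns/m copper delay; real
designs need more still, since ack processing adds to the round trip:

def retransmit_buffer_bytes(length_m, rate_bps=6.4e9):
    # The sender must hold everything sent during one round trip,
    # since nothing can be freed until it has been acknowledged.
    rtt = 2 * length_m * 5e-9    # ~5 ns per meter in copper
    return rate_bps / 8 * rtt

print(retransmit_buffer_bytes(10))    # ~80 bytes for XIO-ish reach
print(retransmit_buffer_bytes(1000))  # ~8000 bytes for GSN's 1 km goal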

-----
Rob Warnock <rpw3@rpw3.org>
627 26th Avenue <URL:http://rpw3.org/>
San Mateo, CA 94403 (650)572-2607
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:

> In comp.arch Del Cecchi <cecchinospam@us.ibm.com> wrote:
>> Is latency a big deal writing to a disk or graphics card?
>
> It can easily be for a graphics card.

Why? Aren't they write-only devices? Surely any latency
limitation is the 10ms or so of human perception, and anything in the
circuitry is neither here nor there at that scale.

--
Andrew
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Andrew Reilly wrote:
> On Fri, 07 May 2004 21:18:52 +0000, Sander Vesik wrote:
>
>>>Is latency a big deal writing to a disk or graphics card?
>>
>>It can easily be for a graphics card.
>
> Why? Aren't they write-only devices? Surely any latency

Off the top of my head, at least two requirements exist, namely
screenshots and flyback synchronisation...

Cheers,
Rupert
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:

> Andrew Reilly wrote:
>> Why? Aren't they write-only devices? Surely any latency
>
> Off the top of my head, at least two requirements exist, namely
> screenshots and flyback synchronisation...

Both of which appear, on the surface, to be frame-rate type events: i.e.,
in the ballpark of the 10ms event time that I mentioned in the part that
you snipped. Not a latency issue on the order of memory access or
processor cycle times...

[Don't graphics cards generate interrupts for flyback synchronization?]

--
Andrew
 
Archived from groups: comp.sys.ibm.pc.hardware.chips,comp.sys.intel,comp.arch

Andrew Reilly wrote:
> On Sun, 09 May 2004 03:23:59 +0100, Rupert Pigott wrote:
>
>>Off the top of my head, at least two requirements exist, namely
>>screenshots and flyback synchronisation...
>
> Both of which appear, on the surface, to be frame-rate type events: i.e.,
> in the ballpark of the 10ms event time that I mentioned in the part that
> you snipped. Not a latency issue on the order of memory access or

Hmmm, how about querying the state of an OpenGL rendering pipeline
that happens to be sitting on the graphics card? I don't think it's
ever been true to say GFX cards are write-only, and I'm not sure I'd
ever want that. :)
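
A toy model of why readbacks hurt: a state query is a full round trip over
the link before the answer comes back. All numbers here are made-up
illustrations, not measurements:

def readback_us(link_latency_us, payload_bytes=4, bw_mbps=2000):
    # One trip out with the query, one trip back with the result,
    # plus the (tiny) serialization time for the payload itself.
    return 2 * link_latency_us + payload_bytes * 8 / bw_mbps

print(readback_us(0.5))    # in-box PCI-E-ish link: ~1 microsecond
print(readback_us(10.0))   # remoted across a room: ~20 microseconds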

Cheers,
Rupert