Athlon 64 PCI Express chipsets?

June 23, 2004 10:15:35 PM

Archived from groups: comp.sys.ibm.pc.hardware.chips

Vague rumor has it that the first Athlon 64 PCI Express chipsets will
ship in Q3 sometime. Anyone know any more details than that?

Cheers!
Rob
June 23, 2004 10:15:36 PM

On Wed, 23 Jun 2004 18:15:35 GMT, Rob Jellinghaus
<robj@unrealities.com> wrote:
>
>Vague rumor has it that the first Athlon 64 PCI Express chipsets will
>ship in Q3 sometime. Anyone know any more details than that?

VIA, nVidia and SiS all have plans for some Athlon64 chipsets with PCI
Express. If my memory serves me correctly, they were all supposed to
release their products in May or June, though obviously those dates
just aren't going to happen. Q3 seems like a reasonable prediction
for the first chips, though I wouldn't expect much from either the
Intel-processor or AMD-processor systems until Q4 or even Q1 of 2005.

PCI Express really seems to be a solution searching for a problem at
the moment, so I doubt that it will be quick to catch on. At best it
just seems like a way to unify AGP, PCI, CSA and maybe even PCI-X into
a single bus, eventually making things cheaper (though probably not
any better/faster). Of course, the newness factor will keep it expensive
for 6-8 months, which is why I doubt we'll see much PCI
Express action this year.

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
June 24, 2004 6:04:11 AM

On Wed, 23 Jun 2004 15:33:43 -0400, Tony Hill <hilla_nospam_20@yahoo.ca>
wrote:

>VIA, nVidia and SiS all have plans for some Athlon64 chipsets with PCI
>Express. If my memory serves me correctly, they were all supposed to
>release their products in May or June, though obviously those dates
>just aren't going to happen. Q3 seems like a reasonable prediction
>for the first chips, though I wouldn't expect much from either the
>Intel-processor or AMD-processor systems until Q4 or even Q1 of 2005.
>
>PCI Express really seems to be a solution searching for a problem at
>the moment, so I doubt that it will be quick to catch on. At best it
>just seems like a way to unify AGP, PCI, CSA and maybe even PCI-X into
>a single bus, eventually making things cheaper (though probably not
>any better/faster). Of course, the newness factor will keep it expensive
>for 6-8 months, which is why I doubt we'll see much PCI
>Express action this year.

There are sound reasons why PCI Express is a solution for *existing* problems,
made all the better once you factor in cost/performance. And given that PCI-X
Mode 2 is an utter non-starter, the parallel PCI bus paradigm was quickly
running out of gas anyway. Time for a paradigm shift.

PCI Express in its *current* incarnation whups PCI-X Mode 1's ass, never mind
PCI-E 2.0 or beyond. But the cost of connectivity is where the rubber hits the
road.

How many PCI-X devices can you hang on a bus? Not many.
So how do you make more PCI or PCI-X buses?
Bridges and pins. Lots and lots of pins, and conceivably, many bridges.

otoh, PCI Express allows you to dial up some prodigious bandwidth using very
few pins, which can dramatically cut down the number of silicon chunks on the
board.
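The pins-versus-bandwidth argument can be put in rough numbers. A sketch using commonly cited peak figures for the buses under discussion; pin counts are approximate signal pins only (no power/ground), so treat the ratios as ballpark:

```python
# Rough bandwidth-per-pin comparison, 2004-era buses.
# Figures are commonly cited peaks; pin counts are approximate signal
# pins only, so the per-pin ratios are ballpark, not spec values.
buses = {
    # name:               (signal pins, peak MB/s)
    "PCI 32b/33MHz":      (50,  133),   # shared parallel bus
    "PCI-X 64b/133MHz":   (90, 1066),   # shared parallel bus
    "PCIe 1.0 x1":        (4,   250),   # 2 diff pairs, per-direction rate
    "PCIe 1.0 x16":       (64, 4000),   # per-direction rate
}

for name, (pins, mbs) in buses.items():
    print(f"{name:18} {pins:3} pins  {mbs:5} MB/s  ~{mbs / pins:5.1f} MB/s per pin")
```

The per-pin numbers are what drive the board-level point: a serial x1 link delivers roughly twenty times the bandwidth per pin of the parallel buses it replaces.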

fwiw, I happened to get a tour of a new HP dual Xeon rack mount box recently.
Had 6 64b PCI-X slots and a single PCI slot, plus the usual assortment of
embedded PCI devices (graphics, network, server management, legacy IO, etc).
There were 3 PCI-X bridge chips and a PCI bridge chip, on top of the host
bus/memory bridge. Many bridges, many many pins.

otoh, a 3-chip set of a low-end MCH, a PXH, and a 31154 would provide at least
two PCI Express slots, three 64b/100MHz PCI-X slots, a spot to hang a
64b/100MHz dual gigabit chip, a couple of 64b/33MHz PCI slots and a hose for
the integrated SATA, ATA133, VGA, USB2 and server management devices. If you
needed more PCI Express slots, you could use the beaucoup deluxe version of the
MCH instead. And they'd both be a hell of a lot easier to route.

I'd rather do PCI Express designs.
They're easier, they're cheaper. What's not to like? ;-)

And I suspect the PCI Express market adoption speed will surprise many...

/daytripper
June 29, 2004 1:37:26 AM

On Thu, 24 Jun 2004 02:04:11 GMT, daytripper
<day_trippr@REMOVEyahoo.com> wrote:
>There are sound reasons why PCI Express is a solution for *existing* problems,

Perhaps I should have qualified that I was referring to desktops and
workstations here, i.e. the sorts of systems that will run Athlon64
processors and the sorts of systems that are being targeted (right
now) by PCI-E.

Given that restriction, just what problems does PCI Express solve?
Graphics with AGP 8x is not particularly bandwidth-limited; it's quite
rare that 8x makes any improvement at all over 4x, let alone going
beyond that (in a few years this will change, of course). Gigabit
ethernet is one potential area of concern, but it's been pushed onto
CSA for Intel solutions, and even hanging off the PCI bus it really
isn't all that bad. Hard drive controllers have been pulled off the
PCI bus. So what does that leave? Sound, USB and the legacy stuff,
for the most part.
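For scale, the peak rates behind that claim, using commonly cited nominal figures (PCIe rate is per direction, after 8b/10b encoding):

```python
# Peak graphics-interface transfer rates (commonly cited nominal figures).
rates_mbs = {
    "AGP 4x":       1066,   # 32 bits at 4 x 66 MHz
    "AGP 8x":       2133,   # 32 bits at 8 x 66 MHz
    "PCIe 1.0 x16": 4000,   # per direction, after 8b/10b encoding
}

# If 8x rarely beats 4x in practice, the extra headroom of a x16 link
# matters only once workloads actually get that far.
headroom = rates_mbs["PCIe 1.0 x16"] / rates_mbs["AGP 8x"]
print(f"PCIe x16 vs AGP 8x: {headroom:.2f}x peak bandwidth")
```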

>made all the better once you factor in cost/performance. And given that PCI-X
>Mode 2 is an utter non-starter, the parallel PCI bus paradigm was quickly
>running out of gas anyway. Time for a paradigm shift.
>
>PCI Express in its *current* incarnation whups PCI-X Mode 1's ass, never mind
>PCI-E 2.0 or beyond. But the cost of connectivity is where the rubber hits the
>road.
>
>How many PCI-X devices can you hang on a bus? Not many.
>So how do you make more PCI or PCI-X buses?
>Bridges and pins. Lots and lots of pins, and conceivably, many bridges.
>
>otoh, PCI Express allows you to dial up some prodigious bandwidth using very
>few pins, which can dramatically cut down the number of silicon chunks on the
>board.

Long-term cost definitely does favor PCI Express, and that is why I
think it will eventually be a good thing. However, the short-term cost
just isn't there because of the "first-on-the-block" factor. I don't
see any reason to rush out after PCI Express this year.

>fwiw, I happened to get a tour of a new HP dual Xeon rack mount box recently.
>Had 6 64b PCI-X slots and a single PCI slot, plus the usual assortment of
>embedded PCI devices (graphics, network, server management, legacy IO, etc).
>There were 3 PCIX bridge chips and a PCI bridge chip, on top of the host
>bus/memory bridge. Many bridges, many many pins.

And under a normal configuration, how many of those slots are actually
used? If it's a big disk box it might have a couple of RAID array
cards in there and probably two gigabit ethernet ports. What else
needs the bandwidth?

>otoh, 3-chip set of a low-end MCH, a PXH, and a 31154 would provide at least
>two PCI Express slots, three 64b/100mhz PCI-X slots, a spot to hang a
>64b/100mhz dual gigabit chip, a couple of 64b/33mhz PCI slots and a hose for
>the integrated SATA, ATA133, VGA, USB2 and server management devices. If you
>needed more PCI Express slots, you can use the beau coup deluxe version of the
>MCH instead. And they'd both be a hell of a lot easier to route.
>
>I'd rather do PCI Express designs.
>They're easier, they're cheaper. What's not to like? ;-)

What's not to like is that there are virtually no PCI Express cards
out there and the first ones that show up carry a price-premium (as
will the first boards and systems).

-------------
Tony Hill
hilla <underscore> 20 <at> yahoo <dot> ca
July 2, 2004 4:22:35 PM

> Given that restriction, just what problems does
> PCI Express solve?

For the normal home user, not much. For the server
or pro workstation user, it helps.

PCI-E is probably cheaper than PCI-X for very
high speed connects like 10 Gbit LAN. Parallel
PCI is as wide as it will ever get, and may be
at the practical/economic limit for clock.
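A quick back-of-envelope supports the 10 Gbit point. This sketch assumes PCIe 1.0's 250 MB/s per lane per direction and PCI-X Mode 1 at 64 bits / 133 MHz:

```python
import math

# Can a 2004-era bus feed a full-rate 10 Gbit/s NIC?
link_mbs      = 10_000 / 8      # 10 Gbit/s -> 1250 MB/s
pcie_lane_mbs = 250             # PCIe 1.0, per lane per direction
pcix_peak_mbs = 64 / 8 * 133    # PCI-X 64b/133: just over 1 GB/s, shared

lanes = math.ceil(link_mbs / pcie_lane_mbs)   # 5 lanes' worth of traffic
width = 2 ** math.ceil(math.log2(lanes))      # widths come in powers of 2 -> x8
print(f"10GbE wants {link_mbs:.0f} MB/s -> a x{width} PCIe slot")
print(f"PCI-X 64/133 tops out at {pcix_peak_mbs:.0f} MB/s, shared by the whole bus")
```

So a single x8 slot covers the NIC with headroom, while even a dedicated PCI-X 64/133 segment falls short of line rate.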

PCI-E solves a problem for multi-display
workstations, which today either need really
overloaded single AGP cards, or helper cards
in PCI slots. AGP is a slot, not a bus.

I haven't seen a detailed PCI-E spec. Does it
add any more power than AGP and PCI allow? Many
AGP-Pro cards end up having to steal power from
an HDD connector, or even jack it in through the
bulkhead from an external AC adaptor.

> What's not to like is that there are virtually
> no PCI Express cards out there and the first ones
> that show up carry a price-premium (as
> will the first boards and systems).

And there probably won't be any performance benefit
for the first-generation stuff, which is typical
for bus transitions. Anyone considering a new
machine over the next 6 months may want to ignore
PCI-Express, unless they like being an unpaid
Gamma-tester.

--
Regards,
Bob Niland
email4rjn AT yahoo DOT com
http://www.access-one.com/rjn

July 2, 2004 5:10:15 PM

Tony Hill wrote:

> PCI Express really seems to be a solution searching for a problem at
> the moment, so I doubt that it will be quick to catch on. At best it
> just seems like a way to unify AGP, PCI, CSA and maybe even PCI-X into
> a single bus, eventually making things cheaper (though probably not
> any better/faster). Of course, the new-factor will keep it expensive
> for a 6-8 months, hence the reason why I doubt we'll see too much PCI
> Express action this year.

NVIDIA plans to use PCI Express to pair two GPUs.

http://anandtech.com/video/showdoc.html?i=2097
http://techreport.com/etc/2004q2/nvidia-sli/
July 3, 2004 4:58:08 AM

On 2 Jul 2004 12:22:35 -0700, email4rjn@yahoo.com (Bob Niland) wrote:
[snipped]
>I haven't seen a detailed PCI-E spec. Does it
>add any more power than AGP and PCI allow? Many
>AGP-Pro cards end up having to steal power from
>an HDD connector, or even jack it in through the
>bulkhead from an external AC adaptor.

No real gain on the play: standard full-height PCIe 1.0a x1 through x16 cards
have a 25W limit, however x16 graphics cards are allowed to transition from a
25W limit at power-on to a 60W limit once the card is configured as a "high
power" device...

fwiw, the connector system has our power engineers in a collective lather...

/daytripper
July 3, 2004 2:35:44 PM

daytripper <day_trippr@REMOVEyahoo.com> wrote:
: On 2 Jul 2004 12:22:35 -0700, email4rjn@yahoo.com (Bob Niland) wrote:
: [snipped]
:: I haven't seen a detailed PCI-E spec. Does it
:: add any more power than AGP and PCI allow? Many
:: AGP-Pro cards end up having to steal power from
:: an HDD connector, or even jack it in through the
:: bulkhead from an external AC adaptor.
:
: No real gain on the play: standard full-height PCIe 1.0a x1 through
: x16 cards have a 25W limit, however x16 graphics cards are allowed to
: transition from a 25W limit at power-on to a 60W limit once the card
: is configured as a "high power" device...
:
: fwiw, the connector system has our power engineers in a collective
: lather...

Ok, I'll bite. Why?

J.
(always curious when daytrip drops these little cliffhangers)
July 4, 2004 3:25:23 AM

On Sat, 3 Jul 2004 10:35:44 +0200, "jack" <jack@ibm.com> wrote:

>Ok, I'll bite. Why?
>
>J.
>(always curious when daytrip drops this little cliffhangers)

There are a total of 3 pins for 3.3V, from the x1 connector through the x16
connector. That's a scant 3A at the full contact rating, but in our segment we
normally derate similar connector contact ratings by 50% - which would make
25W per slot a technical fantasy...

/daytripper (Other than that, not much to worry about ;-)
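The arithmetic behind that "technical fantasy" remark, as a sketch using the figures daytripper gives (3A full contact rating across the three 3.3V pins, derated 50%):

```python
# 3.3V power available from a PCIe slot under the derating described above.
amps_full = 3.0   # full contact rating, all three 3.3V pins combined
derate    = 0.5   # typical connector derating in this market segment
volts     = 3.3

usable_w = amps_full * derate * volts
print(f"usable 3.3V power per slot: ~{usable_w:.2f} W")
# ~4.95 W -- nowhere near 25 W from the 3.3V rail alone; the bulk of a
# card's power budget has to come off the 12V pins instead.
```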
July 4, 2004 4:27:21 PM

daytripper <day_trippr@REMOVEyahoo.com> wrote:
<snip>

: There are a total of 3 pins for 3.3v, from the x1 connector through
: the x16 connector. That's a scant 3A at the full contact rating, but
: in our segment we normally derate similar connector contact ratings
: by 50% - which would make 25w per slot a technical fantasy...
:
: /daytripper (Other than that, not much to worry about ;-)

Thanks Daytrip (I think ;-).

J.