Next generation Opteron 1207 pins!

Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

http://www.theinquirer.net/?article=19729
  1. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:

    >http://www.theinquirer.net/?article=19729

    Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    these extra pins!
    Ed
  2. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    In article <8ranp09qmm5rfdkab6m3kfus05ck42urst@4ax.com>, not@here.com
    says...
    > On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >
    > >http://www.theinquirer.net/?article=19729
    >
    > Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    > these extra pins!

    Gotta hold the chips down somehow (lotsa grounds and Vdd). ;-)

    --
    Keith
  3. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Ed wrote:
    > On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >
    >
    >>http://www.theinquirer.net/?article=19729
    >
    >
    > Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    > these extra pins!

    12 HyperTransport channels!!!!

    Nah, probably not.
  4. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Bitstring <8ranp09qmm5rfdkab6m3kfus05ck42urst@4ax.com>, from the
    wonderful person Ed <not@here.com> said
    >On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >
    >>http://www.theinquirer.net/?article=19729
    >
    >Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    >these extra pins!

    Second core, perchance?? Well, you know, all the extra power, ground,
    and data, to keep the second core fed and happy. 8>.

    --
    GSV Three Minds in a Can
    Outgoing Msgs are Turing Tested, and indistinguishable from human typing.
  5. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Wed, 17 Nov 2004 23:45:41 +0000, GSV Three Minds in a Can wrote:

    > Bitstring <8ranp09qmm5rfdkab6m3kfus05ck42urst@4ax.com>, from the
    > wonderful person Ed <not@here.com> said
    >>On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >>
    >>>http://www.theinquirer.net/?article=19729
    >>
    >>Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    >>these extra pins!
    >
    > Second core, perchance?? Well, you know, all the extra power, ground,
    > and data, to keep the second core fed and happy. 8>.

    I don't see where the second core would cost any significant I/O.
    Power/ground, certainly.

    Actually, I'm still amazed at a 940pin package for $hundreds. Pins are
    expen$ive (30ish years ago we used 1800 pin modules, at six-figures apiece).

    --
    Keith
  6. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    GSV Three Minds in a Can <GSV@quik.clara.co.uk> wrote in message news:<uk6sbmGlK+mBFAYn@from.is.invalid>...
    > Second core, perchance?? Well, you know, all the extra power, ground,
    > and data, to keep the second core fed and happy. 8>.

    Well, they said they are going to be able to fit dual-cores within the
    existing S940. But with S940, you only have one memory controller
    feeding both cores. Maybe this next gen socket will allow for dual
    independent memory controllers too? Also likely by that time they'll
    be doing DDR2 too.

    Yousuf Khan
  7. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Ed <not@here.com> wrote :

    > what AMD is going to do with all
    > these extra pins!

    DDR2 ?


    Regards.
    --
    RusH //
    http://randki.o2.pl/profil.php?id_r=352019
    Like ninjas, true hackers are shrouded in secrecy and mystery.
    You may never know -- UNTIL IT'S TOO LATE.
  8. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Ed wrote:

    > Wow and I thought 940 was a lot, IOW what AMD is going to do with
    > all these extra pins!

    http://www.amdzone.net/modules.php?name=Sections&req=viewarticle&artid=56
    http://www.amdzone.net/pics/cpus/dualcore/1stdemo/diagram.jpg

    <quote>
    Each CPU has a path to the system request interface, and through the
    crossbar switch shares the same memory controller, and 3 HyperTransport
    links. AMD feels that the single memory controller is able to handle
    both CPU cores. They also feel that the HyperTransport bandwidth
    provided by three full speed links is more than adequate. AMD documents
    reveal that perhaps a 10% drop in performance due to the shared components.
    </quote>

    Could AMD want each core to have its own memory controller?

    --
    Regards, Grumble
  9. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    yjkhan@gmail.com (ykhan) wrote:

    >GSV Three Minds in a Can <GSV@quik.clara.co.uk> wrote in message news:<uk6sbmGlK+mBFAYn@from.is.invalid>...
    >> Second core, perchance?? Well, you know, all the extra power, ground,
    >> and data, to keep the second core fed and happy. 8>.
    >
    >Well, they said they are going to be able to fit dual-cores within the
    >existing S940. But with S940, you only have one memory controller
    >feeding both cores. Maybe this next gen socket will allow for dual
    >independent memory controllers too? Also likely by that time they'll
    >be doing DDR2 too.

    You mean a 256-bit-wide memory interface? Wow.
  10. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Bitstring <pan.2004.11.18.03.19.55.380930@att.bizzzz>, from the
    wonderful person keith <krw@att.bizzzz> said
    >On Wed, 17 Nov 2004 23:45:41 +0000, GSV Three Minds in a Can wrote:
    >
    >> Bitstring <8ranp09qmm5rfdkab6m3kfus05ck42urst@4ax.com>, from the
    >> wonderful person Ed <not@here.com> said
    >>>On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >>>
    >>>>http://www.theinquirer.net/?article=19729
    >>>
    >>>Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    >>>these extra pins!
    >>
    >> Second core, perchance?? Well, you know, all the extra power, ground,
    >> and data, to keep the second core fed and happy. 8>.
    >
    >I don't see where the second core would cost any significant I/O.
    >Power/ground, certainly.

    Depends whether you want each core to have its own memory controller &
    HT links I guess.

    >Actually, I'm still amazed at a 940pin package for $hundreds. Pins are
    >expen$ive (30ish years ago we used 1800 pin modules, at six-figures apiece).

    Economies of scale .. plus technical advances. Heck I remember when 64
    pin (DIL) packages for UARTs were expensive because they had to be
    ceramic, rather than plastic, and we were contemplating how to get past
    -that- barrier. 8>.

    --
    GSV Three Minds in a Can
    Outgoing Msgs are Turing Tested, and indistinguishable from human typing.
  11. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    chrisv <chrisv@nospam.invalid> wrote in message news:<gskpp0th0pqp1g00ofr66v31589h5eq5e1@4ax.com>...
    > >Well, they said they are going to be able to fit dual-cores within the
    > >existing S940. But with S940, you only have one memory controller
    > >feeding both cores. Maybe this next gen socket will allow for dual
    > >independent memory controllers too? Also likely by that time they'll
    > >be doing DDR2 too.
    >
    > You mean a 256-bit-wide memory interface? Wow.

    More like two independent 128-bit wide interfaces.

    Yousuf Khan
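    As a rough sketch of what "two independent 128-bit interfaces" would mean in bandwidth terms (Python; the DDR400/PC3200 speed grade is an assumption typical of the era, not anything AMD announced):

```python
# Peak-bandwidth arithmetic for one shared 128-bit DDR interface versus
# one independent 128-bit interface per core.  DDR400 is assumed.

def ddr_bandwidth_gb_s(bus_bits: int, transfers_per_s: float) -> float:
    """Peak bandwidth in GB/s for a DDR bus (GB = 1e9 bytes)."""
    return bus_bits / 8 * transfers_per_s / 1e9

shared   = ddr_bandwidth_gb_s(128, 400e6)   # today's S940: both cores share it
per_core = ddr_bandwidth_gb_s(128, 400e6)   # hypothetical: one controller each

print(f"shared 128-bit DDR400:  {shared:.1f} GB/s for both cores")
print(f"independent per core:   {per_core:.1f} GB/s each, "
      f"{2 * per_core:.1f} GB/s aggregate")
```

    Aggregate peak doubles, but so does the signal-pin bill, which is the sticking point raised elsewhere in the thread.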
  12. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Thu, 18 Nov 2004 14:25:31 +0000, GSV Three Minds in a Can wrote:

    > Bitstring <pan.2004.11.18.03.19.55.380930@att.bizzzz>, from the
    > wonderful person keith <krw@att.bizzzz> said
    >>On Wed, 17 Nov 2004 23:45:41 +0000, GSV Three Minds in a Can wrote:
    >>
    >>> Bitstring <8ranp09qmm5rfdkab6m3kfus05ck42urst@4ax.com>, from the
    >>> wonderful person Ed <not@here.com> said
    >>>>On 17 Nov 2004 09:51:04 -0800, yjkhan@gmail.com (ykhan) wrote:
    >>>>
    >>>>>http://www.theinquirer.net/?article=19729
    >>>>
    >>>>Wow and I thought 940 was a lot, IOW what AMD is going to do with all
    >>>>these extra pins!
    >>>
    >>> Second core, perchance?? Well, you know, all the extra power, ground,
    >>> and data, to keep the second core fed and happy. 8>.
    >>
    >>I don't see where the second core would cost any significant I/O.
    >>Power/ground, certainly.
    >
    > Depends whether you want each core to have its own memory controller &
    > HT links I guess.

    Other than that's not what AMD has been saying, another few hundred pins
    isn't enough to do what you propose. A single memory channel would take
    more than a hundred I/O.
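    The pin budget behind that objection can be sketched in a few lines of Python; the per-signal counts below are ballpark assumptions for a 128-bit DDR interface, not figures from any AMD datasheet:

```python
# Back-of-envelope check: is the jump from 940 to 1207 pins big enough
# to fund a second independent memory interface?  Signal counts are
# rough assumptions.

EXTRA_PINS = 1207 - 940          # what the new socket adds

ddr_interface = {
    "data": 128,                 # DQ lines, two 64-bit channels
    "strobe/mask": 32,           # DQS + DM, one pair per byte (assumed)
    "address": 16,               # multiplexed row/column (assumed)
    "bank/command/control": 20,  # RAS/CAS/WE/CS/CKE/ODT etc. (assumed)
    "clocks": 12,                # differential clock pairs (assumed)
}
signals_needed = sum(ddr_interface.values())

print(f"extra pins on the new socket: {EXTRA_PINS}")
print(f"signals for a second 128-bit DDR interface: ~{signals_needed}")
# Each new signal also wants its share of power/ground returns, often
# roughly doubling the count -- which overruns the 267 extra pins.
print(f"with returns roughly doubling that: ~{2 * signals_needed}")
```
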

    >>Actually, I'm still amazed at a 940pin package for $hundreds. Pins are
    >>expen$ive (30ish years ago we used 1800 pin modules, at six-figures
    >>apiece).
    >
    > Economies of scale .. plus technical advances. Heck I remember when 64
    > pin (DIL) packages for UARTs were expensive because they had to be
    > ceramic, rather than plastic, and we were contemplating how to get past
    > -that- barrier. 8>.

    Sure. Note that we're still in ceramic. ;-)

    --
    Keith
  13. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Thu, 18 Nov 2004 08:30:22 -0800, ykhan wrote:

    > GSV Three Minds in a Can <GSV@quik.clara.co.uk> wrote in message news:<uk6sbmGlK+mBFAYn@from.is.invalid>...
    >> Second core, perchance?? Well, you know, all the extra power, ground,
    >> and data, to keep the second core fed and happy. 8>.
    >
    > Well, they said they are going to be able to fit dual-cores within the
    > existing S940. But with S940, you only have one memory controller
    > feeding both cores. Maybe this next gen socket will allow for dual
    > independent memory controllers too?

    I don't buy it. Add up the pins. 1207 isn't enough. Also remember the
    articles about the "extra" port on the K8's memory switch, just sitting
    there ready for another core.

    > Also likely by that time they'll be doing DDR2 too.

    Ok, why more I/O?

    --
    Keith
  14. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Thu, 18 Nov 2004 14:25:31 +0000, GSV Three Minds in a Can
    <GSV@quik.clara.co.uk> wrote:
    >
    >Bitstring <pan.2004.11.18.03.19.55.380930@att.bizzzz>, from the
    >wonderful person keith <krw@att.bizzzz> said
    >>On Wed, 17 Nov 2004 23:45:41 +0000, GSV Three Minds in a Can wrote:
    >>> Second core, perchance?? Well, you know, all the extra power, ground,
    >>> and data, to keep the second core fed and happy. 8>.
    >>
    >>I don't see where the second core would cost any significant I/O.
    >>Power/ground, certainly.
    >
    >Depends whether you want each core to have its own memory controller &
    >HT links I guess.

    No particularly good reason for each core to have its own HT links;
    that just complicates things for basically no improvement in
    performance. Even dedicated memory controllers are unlikely to be
    worthwhile: Intel has demonstrated quite clearly with their Xeons that
    two CPUs can share a single memory controller with very little loss in
    performance vs. AMD's NUMA design.

    Now, if they try to slap *4* cores on a single die, that's another
    matter altogether.

    Still, my money is on those extra pins being almost entirely made up
    of power and grounding pins.
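    A minimal sketch of the current-delivery arithmetic behind that guess (Python; the wattage, core voltage, and per-pin current limit are ballpark assumptions, not AMD specs):

```python
import math

# Why a second core eats pins even with no new I/O: current delivery.
# All three input values below are assumed ballpark figures.

def supply_pins(watts: float, volts: float, amps_per_pin: float) -> int:
    """Vdd pins needed to carry the core current (grounds need roughly
    the same number again for the return path)."""
    return math.ceil(watts / volts / amps_per_pin)

vdd = supply_pins(watts=90, volts=1.4, amps_per_pin=1.0)   # one core
print(f"one ~90 W core at 1.4 V: ~{vdd} Vdd pins + ~{vdd} ground pins")
print(f"a second core adds roughly another {2 * vdd} power/ground pins,")
print(f"a fair slice of the {1207 - 940} extra pins on the new socket")
```
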

    -------------
    Tony Hill
    hilla <underscore> 20 <at> yahoo <dot> ca
  15. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    keith <krw@att.bizzzz> wrote:
    > Actually, I'm still amazed at a 940pin package for
    > $hundreds. Pins are expen$ive

    What was the old rule? A dime apiece? So?
    These things aren't wirewrapped & potted.
    AFAIK, current packaging is like ultra-high precision BGA

    > (30ish years ago we used 1800 pin modules, at six-figures apiece).

    Yeah, but you didn't make Millions of modules per year.
    Economies of scale!

    -- Robert
  16. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    In article <nOfnd.20718$fC4.4842@newssvr11.news.prodigy.com>,
    redelm@ev1.net.invalid says...
    > keith <krw@att.bizzzz> wrote:
    > > Actually, I'm still amazed at a 940pin package for
    > > $hundreds. Pins are expen$ive
    >
    > What was the old rule? A dime apiece? So?

    Purchased, socketed, manufactured and tested, closer to a dollar
    apiece.

    > These things aren't wirewrapped & potted.

    ;-)

    > AFAIK, current packaging is like ultra-high precision BGA

    X86 processors are still PGA. The marketing model still demands it.
    Certainly BGA would be cheaper.

    > > (30ish years ago we used 1800 pin modules, at six-figures apiece).
    >
    > Yeah, but you didn't make Millions of modules per year.
    > Economies of scale!

    No, not millions per year, but a hundred per machine. Of course they
    did get better at making them and the cost went down. What killed 'em
    was when it dropped to a half-dozen per machine. The economy of scale
    went all to hell. ;-)

    ~1K pins still amazes me. 1K balls, less so.

    --
    Keith
  17. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Fri, 19 Nov 2004 11:02:59 -0500 Keith R. Williams <krw@att.bizzzz>
    wrote in Message id: <MPG.1c07e340718a2b7998979a@news.individual.net>:

    >X86 processors are still PGA.

    Not entirely.
  18. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    In article <pr9sp05qnbj0u9mmrvkonb4cl624gtq4k3@4ax.com>, none@dev.nul
    says...
    > On Fri, 19 Nov 2004 11:02:59 -0500 Keith R. Williams <krw@att.bizzzz>
    > wrote in Message id: <MPG.1c07e340718a2b7998979a@news.individual.net>:
    >
    > >X86 processors are still PGA.
    >
    > Not entirely.

    Counter example? (though perhaps "socketed" is a better term)
  19. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    Keith R. Williams <krw@att.bizzzz> wrote:
    > Purchased, socketed, manufactured and tested, closer to a
    > dollar apiece.

    Maybe CPUs, but 74xx logic and DRAM was closer to the dime.


    >> AFAIK, current packaging is like ultra-high precision BGA
    > X86 processors are still PGA. The marketing model still
    > demands it. Certainly BGA would be cheaper.

    I was referring to how the dice are mounted on the PGA
    carrier. They're not laced in, but AFAIK more like BGA.

    > What killed 'em was when it dropped to a half-dozen per
    > machine. The economy of scale went all to hell. ;-)

    A victim of its own success :)

    > ~1K pins still amazes me. 1K balls, less so.

    Hey, lift the cover on a ZIF socket -- familiarity breeds
    contempt :) AFAIK, PGA packages are made exactly the same:
    pot the pins in PCB, print a few layers (SMT optional)
    and microBGA the die on.

    -- Robert
  20. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Fri, 19 Nov 2004 23:06:19 +0000, Robert Redelmeier wrote:

    > Keith R. Williams <krw@att.bizzzz> wrote:
    >> Purchased, socketed, manufactured and tested, closer to a
    >> dollar apiece.
    >
    > Maybe CPUs, but 74xx logic and DRAM was closer to the dime.
    >
    You remember differently than I (a dime wouldn't have made me look again,
    though TTL stuff was only a dime a package, or less). I was always amazed
    at the costs, but then there may have been a lot of overhead thrown in
    there too (like me ;).
    >
    >>> AFAIK, current packaging is like ultra-high precision BGA
    >> X86 processors are still PGA. The marketing model still demands it.
    >> Certainly BGA would be cheaper.
    >
    > I was referring to how the dice are mounted on the PGA carrier. They're
    > not laced in, but AFAIK more like BGA.

    Oh, that. Yes, IBM's C4 process (Controlled Collapse Chip Connection)
    from the '60s. ;-)
    http://www-306.ibm.com/chips/technology/makechip/interconnect/4.html
    It is rather like BGA, though "Chip-Scale" comes closer. ;-)

    >> What killed 'em was when it dropped to a half-dozen per machine. The
    >> economy of scale went all to hell. ;-)
    >
    > A victim of its own success :)

    Moore done 'em in. It's a conspiracy, I tell ya'.

    >> ~1K pins still amazes me. 1K balls, less so.
    >
    > Hey, lift the cover on a ZIF socket -- familiarity breeds contempt :)
    > AFAIK, PGA packages are made exactly the same: pot the pins in PCB,
    > print a few layers (SMT optional) and microBGA the die on.

    ZIF *sockets* are a bit different. They remind me more of "a thousand"
    tiny bobby pins.

    --
    Keith
  21. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    In article <MPG.1c080b72f33c96cc98979b@news.individual.net>, Keith R.
    Williams <krw@att.bizzzz> writes

    >Counter example? (though perhaps "socketed" is a better term)

    Intel's new LGA Prescotts? (the pins are on the "socket")

    --
    ..sigmonster on vacation
  22. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Sun, 21 Nov 2004 14:07:59 +0000, Mike Tomlinson wrote:

    > In article <MPG.1c080b72f33c96cc98979b@news.individual.net>, Keith R.
    > Williams <krw@att.bizzzz> writes
    >
    >>Counter example? (though perhaps "socketed" is a better term)
    >
    > Intel's new LGA Prescotts? (the pins are on the "socket")

    Still a *socket* (and a right scary one). Try again...

    --
    Keith
  23. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    keith <krw@att.bizzzz> wrote:
    > Ziff *sockets* are a bit different. They remind me more of
    > "a thousand" tiny bobby pins.

    They used to be that way. Now, ZIF sockets have pins
    with a half-moon "C" head to connect with the CPU pins.

    You can probably comment better on PCB routing issues.
    That's a _lot_ of traces even after half the pins are
    tied to only ground or Vcc. Still, better than the
    equivalent Northbridge 'cuz no ~100 from the CPU!

    -- Robert
  24. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Fri, 19 Nov 2004 13:54:27 -0500 Keith R. Williams <krw@att.bizzzz>
    wrote in Message id: <MPG.1c080b72f33c96cc98979b@news.individual.net>:

    >In article <pr9sp05qnbj0u9mmrvkonb4cl624gtq4k3@4ax.com>, none@dev.nul
    >says...
    >> On Fri, 19 Nov 2004 11:02:59 -0500 Keith R. Williams <krw@att.bizzzz>
    >> wrote in Message id: <MPG.1c07e340718a2b7998979a@news.individual.net>:
    >>
    >> >X86 processors are still PGA.
    >>
    >> Not entirely.
    >
    >Counter example? (though perhaps "socketed" is a better term)

    Off the top of my head, Geode, AMD SC400, and Via's C3 come in BGA
    packages.
  25. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Mon, 22 Nov 2004 07:20:42 -0500, JW wrote:

    > On Fri, 19 Nov 2004 13:54:27 -0500 Keith R. Williams <krw@att.bizzzz>
    > wrote in Message id: <MPG.1c080b72f33c96cc98979b@news.individual.net>:
    >
    >>In article <pr9sp05qnbj0u9mmrvkonb4cl624gtq4k3@4ax.com>, none@dev.nul
    >>says...
    >>> On Fri, 19 Nov 2004 11:02:59 -0500 Keith R. Williams <krw@att.bizzzz>
    >>> wrote in Message id: <MPG.1c07e340718a2b7998979a@news.individual.net>:
    >>>
    >>> >X86 processors are still PGA.
    >>>
    >>> Not entirely.
    >>
    >>Counter example? (though perhaps "socketed" is a better term)
    >
    > Off the top of my head, Geode, AMD SC400, and Via's C3 come in BGA
    > packages.

    AIUI, Geode and SC400 are embedded processors, so BGA makes sense. C3 is
    an interesting case, but I suspect it's there for the embedded market as
    well.

    --
    Keith
  26. Archived from groups: comp.sys.ibm.pc.hardware.chips (More info?)

    On Mon, 22 Nov 2004 06:21:23 +0000, Robert Redelmeier wrote:

    > keith <krw@att.bizzzz> wrote:
    >> ZIF *sockets* are a bit different. They remind me more of
    >> "a thousand" tiny bobby pins.
    >
    > They used to be that way. Now, ZIF sockets have pins
    > with a half-moon "C" head to connect with the CPU pins.

    That's what I mean by a "bobby pin", though perhaps with less of a tail.
    The CPU pin goes into the center and is then cammed over into place with
    spring tension of the 'C' tails making contact.

    > You can probably comment better on PCB routing issues. That's a _lot_ of
    > traces even after half the pins are tied to only ground or Vcc. Still,
    > better than the equivalent Northbridge 'cuz no ~100 from the CPU!

    I haven't looked at the pinout of one of these monsters, but I suspect
    they've thought of that. The densest chip I've routed personally (I don't
    do that stuff too often any more) was a Xilinx FG680. The pinout was
    designed to make routing rather simple. The inner balls were dedicated to
    power/ground/references with the I/O mostly around the edge in the last
    three/four rows so it could be routed out in two layers with two traces
    per channel. I had five signal layers (though only three 50 ohm layers),
    so routing wasn't too much of a problem. I suspect the Taiwanese board
    designers are more clever. ;-)
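    The escape-routing arithmetic behind that kind of pinout can be sketched as follows (Python; the 26x26 ball array is illustrative, not the FG680's actual grid):

```python
# If the signal balls sit only in the outer rings of the grid, two traces
# per routing channel let each signal layer pull out about two rings'
# worth, so four rings of I/O escape on two layers -- the scheme
# described above.

def outer_ring_balls(grid: int, rings: int) -> int:
    """Number of balls in the outermost `rings` rows of a grid x grid BGA."""
    return sum(4 * (grid - 1 - 2 * r) for r in range(rings))

grid = 26                        # assumed ball array, e.g. 26 x 26
signal_balls = outer_ring_balls(grid, 4)
layers_needed = 4 // 2           # 2 traces/channel -> ~2 rings per layer

print(f"{signal_balls} signal balls in the outer 4 rings")
print(f"escapable on about {layers_needed} signal layers")
```
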

    --
    Keith