
Nvidia GT218 Card & Specs Possibly Surfacing

Source: Tom's Hardware US | 18 comments

Technical drawings and specifications for the upcoming Nvidia GT218 cards are slowly beginning to surface, with more information possibly to follow at CeBIT this coming March and a public launch expected in April.

The GT218 will be the first 40nm GPU to roll out from Nvidia, and word has it there are at least four SKUs on different PCB designs (something that has yet to be fully confirmed). The following image was scooped up by outside sources and gives us a glimpse of what the hardware might actually look like. It is based on the P692 PCB design, codenamed D10M1-30. The core clock rate is 550MHz and the shader clock comes in at 1375MHz, while memory is expected to be 512MB of DDR3 at 800MHz on a 64-bit interface. The exact number of shader processors has not been revealed as of yet; however, the card is expected to have the typical outputs: dual-link DVI, DisplayPort, VGA and the other common connections.
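For a rough sense of what that 64-bit interface means, here is a minimal bandwidth sketch using only the figures quoted above; whether the 800MHz figure is the base memory clock or the already-doubled effective data rate is an assumption on our part.

```python
# Back-of-the-envelope memory bandwidth for the rumored GT218 specs.
# Assumption: 800MHz is the base memory clock and DDR3 transfers twice
# per clock; if 800MHz is already the effective data rate, halve the result.

bus_width_bits = 64        # rumored memory interface width
memory_clock_hz = 800e6    # quoted DDR3 clock
transfers_per_clock = 2    # double data rate

bandwidth_gbs = memory_clock_hz * transfers_per_clock * bus_width_bits / 8 / 1e9
print(f"Theoretical peak bandwidth: {bandwidth_gbs:.1f} GB/s")  # ~12.8 GB/s
```

Even on the generous reading, that is half the bandwidth of a 128-bit card at the same memory speed, which is why several commenters below question the 64-bit choice.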

As we can see from the image above, this PCB design is low-profile, and from what we gather from the technical drawing, a low-profile bracket will be included, or at least be available, so that the card can be installed in a low-profile Home Theater PC (HTPC) or other small-form-factor chassis, including 2U-height boxes with vertically mounted cards. We can also see that connector “J2” on the backplane, above the DVI connector, could be an S-Video output, which commonly appears on low-profile cards.

This is the only technical drawing we were able to find, and it covers just one PCB design; there are apparently more. Hopefully they will surface before CeBIT in March so we can give you more insight.

  • 0 Hide
    timaahhh , February 6, 2009 1:19 AM
    512 MB at 64bits? Is there a point to that? Why would you not buy a lower end card from a previous generation?
  • 0 Hide
    Mr_Man , February 6, 2009 1:24 AM
    I thought the GTX 2xx naming scheme was supposed to make things less confusing... where do they get the idea for GT218?
  • 3 Hide
    alvine , February 6, 2009 1:55 AM
    nvidia naming fail
  • 2 Hide
    Tindytim , February 6, 2009 2:39 AM
    Quote (Mr_Man):
    I thought the GTX 2xx naming scheme was supposed to make things less confusing... where do they get the idea for GT218?


    The GTX 280, 260, 285, and 295 are all based on the GT200 core. This is a new core, the GT218.
  • 0 Hide
    megamanx00 , February 6, 2009 3:18 AM
    Guess this will have lower memory bandwidth than the 4550 so I suppose it may be a 4350 competitor. Either way it won't exactly be a gaming card ^_^.
  • 0 Hide
    eklipz330 , February 6, 2009 3:28 AM
    this just in:

    nVidia's NOT renaming this time...

    with all due respect, i understand taking the older generation, renaming it, and possibly lowering its hierarchy and price to fit the naming scheme to make room for newer cards... but renaming and increasing the price just isn't nice
  • 0 Hide
    ubergeetar , February 6, 2009 4:29 AM
    I don't get it... It's a new core, with ZERO improvements on memory, core speed, etc. Even if it has more processors, wouldn't the 64-bit interface really slow things down? Why take such a huge step back from a 512-bit DDR3 interface?
  • 0 Hide
    stridervm , February 6, 2009 5:02 AM
    I think the idea is for nVidia to take careful baby steps regarding its new video card... process.

    If you remember, they only just recently released their 55nm video cards, and jumping into another, smaller process so soon takes guts, as it's a real gamble. I think they're just playing it safe.
  • 3 Hide
    nottheking , February 6, 2009 5:36 AM
    Uber, this is a lower-end card. While very wide memory interfaces are nice, there are some extra costs they bring that cannot simply be taken away through revisions to smaller fabrication processes.

    Basically, the wider the memory interface, the more pins your package needs and the more interconnects the GPU package has to have, all of which require that much more edge space. Looking at the GPUs I've been able to gather data on, the lower end of the die size necessary for a given memory interface width is in the neighborhood of the following:
  • 128-bit: 100 mm²
  • 256-bit: 196 mm²
  • 512-bit: 420 mm²
    Bigger die sizes mean a greater likelihood of a chip being bad, and fewer chips cut from a wafer to begin with, resulting in greatly increased prices. This is why, in spite of the performance advantages it'd bring, no one moves their entire lineup to wider memory interfaces: it'd require bigger chips.

    Furthermore, the wider the interface, the more RAM chips you need to actually use it. I believe a minimum of 1 DRAM chip per 32 bits of interface width is standard for video cards; hence, a 512-bit interface requires a whopping 16 DRAM chips; not good for prices.

    Basically, I'm guessing this will probably pack around 8 ROPs, 16 or 32 TMUs, and 32 or 64 stream processors; it will be a low-end part, probably designed, yes, to compete with the Radeon 4550. Looking at it, I'd say the reason behind these decisions is that their recent beatings have forced nVidia onto more conservative ground, where they're making their first test with a product that will cost them very, very little to make and will have a volume market that their traditional flagships do not, and could hopefully restore them to profitability.
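To put rough numbers on the chip-count and die-size argument in the comment above, here is a minimal sketch. The one-DRAM-chip-per-32-bits rule of thumb and the die-size floors come straight from that comment; they are ballpark figures, not exact data.

```python
# Rough illustration: how memory interface width drives the minimum number
# of DRAM chips on the board and (per the figures quoted above) the rough
# die-area floor of the GPU itself.

BITS_PER_DRAM_CHIP = 32  # typical width of one DDR/GDDR device on a video card

# Approximate die-area floors quoted in the comment above (mm^2).
die_area_floor_mm2 = {128: 100, 256: 196, 512: 420}

for bus_width in (64, 128, 256, 512):
    chips = bus_width // BITS_PER_DRAM_CHIP
    area = die_area_floor_mm2.get(bus_width, "n/a")
    print(f"{bus_width:3d}-bit bus: at least {chips:2d} DRAM chips, ~{area} mm^2 die")
```

On that reasoning, the rumored 64-bit GT218 gets away with just two DRAM chips, which fits the cheap, low-profile positioning described in the article.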
  • 0 Hide
    nottheking , February 6, 2009 5:38 AM
    Bleh, I failed up there, assuming these comments followed the same code as the forums... And no "preview comment" option. Just ignore the ugly tags; it was supposed to be bullet-points, obviously.
  • 0 Hide
    hannibal , February 6, 2009 8:47 AM
    Actually, it is really time for Nvidia to make some new low-end cards as well! Their old very-low-end cards are based on a really old design. Nice to see some new competition in the low end too!
    Just hope that this will be at least as good an HTPC card as ATI has! Nvidia's new 285 has proven to be reasonably energy efficient in the high-end sector. Maybe this time in the low end as well.
  • 0 Hide
    liemfukliang , February 6, 2009 11:15 AM
    I hope the onboard version of this VGA is coming soon :D 
  • 2 Hide
    gwolfman , February 6, 2009 1:52 PM
    Quote:
    ...40nm based GPU rolling out from Nvidia...
    Yea! The smaller the better! Though she might not agree.......... ;) 
  • 0 Hide
    goonting , February 6, 2009 3:31 PM
    hmmm more powerful than onboard gpu?
  • 0 Hide
    ubergeetar , February 6, 2009 4:58 PM
    Okay, well if it's a lower-end card, then more power to NVidia for taking slow steps. This is probably a better approach, instead of just slamming out a super-powered card at 40nm where they could mess up.
  • 0 Hide
    Anonymous , February 6, 2009 5:48 PM
    Quote (megamanx00):
    Guess this will have lower memory bandwidth than the 4550 so I suppose it may be a 4350 competitor. Either way it won't exactly be a gaming card ^_^.

    I'd have to say that a Radeon 4830 seems like a pretty good gaming card to me!
    It can basically play every game out there except the latest, most advanced titles.
    I remember when my Riva TNT2 card was due for replacement, how different a Radeon 9500 was back then!
    I can only imagine the 4300 being more powerful, and perfectly suited for most games!
    This newer Nvidia card with its 40nm design might be a good initiative to have a low-power-consumption card available in the Nvidia lineup.
    Though for now I'll stick with 55nm ATI cards!
    For me, the casual gamer, a 4830 or 4670 seems like a good enough, cost-effective and power-saving card!
  • 0 Hide
    SneakySnake , February 6, 2009 9:03 PM
    Ya, the 4830 is a great card, equal to or better than the 8800/9800
  • 0 Hide
    goonting , February 7, 2009 5:52 PM
    The HD 4670 is a good buy for those of us running 1680x1050 resolutions