Uber, this is a lower-end card. While very wide memory interfaces are nice, they bring extra costs that can't simply be engineered away by moving to a smaller fabrication process.
Basically, the wider the memory interface, the more pins the package needs and the more interconnects the GPU package has to carry, all of which take up edge space on the die. Looking at the GPUs I've been able to gather data on, the lower end of die size necessary for a given memory interface width is in the neighborhood of the following (see the quick sketch after the list):
■ 128-bit: 100 mm²
■ 256-bit: 196 mm²
■ 512-bit: 420 mm²
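To make the "edge space" point concrete, here's a quick back-of-the-envelope using those rough floors. Assuming a square die (my assumption, purely for illustration), the implied edge length and perimeter grow quickly with bus width:

```python
# Implied edge length for each rough die-size floor listed above,
# assuming a square die -- just to illustrate the "edge space" point.
import math

floors_mm2 = {128: 100, 256: 196, 512: 420}   # bus width (bits) -> rough minimum die area

for bits, area in floors_mm2.items():
    edge = math.sqrt(area)                     # side of a square die, in mm
    print(f"{bits}-bit: ~{area} mm^2 -> ~{edge:.1f} mm per side, ~{4 * edge:.0f} mm of perimeter")
```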
A bigger die means a greater likelihood of any given chip being defective, and fewer chips cut from a wafer to begin with, which greatly increases cost. This is why, in spite of the performance advantages it would bring, nobody moves their entire lineup to wider memory interfaces: it would require bigger chips.
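For a rough sense of how that plays out, here's a crude sketch using the common dies-per-wafer approximation and a simple Poisson yield model on a 300 mm wafer. The defect density is a made-up illustrative figure, not a real foundry number:

```python
# Crude sketch: bigger dies mean fewer candidates per wafer AND lower yield.
# Uses the standard dies-per-wafer approximation and a Poisson defect model.
import math

WAFER_DIAMETER_MM = 300.0
DEFECTS_PER_CM2 = 0.3      # assumed defect density, purely for illustration

def dies_per_wafer(die_area_mm2: float) -> int:
    r = WAFER_DIAMETER_MM / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * WAFER_DIAMETER_MM / math.sqrt(2 * die_area_mm2))

def poisson_yield(die_area_mm2: float) -> float:
    return math.exp(-(die_area_mm2 / 100.0) * DEFECTS_PER_CM2)   # mm^2 -> cm^2

for bits, area in [(128, 100), (256, 196), (512, 420)]:
    candidates = dies_per_wafer(area)
    y = poisson_yield(area)
    print(f"{bits}-bit-class die ({area} mm^2): ~{candidates} candidates, "
          f"~{candidates * y:.0f} good dies at ~{y:.0%} yield")
```

With those (made-up) numbers, the 512-bit-class die gives you only a small fraction of the good chips per wafer that the 128-bit-class die does, which is the cost problem in a nutshell.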
Furthermore, the wider the interface, the more RAM chips you need to actually use it. I believe a minimum of 1 DRAM chip per 32 bits of interface width is standard for video cards; hence, a 512-bit interface requires a whopping 16 DRAM chips, which is not good for board costs.
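Put another way (assuming that 1-chip-per-32-bits figure holds):

```python
# Minimum DRAM chip count per board at 1 chip per 32 bits of bus width.
for bus_bits in (128, 256, 512):
    print(f"{bus_bits}-bit bus: at least {bus_bits // 32} DRAM chips")
```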
I'm guessing this will pack around 8 ROPs, 16 or 32 TMUs, and 32 or 64 stream processors; it will be a low-end part, probably designed, yes, to compete with the Radeon 4550. My read is that their recent beatings have pushed nVidia onto more conservative ground: they're leading with a product that will cost them very, very little to make, that addresses a volume market their traditional flagships don't, and that could hopefully help restore them to profitability.