Nvidia use GDDR3 because they don't need anything different... yet. When they built the original 8800 series they made a very good architecture that hasn't been challenged hard enough by the competition for Nvidia to feel they need to change. Change costs money, so as long as they can keep everything the same and still compete, why spend out on reworking the chip?
Your second point is kind of back to front. Nvidia have been doing it this way for quite a while now; it's ATI who are trying to find a way of bettering Nvidia (it seems to be working).
ATI tried a bigger bus on the 2900 series and it proved both inefficient and costly, so on both counts they decided to concentrate on improving the throughput of the actual chip and made the bus smaller, as the larger bus just wasn't giving a boost worth the expense of the extra silicon.
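To put some rough numbers on that tradeoff: peak memory bandwidth is just bus width times transfer rate, so a narrower bus with faster memory can get close to a wide bus for far less silicon. This is a back-of-envelope sketch; the function is mine and the clock figures are approximate, illustrative values (roughly the 512-bit HD 2900 XT vs the 256-bit HD 3870 configurations), not exact product specs.

```python
def bandwidth_gb_s(bus_width_bits, transfer_rate_mt_s):
    """Peak memory bandwidth: bus width in bytes * transfers per second.

    transfer_rate_mt_s is the effective (DDR) rate in megatransfers/s.
    """
    return bus_width_bits / 8 * transfer_rate_mt_s / 1000

# Wide 512-bit bus with slower GDDR3 (approx. HD 2900 XT numbers):
wide = bandwidth_gb_s(512, 1656)    # ~106 GB/s

# Narrower 256-bit bus with faster memory (approx. HD 3870 numbers):
narrow = bandwidth_gb_s(256, 2250)  # ~72 GB/s

print(f"512-bit: {wide:.0f} GB/s, 256-bit: {narrow:.0f} GB/s")
```

The wide bus still wins on raw bandwidth, but halving the bus halves the memory controller and pad area on the die, and if the GPU core can't use the extra bandwidth anyway (which was ATI's conclusion), the narrower bus is the cheaper, better-balanced design.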
Another possible reason for the differences could be down to the whole DX10 debacle, when MS moved the goalposts of what was required within DX10 to suit Nvidia while ATI were building a card to cater for the full spec, but that's another whole thread that's been done before.
Bigger memory buses won't change much, as the throughput is already good enough. The main difference is in the number of transistors: Nvidia's top-end chips generally have about 50% more than ATI's. The ATI R700 has ~950 million while the Nvidia G200 has 1.4 billion. ATI's are the better chips, but Nvidia generally wins through sheer brute force. The drawback is that Nvidia are leaking cash with this approach, while ATI would be making cash if AMD's debt weren't dragging everything down.
If ATI increase the transistor count with the R800 then Nvidia could be in real trouble by the end of this year. I think they probably will, but who knows.