Why only over 1920x1200? I heard the basic difference between the two slot types (PCIe 1.0 x16 and PCIe 2.0 x16) is an increase in voltage from 1.0 to 2.0, so shouldn't that uniformly decrease the performance of a PCIe 2.0 card, making it equal to that of a similar PCIe 1.0 x16 card?
There's no change in voltage. 2.0 has double the data rate of 1.0, so an x8 2.0 slot is equivalent to an x16 1.0 slot. An x16 1.0 link has a data rate of 4 GB/s, which is more than enough for any game running at resolutions below 1920x1200. At higher resolutions the system has to feed more texture data to the card more often, and that's where the extra bandwidth starts to make a difference.
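The arithmetic behind those numbers is simple enough to sketch. This is just an illustration (the helper function name is my own): PCIe 1.0 delivers roughly 250 MB/s per lane after encoding overhead, and 2.0 doubles that to roughly 500 MB/s per lane, so bandwidth scales as rate times lane count.

```python
# Rough effective per-lane bandwidth in MB/s, after 8b/10b encoding overhead:
# PCIe 1.0 runs at 2.5 GT/s per lane -> ~250 MB/s; 2.0 doubles it to 5 GT/s -> ~500 MB/s.
PER_LANE_MB_S = {"1.0": 250, "2.0": 500}

def link_bandwidth(gen: str, lanes: int) -> int:
    """Approximate one-direction bandwidth in MB/s for a PCIe link."""
    return PER_LANE_MB_S[gen] * lanes

# An x16 1.0 link and an x8 2.0 link both come out to 4 GB/s:
print(link_bandwidth("1.0", 16))  # 4000
print(link_bandwidth("2.0", 8))   # 4000

# And an x4 2.0 link matches an x8 1.0 link:
print(link_bandwidth("2.0", 4) == link_bandwidth("1.0", 8))  # True
```

Since halving the link width exactly cancels doubling the per-lane rate, any generation-vs-width comparison in this thread reduces to multiplying these two numbers.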
But that only matters if the card can actually consume the extra texture data fast enough in the first place, and I don't think the 8800GT can. The difference between 1.0 and 2.0 will get more noticeable in the future, when cards can burn through texture data at a faster pace.
The 9800 GX2 doesn't start losing performance until you reduce the PCI-E 2.0 link to x4, which has the same bandwidth as an x8 1.0 link. An x16 1.0 link matches an x8 2.0 link, well above that threshold, so it won't deliver any less performance than an x16 2.0 link.