Before I get to my conclusion, I want to state that over-clocking is not for everyone. You take some serious risks by over-clocking core settings and may damage your hardware. On top of this risk, it's very possible that you won't see a problem until you're in the middle of that important spreadsheet or Word document and your machine freezes before you have a chance to save. Make sure you're not going to be using your system for any critical tasks while over-clocking the video solution. We also recommend you take careful measures to properly cool your hardware. Please note that not every video board has the same margins in its components for running out of specification. A great example of this is the difference in over-clockability between the SDR and DDR boards we used for testing: the SDR board allowed a higher core speed (+10 MHz) than the DDR board.
The findings in this overclocking article have more of a scientific than a practical value to most of us. The gains achieved by overclocking the GeForce256 are rather small, mainly because the possible increase in clock rates is rather small as well. It doesn't come as a surprise that overclocking the memory of an SDR-GeForce helps in many cases, especially at high resolutions. Cranking up the core clock makes the most sense on DDR boards.
What made this article very interesting to me was the impact of memory bandwidth alterations on the DDR-GeForce. First of all, I was surprised to find out that the DDR board actually runs a lower memory clock than the SDR board. Doubling the 150 MHz gets you to an effective '300 MHz', which is still way beyond 166 MHz though. However, you may remember that I criticized GeForce's memory interface in my first article, and the results we saw after overclocking the memory of the DDR-GeForce seem to prove my point. Especially in Quake 3 at the high quality setting, the increase in memory clock translated into a frame-rate increase of the same percentage. This shows that even DDR memory cannot quite compensate for the shortcomings of GeForce's memory interface. I was told by NVIDIA that a 256-bit wide memory interface was close to unrealizable, but I am sure that this is what it will take for future 3D chips. The only alternative could be the much-beloved RDRAM.
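The bandwidth comparison above is easy to verify with simple arithmetic. The sketch below assumes the GeForce256's 128-bit memory interface and computes theoretical peak bandwidth as bus width times clock times transfers per clock, showing why the 150 MHz DDR board still comfortably outruns the 166 MHz SDR board; the function name and structure are my own illustration, not anything from a real tool.

```python
# Back-of-the-envelope peak memory bandwidth for the two GeForce256 boards.
# Assumption: 128-bit memory interface on both boards (GeForce256 spec).
BUS_WIDTH_BITS = 128

def peak_bandwidth_gbs(clock_mhz, transfers_per_clock):
    """Theoretical peak bandwidth in GB/s (1 GB = 10^9 bytes)."""
    bytes_per_transfer = BUS_WIDTH_BITS // 8   # 16 bytes per transfer
    return clock_mhz * 1e6 * bytes_per_transfer * transfers_per_clock / 1e9

sdr = peak_bandwidth_gbs(166, 1)  # SDR: one transfer per clock edge
ddr = peak_bandwidth_gbs(150, 2)  # DDR: two transfers per clock ("300 MHz")

print(f"SDR @ 166 MHz: {sdr:.2f} GB/s")  # ~2.66 GB/s
print(f"DDR @ 150 MHz: {ddr:.2f} GB/s")  # 4.80 GB/s
```

Even this idealized figure ignores refresh cycles and access granularity, which is exactly where the criticized memory interface loses efficiency in practice.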