On my other forum I was finding out which video card would be the most bang for the buck for my computer. With the help of some very nice and knowledgeable people I narrowed down the selections to 3 choices...
MSI Radeon 4870 1 GB with GDDR5
ATi Radeon 4770 with GDDR5
MSI nVidia GTX 260 Core 216 with GDDR3
Is there a big difference between the GDDR3 and the GDDR5?
Would the GDDR5 really make enough of a difference for the 4870 to outperform the GTX 260 with only GDDR3?
The main difference is bandwidth and speed. The ATI cards use faster memory, but NVIDIA uses a larger data bus (which can carry more data at one time).
That being said, in a game that uses a lot of GPU RAM, the ATI cards, with their faster RAM speed, will usually gain an edge, as they will need to access that RAM more often. For other games that don't store as much in RAM, NVIDIA would gain the edge, due to being able to load and execute more data without having to store it into its (comparatively) slower RAM.
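To put some rough numbers on the bandwidth-vs-bus-width tradeoff above: peak memory bandwidth is just (bus width in bytes) × (effective memory transfer rate). Here's a quick sketch using commonly cited reference specs for these three cards (actual boards vary, so treat the clocks as approximate):

```python
def bandwidth_gb_s(bus_width_bits, effective_clock_mhz):
    """Peak memory bandwidth = bus width (bytes) x effective transfer rate."""
    return bus_width_bits / 8 * effective_clock_mhz * 1e6 / 1e9

# Approximate reference specs (effective rate: GDDR5 moves 4 bits per pin
# per clock, GDDR3 moves 2, which is why GDDR5 "clocks" look so high).
cards = {
    "Radeon HD 4870 (GDDR5, 256-bit)":   (256, 3600),  # ~900 MHz GDDR5
    "Radeon HD 4770 (GDDR5, 128-bit)":   (128, 3200),  # ~800 MHz GDDR5
    "GTX 260 Core 216 (GDDR3, 448-bit)": (448, 1998),  # ~999 MHz GDDR3
}

for name, (bus, clock) in cards.items():
    print(f"{name}: {bandwidth_gb_s(bus, clock):.1f} GB/s")
```

So the 4870's faster GDDR5 on a 256-bit bus and the GTX 260's slower GDDR3 on a wider 448-bit bus end up within a few GB/s of each other (~115 vs ~112 GB/s), which is why the two trade blows in practice. The 4770's 128-bit bus is the real limiter there, GDDR5 or not.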
Ok, I understand now. The GDDR3 is a little bit slower, but the wider bus allows it to load large amounts of data a little quicker, while the GDDR5 loads smaller amounts even faster. I'm going to guess that for my kind of gaming (MMOs [WoW, Guild Wars, EQII]) I would need a larger amount of memory to load whole areas of the map/zones. But if I was planning on running something like COD4 or a shooter that only loads 1 level at a time, I would be better off running the GDDR5? Is this correct?
It really isn't "extra" memory; it's really an issue of the data bus (how long a string of 0s/1s you can read in a single LOAD operation).
ATI's idea is to use a smaller data bus, so less data is loaded per LOAD operation. Any data that is not executed immediately is stored in its faster RAM, which can be accessed quicker than its NVIDIA counterpart. The downside is that, since the GPU can physically load less data per cycle, there is a chance of the GPU executing the data far faster than it can receive it, leading to a situation where most of its power goes to waste.
NVIDIA's idea is to use a larger data bus. Even though this results in more data having to be stored in RAM (due to more data being executed per GPU cycle), you save most of that overhead back thanks to not having to access the rest of the system for data as often. The downside with this method is that if you are loading more data than the GPU can handle, all the unexecuted data will get stuffed into the "slower" GPU RAM until the GPU gets a chance to get around to it.
When doing something that involves accessing a lot of data from the system, NVIDIA comes out on top, thanks to being able to load more data per LOAD operation. When accessing data from the GPU RAM, ATI gains the edge due to faster RAM speeds. In real-world usage, it evens out, although I suspect NVIDIA would be far better suited for mathematical purposes than ATI, and I would REALLY be interested to see how much better NVIDIA does with something like PhysX compared to ATI (maybe that's the reason they refuse to port it?)