What's the Difference Between GDDR3 and GDDR5?

On my other forum I was trying to find out which video card would be the most bang for the buck for my computer. With the help of some very nice and knowledgeable people I narrowed the selection down to 3 choices...

MSI Radeon 4870 1 GB with GDDR5

ATi Radeon 4770 with GDDR5

MSI nVidia GTX 260 Core 216 with GDDR3

Is there a big difference between GDDR3 and GDDR5?

Would the GDDR5 really make enough of a difference for the 4870 to outperform the GTX 260, which only has GDDR3? :heink:
  1. Well, GDDR5 has much higher data bandwidth than GDDR3...
    But there is a catch with those graphics cards...

    The 4870 has a 256-bit memory interface and GDDR5,
    the 4770 has a 128-bit memory interface and GDDR5,
    and the GTX 260 has a 448-bit memory interface and GDDR3.

    So in the end the 4870 and the GTX 260 come out with nearly the same total memory bandwidth, because the GTX 260's wider memory interface makes up for its slower GDDR3 (rough numbers are worked out after the thread)...

    For more info check this...
    http://www.anandtech.com/video/showdoc.aspx?i=3341
  2. ^+1 In simple words :P
  3. It seems to me that what matters most is the OP's monitor resolution and his budget for a gfx card.

    Technical specs have very little meaning on their own, but real-world performance does. What's important is frame rates, at the quality you want, for the games you want to play.


    Of the OP's three listed cards, the 4870 and GTX 260 have very similar performance, despite the different RAM, and the 4770 is a lower-performing but still pretty good card.
  4. The main difference is bandwidth and speed. The ATI cards use faster memory, but NVIDIA uses a larger data bus (which can carry more data at one time).

    That being said, in a game that uses a lot of GPU RAM, the ATI cards, with their faster RAM speed, will usually gain an edge, as they will need to access that RAM more often. For other games that don't store as much in RAM, NVIDIA would gain the edge, due to being able to load and execute more data without having to store it in its (comparatively) slower RAM.
  5. Ok, I understand now. The GDDR3 is a little bit slower, but the extra memory allows it to load large amounts of data a little quicker, while the GDDR5 loads smaller amounts even faster. I'm going to guess that for my kind of gaming (MMOs [WoW, Guild Wars, EQII]) I would need a larger amount of memory to load whole areas of the map/zones. But if I was planning on running something like COD4 or a shooter that only loads one level at a time, I would be better off running the GDDR5? Is this correct?
  6. It really isn't "extra" memory; it's really an issue of the data bus (how long a string of 0s and 1s you can read in a single LOAD operation).

    ATI's idea is to use a smaller data bus, so less data is loaded per LOAD operation. Any data that is not executed immediately is stored in its faster RAM, which can be accessed quicker than its NVIDIA counterpart. The downside is that, since the GPU can physically load less data per cycle, there is a chance of the GPU executing data far faster than it can receive it, leading to a situation where most of its power goes to waste.

    NVIDIA's idea is to use a larger data bus. Even though this results in more data having to be stored in RAM (due to more data being fetched per GPU cycle), you save most of that overhead back by not having to access the rest of the system for data as often. The downside with this method is that if you are loading more data than the GPU can handle, all the unexecuted data will get stuffed into the "slower" GPU RAM until the GPU gets a chance to get around to it.

    When doing something that involves accessing a lot of data from the system, NVIDIA comes out on top, thanks to being able to load more data per LOAD operation. When accessing data from the GPU RAM, ATI gains the edge due to faster RAM speeds. In real-world usage, it evens out (there's a small numeric sketch of this trade-off after the thread), although I suspect NVIDIA would be far better suited for mathematical purposes than ATI, and I would REALLY be interested to see how much better NVIDIA does with something like PhysX compared to ATI (maybe that's the reason they refuse to port it?)
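
For anyone curious, here's a rough back-of-the-envelope version of the bandwidth math from answer 1, written as a small Python sketch. The memory clocks and transfer rates are assumed reference specs (actual boards vary), so treat the results as approximations rather than exact figures.

```python
# Rough peak-bandwidth comparison for the three cards in answer 1.
# Clocks/rates below are assumed reference specs, not measured values.

def bandwidth_gbs(transfer_rate_gtps, bus_width_bits):
    # Peak bandwidth (GB/s) = transfers per second * bytes moved per transfer.
    return transfer_rate_gtps * (bus_width_bits / 8)

cards = [
    # (name, effective transfer rate in GT/s, bus width in bits)
    ("HD 4870, GDDR5",          3.6, 256),  # ~900 MHz GDDR5, 4 data transfers per clock
    ("HD 4770, GDDR5",          3.2, 128),  # ~800 MHz GDDR5
    ("GTX 260 Core 216, GDDR3", 2.0, 448),  # ~1000 MHz GDDR3, 2 data transfers per clock
]

for name, rate, width in cards:
    print(f"{name}: ~{bandwidth_gbs(rate, width):.0f} GB/s")
```

This lands the 4870 and the GTX 260 within a few GB/s of each other (roughly 115 vs. 112 GB/s), with the 4770 at about half that, which is why the different memory types alone don't decide the matchup.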
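And a companion sketch of the trade-off answer 6 describes: the GTX 260's wide GDDR3 bus moves more bytes per transfer at a lower rate, while the 4870's narrower GDDR5 bus moves fewer bytes per transfer at a higher rate, so the time to move the same chunk of data comes out roughly even. Again, the clock and width figures are assumed reference specs, and the 8 MB chunk size is purely illustrative.

```python
# Wide-but-slow vs. narrow-but-fast: time to move one chunk of data.
# All figures are illustrative assumptions, not measurements.

def time_to_move_us(data_bytes, bus_width_bits, transfer_rate_gtps):
    bytes_per_transfer = bus_width_bits // 8        # data moved in one memory transfer
    transfers_needed = data_bytes / bytes_per_transfer
    transfers_per_us = transfer_rate_gtps * 1e3     # GT/s -> transfers per microsecond
    return transfers_needed / transfers_per_us

chunk = 8 * 1024 * 1024  # an 8 MB texture upload, purely for illustration

for name, width, rate in [("GTX 260 (448-bit GDDR3)", 448, 2.0),
                          ("HD 4870 (256-bit GDDR5)", 256, 3.6)]:
    print(f"{name}: ~{time_to_move_us(chunk, width, rate):.0f} us for {chunk // 2**20} MB")
```

Both come out around 73-75 microseconds for the same chunk, which matches the "it evens out in real-world usage" point above.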