Asian PingPong :
This is false: latency is specified in clock cycles, not in absolute time like milliseconds. The more clock cycles per second, the less time each cycle takes, so the same cycle count translates into less absolute latency.
First, there appears to be a lot of confusion about the relationships between latency, clock speed, time, and performance for DDR (1, 2, 3) and GDDR (1, 2, 3, 4, 5).
Go here:
http://www.thetechrepository.com/showthread.php?t=160
Now, with this newfound knowledge, let's look at two speeds (I'll use only the grey boxes from that table for this example).
DDR-1000 (DDR2) spec is CAS 4 (8.0 ns). DDR-1333 (DDR3) spec is CAS 6 (8.99 ns).
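If you want to see where those nanosecond figures come from, here's a quick back-of-the-envelope sketch (Python, purely for illustration): absolute CAS latency is the CAS cycle count divided by the memory clock, and the memory clock is half the DDR data rate.

```python
# Sketch of the latency arithmetic above. Not from a spec sheet, just the math:
# absolute CAS latency (ns) = CAS cycles / memory clock, where the memory clock
# is half the DDR data rate (that's the "double" in Double Data Rate).

def cas_latency_ns(data_rate_mts, cas_cycles):
    clock_mhz = data_rate_mts / 2          # e.g. DDR-1000 -> 500 MHz clock
    return cas_cycles / clock_mhz * 1000   # cycles/MHz gives us, x1000 -> ns

print(cas_latency_ns(1000, 4))  # DDR2-1000 CAS4 -> 8.0 ns
print(cas_latency_ns(1333, 6))  # DDR3-1333 CAS6 -> ~9.0 ns (the 8.99 ns above)
```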
Now some people are gonna go, "What!? 8.0 ns < 8.99 ns, so how is DDR3 faster when it has more latency?"
The answer is simple. The prefetch buffer is 4 bits deep for DDR2 but 8 bits deep for DDR3. That means each access transfers 4 bits per pin on DDR2 and 8 bits per pin on DDR3. We lose 0.99 ns, but we gain twice the amount of data. Make sense?
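To put numbers on that trade-off, here's a tiny sketch comparing bits delivered per pin against the access latency (same illustrative figures as above):

```python
# Illustrative trade-off: DDR3 pays ~1 ns more latency per access but moves
# twice as many bits per pin, so it still comes out ahead per unit time.
parts = {"DDR2-1000 CAS4": (8.0, 4),   # (latency in ns, prefetch depth in bits)
         "DDR3-1333 CAS6": (9.0, 8)}
for name, (latency_ns, bits) in parts.items():
    print(f"{name}: {bits} bits in {latency_ns} ns "
          f"= {bits / latency_ns:.2f} bits/ns per pin")
# DDR2: 0.50 bits/ns; DDR3: 0.89 bits/ns -- nearly double, despite the latency.
```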
Now, some of us will remember way back to when DDR2 first came out. Some applications ran faster with DDR than with DDR2 in testing. This was because many programs were pulling data from many different locations and only using 1 or 2 bits of it at a time. So even though you might have had "faster" DDR2, you could end up slower, because you'd receive 4 bits of data at a slightly higher latency, only use 2 of them, and the rest was extraneous and ignored. Latency was all that mattered. Of course, now that DDR3 is coming about, this problem is somewhat rearing its ugly head again. Some programs are hard-coded to work with certain bit sizes to maximize RAM performance, others aren't. This is why some benchmarks show DDR3 slightly slower than DDR2, and others show the opposite.
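Here's a hypothetical sketch of that early DDR-vs-DDR2 situation. The timings below are typical-era examples I'm using for illustration, not spec quotes:

```python
# Hypothetical model of the situation described above: a workload that only
# consumes 2 bits of each prefetched burst cares about latency, not burst size.
# Figures (DDR-400 CL2 ~10 ns, early DDR2-533 CL4 ~15 ns) are illustrative only.

def useful_bits_per_ns(latency_ns, prefetch_bits, bits_used):
    used = min(bits_used, prefetch_bits)   # can't use more than was fetched
    return used / latency_ns

print(useful_bits_per_ns(10.0, 2, 2))   # DDR-400 CL2 (2n prefetch)  -> 0.20
print(useful_bits_per_ns(15.0, 4, 2))   # DDR2-533 CL4 (4n prefetch) -> ~0.13
# The "slower" DDR wins here because the extra prefetched bits go to waste.
```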
Overall, DDR3 is the future, because it can transfer more data per unit time.
Sidenote: ever seen those cool PC ratings like PC2-5300 and PC3-14400?
The "PC" stands for personal computer, the number right after it is the DDR generation (in this case DDR2 or DDR3), and the last number is the peak bandwidth of the module bus in MB/sec (in this case about 5.3 GB/sec and 14.4 GB/sec).
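You can derive the rating yourself: a standard DIMM has a 64-bit (8-byte) data bus, so peak MB/sec is just the data rate times 8. A sketch in the same illustrative style as above:

```python
# Where the module rating comes from: peak MB/s = data rate (MT/s) x 8 bytes
# (a standard DIMM bus is 64 bits = 8 bytes wide).

def pc_rating_mb_s(data_rate_mts):
    return data_rate_mts * 8

print(pc_rating_mb_s(667))   # DDR2-667  -> ~5333 MB/s, marketed as PC2-5300
print(pc_rating_mb_s(1800))  # DDR3-1800 -> 14400 MB/s, i.e. PC3-14400
```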
The GDDR family isn't really related to the DDR family in terms of use: GDDR is designed for video cards, and DDR is more for general system memory. I remember reading, back when GDDR first came about, that the only significant difference was that GDDR can read and write to the same block of memory at the same time. I can't find that info on Google though.
GDDR > DDR for video processing. DDR > GDDR for system processing.
To answer the OP: there are differences, and they are quite complex. What you need to know is that each generation is supposed to be superior to the previous one. You are probably wondering whether you should buy a video card with GDDR2 or GDDR3. GDDR3 is better than GDDR2. What kind of performance difference you'll actually see is hard to ascertain; I couldn't find any hard-and-fast benchmarks on Google comparing the two types of memory performance-wise.
Sorry for the long post. I felt that a little good home schooling would help out everyone in this thread.