Solved

Help with Memory Interface and clock speed

May 27, 2014 6:05:22 PM

I'm confused on how the GPU works with clock speed and memory interface.

Example:
http://www.newegg.com/Product/Product.aspx?Item=N82E16814487003&cm_re=780ti-_-14-487-003-_-Product

How does the 384-bit memory interface work with the 7000 MHz memory clock?

Do you do 7000 * 1000 * 1000 (to convert to Hz) = 7,000,000,000, then 7,000,000,000 * 384 = 2,688,000,000,000 bits of data per second? Since the card has a memory capacity of 3 GB and 2.688 trillion bits = 2.32 GB per second, will the card take 1431 milliseconds to clear its memory? I'm kind of lost.

Also why is there a different memory clock and core clock?


Edit: I read an article and I sort of understand now.
http://www.playtool.com/pages/vramwidth/width.html

How do you get 6.4 GB/s from "For example, a video card with 200 MHz DDR video RAM which is 128 bits wide has a bandwidth of 200 MHz times 2 times 128 bits which works out to 6.4 GB/s."

Best solution

May 27, 2014 6:38:37 PM

You calculated the 2.688 trillion bits/s correctly, but you seem to have gone a bit wonky when converting to GB/s.

For the example you picked:

(7000 x 384) / 8 = 336000 MB/s = 336 GB/s (the bandwidth, as listed in the specifications for that card)

7000 is the speed of the memory in MHz
384 is the bus width in bits
8 is due to the fact there are 8 bits in a byte

An alternative example, taken from a reference R9-290X (which has slower memory, but a wider bus):

(5000 x 512) / 8 = 320000 MB/s = 320 GB/s
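If it helps to see that arithmetic as code, here's a minimal Python sketch of the same calculation (the helper name is just for illustration; the numbers are the two cards above):

    def bandwidth_gb_s(effective_clock_mhz, bus_width_bits):
        # effective clock (MHz) x bus width (bits) / 8 bits-per-byte = MB/s; / 1000 -> GB/s
        return effective_clock_mhz * bus_width_bits / 8 / 1000

    print(bandwidth_gb_s(7000, 384))  # GTX 780 Ti -> 336.0
    print(bandwidth_gb_s(5000, 512))  # R9 290X    -> 320.0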


As for the question of why there are different clocks for the memory and core, well, they're two completely separate things; there's no need for them to be the same.

EDIT: just seen your edit. Give me just a sec, and I'll answer your new question for you.

EDIT 2: For that example the 200 MHz speed has to be multiplied by 2 because it's DDR RAM, which stands for double data rate. So it becomes:

(400 x 128) / 8 = 6400 MB/s = 6.4 GB/s
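In the same Python style, the article's example with the DDR doubling spelled out (the x2 for double data rate is the only extra factor):

    real_clock_mhz = 200                         # the article's example card
    effective_clock_mhz = real_clock_mhz * 2     # DDR = two transfers per clock cycle
    print(effective_clock_mhz * 128 / 8 / 1000)  # 6.4 GB/s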
May 27, 2014 6:56:33 PM

Wow thanks, that helped me out a lot!

Were you supposed to add the GDDR5 multiplier to
"7000 is the speed of the memory in MHz
384 is the bus width in bits
8 is due to the fact there are 8 bits in a byte"?

(7000 * 5 * 384) / 8 = 1680000 MB/s = 1680 GB/s?

Edit: Ah, the 2.688 trillion figure was still in bits, not bytes; I forgot to divide by 8.

Edit 2: Also, how does core clock correlate with memory clock? If you overclock the core clock, does it impact your memory clock? If so, how?

Edit 3: Sorry about the number of edits I'm making, but I'm curious about what data is stored in the VRAM. I read that the data processed by the GPU is stored in the VRAM as a frame buffer, but what is this frame buffer? Pixel colors? Vectors? Textures?
May 27, 2014 7:03:47 PM

The GDDR5 multiplier, as it were, has already been applied; the actual memory speed is 1750 MHz. Unfortunately I can't for the life of me remember why it gets multiplied by 4 (as opposed to 2), but it does. You'll see the 1750 MHz reported in any monitoring program, like GPU-Z.
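A quick Python check with the 780 Ti's numbers, to show that the 4x is already baked into the 7000 figure (so adding another multiplier on top would count it twice):

    real_clock_mhz = 1750                        # what GPU-Z reports for the memory
    effective_clock_mhz = real_clock_mhz * 4     # GDDR5 moves 4 bits per pin per clock -> 7000
    print(effective_clock_mhz * 384 / 8 / 1000)  # 336.0 GB/s, matching the spec sheet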

EDIT: Whoops once again; just saw your edit :D 

EDIT 2: No, the two clocks are completely separate. You can overclock each one separately.
May 27, 2014 7:10:58 PM

Ohh GDDR5 is quad data rate? Whoops.

GDDR4 = 3
GDDR3 = 2?

DDR2 = 3?

I'm confused :p 

Edit: Oh, there is no GDDR4. So GDDR3 is 3x data rate, and so is DDR2? DDR3 is 4x?

May 27, 2014 7:23:58 PM

Sorry, sadly I can't remember precisely why you have to multiply it by 4 (I'm not sure I ever even knew the real reason :p ). I think it's down to something quite fundamental in how it works that leads to it transmitting 4 bits per clock cycle.

Just to add, GDDR5 is essentially modified DDR3; GDDR5 has better bandwidth, but worse latency. DDR3 is simply double data rate; the clock reported by monitoring tools is half of the effective speed (so 800 MHz for 1600 MHz RAM).
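Side by side in Python, the two reported-vs-effective conventions from that paragraph (numbers taken from the posts above):

    print(800 * 2)   # plain DDR3: 800 MHz reported clock -> "1600 MHz" effective
    print(1750 * 4)  # GDDR5: 1750 MHz reported clock -> 7000 MHz effective on the 780 Ti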
May 27, 2014 7:28:32 PM

It happens 4 times per cycle, right? Isn't that why you multiply it by 4?

Why do you multiply the core clock by 2 to get the memory clock and then by 4 again to get the effective memory clock? That turns out to be an 8x multiplier.

What do you mean by worse latency?
May 27, 2014 7:37:41 PM

By worse latency, I just mean worse than plain DDR3. It's a compromise to increase bandwidth, as that's more important in graphics usage (which is why DDR3 versions of graphics cards always perform worse than GDDR5 versions).

I'm a little confused by your comment about core clock. Are you referring to the core clock on a graphics card?
May 27, 2014 7:55:32 PM

Yes, I am. The 780 Ti has a core clock of 876 MHz; why do you multiply the core clock by 2 to find the memory clock? 876 * 2 = 1752; isn't the memory clock 1752?


Or am I doing this wrong? I'm just trying to figure out how you get the 7000 MHz effective memory clock from the core clock.
May 27, 2014 7:58:33 PM

No, the core clock and memory clock are completely separate. The fact one is close to half the other is mere coincidence.
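For what it's worth, putting the 780 Ti's two clocks side by side (numbers from earlier in the thread) shows the ratio is close to 2 but not exactly 2:

    core_clock_mhz = 876      # 780 Ti reference core clock
    memory_clock_mhz = 1750   # 780 Ti memory clock, before the GDDR5 4x
    print(memory_clock_mhz / core_clock_mhz)  # ~1.998 -- close to 2, but just a coincidence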
May 27, 2014 7:59:43 PM

Ooh now I understand. Thanks!
May 27, 2014 8:04:24 PM

No worries, glad to help :)  I wish I had more to impart, but I've already pushed the limits of my current knowledge with what I've said :D 
May 27, 2014 8:07:10 PM

This really helped me; I can actually see now why some graphics cards are better than others.
