Core Clock / Shader Clock / Memory Clock

Eviscerare

Distinguished
Oct 18, 2011
Okay, so my question about the three different clocks is simple: what exactly does each one do in terms of graphics performance? For example, which will increase FPS in a game the most (maybe a combination of all three is best), what does each clock affect during a game, which raises the GPU's heat the most, which can usually be overclocked the furthest, etc.

Also, given a video card with (arbitrary numbers):
10 MHz Core Clock
20 MHz Shader Clock
30 MHz Memory Clock

What's a good general method of step increases when finding a stable overclock? E.g. steps of 3 MHz core clock, 6 MHz shader clock, and 10 MHz memory clock.

PS: As previously mentioned, I'm looking for the actual details of what each does, not something like "core clock is like CPU clock but for GPU".
 
Actual details of what each does? Google or wiki mean anything to you?

The core clock is the speed the main parts of the chip run at. The shader clock is the speed at which the shaders (and, in Nvidia's case, some other parts of the chip) run. For Nvidia, this is typically about twice the core clock; for AMD, it's the same as the core clock, at least so far. The memory clock is the speed of the memory chips on the card and is part of the memory bandwidth equation.
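To make the memory clock's part of that equation concrete, here's a rough sketch of the usual bandwidth math. The 256-bit bus and the GDDR5-style four transfers per clock are assumed example values, not numbers from any card in this thread:

```python
# Rough memory bandwidth estimate:
#   clock (MHz) x transfers per clock x bus width (bits) / 8 bytes
def memory_bandwidth_gbs(memory_clock_mhz, transfers_per_clock, bus_width_bits):
    bytes_per_second = memory_clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8)
    return bytes_per_second / 1e9  # GB/s

# Hypothetical GDDR5 card: 1000 MHz actual, 4 transfers/clock, 256-bit bus -> ~128 GB/s
print(memory_bandwidth_gbs(1000, 4, 256))
```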

I'm not sure what you mean by "actual details". Check out some wiki articles on GPUs if you want to know what a GPU core does.

As for OCing, it depends on the card. Some are memory bound, and increasing the memory clock will help a lot. Most are core bound, and increasing the core speed will help more. I think Nvidia cards are tied, meaning if you increase the core by 10 MHz, the shader clock will go up by 20. Again, AMD cards haven't untied the shaders from the core, so it doesn't matter there.
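If you want to see how the tied clocks play out while stepping, here's a small sketch. The starting clock and step size are made up; the 2:1 shader link is the Nvidia-style behavior described above:

```python
# Step the core clock up and keep the shader clock tied at 2x core (Nvidia-style link).
core_mhz = 600      # hypothetical starting core clock
step_mhz = 10       # bump per iteration

for _ in range(3):
    core_mhz += step_mhz
    shader_mhz = core_mhz * 2   # tied: +10 MHz core means +20 MHz shader
    print(f"core {core_mhz} MHz -> shader {shader_mhz} MHz")
```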
 
Yeah, I already tried Google, but most of the results weren't very helpful. I'll try finding a wiki on GPUs later.

Sorry about being a bit vague, but I guess what I meant was simply: what are shaders? Are they what renders shadows in games? Or is that just a coincidental name similarity?

Thanks.

And yes, Nvidia cards have them linked for the most part. I think only a few mobile cards have the ability to unlink them and raise each separately.
 
Shaders are different from shadows.

Let's go back in time for a bit. Back when modern video cards were first being used, they had one pixel shader on a single (or more) pixel pipeline. As I hope you know, your screen is made up of a bunch of dots/pixels, each built from red, green, and blue subpixels, and each of those dots needs to be "shaded" the correct color. At first we had single pipelines, then dual, and eventually many more. ATI (now AMD), with the X1900 series of cards, even started putting three pixel shaders on each of its 16 pipelines so they could do even more work.

There are other types of shaders, but the math can be very similar. That's what led to DX10, which calls for a "unified" shader architecture so that you don't have pixel shaders sitting around doing nothing because the screen at that moment needs more vertex shaders.

Short answer: shaders have nothing to do with shadows, but they are the things that do all the math needed to render the screen.
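As a loose illustration of "the math needed to render the screen", here's a toy per-pixel calculation in Python. Real shaders run on the GPU in a shading language like HLSL or GLSL; this sketch only mimics the flavor of the arithmetic, and the normal, light direction, and base color are made-up example inputs.

```python
# Toy "pixel shader": compute a simple diffuse-lit color for one pixel.
def shade_pixel(normal, light_dir, base_color):
    # Dot product tells us how directly the light hits the surface.
    intensity = max(0.0, sum(n * l for n, l in zip(normal, light_dir)))
    return tuple(c * intensity for c in base_color)

# Surface facing the light, reddish base color -> fully lit reddish pixel.
print(shade_pixel((0.0, 0.0, 1.0), (0.0, 0.0, 1.0), (0.8, 0.2, 0.2)))
```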
 
Solution
From what I've understood in my two weeks or so of overclocking, the core clock is the speed at which the GPU churns out polygons or pixels or whatever. The memory clock is the speed at which data travels to and from the card's memory. I have an ATI card, so I can't adjust the shader clock and don't know how it works. Look at 4745454b's post above for info on shaders.

A fast core means that your graphics card produces more... errr... graphics per second. :) However, if your memory is slow, those graphics will have to wait around in your card. Also, from my experience (which is not much), the memory has a higher impact on performance: I've gotten about 3 FPS more per 30-35 MHz of memory, while only about 1 FPS for about 25-30 MHz of core.

I usually overclock in 5 MHz increments, play Skyrim or NFS Hot Pursuit for an hour, and then increase by another 5 MHz.
My core seems pretty stable, as it was 600 MHz and I've overclocked it to 705. But the memory hits a wall at 487 MHz, with the stock speed being 400 MHz.
 
Also, from my experience (which is not much), the memory has a higher impact on performance.

While I don't know what card you have, I can tell from your frequencies that your card has GDDR2 RAM. When a modern GPU is paired up with slow GDDR2, it's being starved for memory bandwidth, so I wouldn't be surprised that you see bigger gains from memory increases than from core increases. For those who have GDDR3/5 memory, this is usually not the case.

Edit: I should probably also add that you shouldn't get so caught up on clock speed either. 500 MHz on an AMD 5xxx card can't be compared to 500 MHz on an Nvidia card. You shouldn't even compare it to a 6xxx card, as the architectures are different.
 


I'm sorry if I posted something wrong, I'm just trying to help. Like I said, I don't have much experience.
Anyway, Merry Christmas!
 
Not so much a good GPU as good memory. As I said, you didn't tell us what card you have, but I can tell from the memory frequencies that it uses GDDR2. GDDR2 topped out around 500 MHz actual, 1 GHz effective; anything running faster than that was on GDDR3. GDDR2 is rather limiting for today's GPUs, meaning your card is memory bandwidth bottlenecked. Not all cards are like this. If you were OCing a 6970, you should see bigger gains by increasing the GPU clock rate as opposed to the memory.
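Plugging the numbers into the same kind of bandwidth math as earlier shows the gap. The 128-bit bus width here is just an assumption for illustration:

```python
# Compare a GDDR2-era card against a GDDR5-era card on the same (assumed) 128-bit bus.
def bandwidth_gbs(clock_mhz, transfers_per_clock, bus_width_bits=128):
    return clock_mhz * 1e6 * transfers_per_clock * (bus_width_bits / 8) / 1e9

print(bandwidth_gbs(500, 2))   # GDDR2 at 500 MHz actual, 1 GHz effective: ~16 GB/s
print(bandwidth_gbs(1000, 4))  # GDDR5 at 1 GHz actual, 4 GHz effective: ~64 GB/s
```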