
Bottlenecking query

May 21, 2009 6:14:52 PM

Hi all,
How can you tell whether a CPU is bottlenecking a GPU? Which values do you compare to be able to tell?

Thanks

AC


May 21, 2009 6:53:34 PM

But does the pancake bottleneck the ears?? THIS is what I'm saying...sheesh I give up;)
May 21, 2009 7:00:40 PM

ac3144,

We need more information. Which CPU & GPU? What resolution do you game in? The lower the resolution, the more the CPU will have an impact on your FPS; the higher the resolution, the more the GPU comes into play. It's tough to give you realistic information without knowing at least these three things.
May 21, 2009 7:04:42 PM

Hi Serpent, I wasn't after specifics for my system, just an idea in general. Through a thread posted simultaneously (see 'learning about bottlenecking' in this very forum) I have managed to get a basic conceptual grasp thanks to Daedalus.
If you have any more input though I am all ears ;) 
Thanks
AC
May 21, 2009 8:48:49 PM

If your CPU runs at 2.8 GHz and you overclock it to 3.8 GHz but still get the same GPU benchmark scores or the same framerates, then your GPU is the bottleneck. This is one way to tell.

On the other hand, if your GPU scores rise along with the overclock, it means your CPU had been holding it back.
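
If you want to put numbers on that comparison, here's a purely illustrative Python sketch (the framerates and the 5% tolerance are made up, not measured):

    def overclock_check(fps_stock, fps_overclocked, tolerance=0.05):
        # Same game, same settings; only the CPU clock was changed.
        gain = (fps_overclocked - fps_stock) / fps_stock
        if gain <= tolerance:
            return "GPU is the bottleneck (a faster CPU changed nothing)"
        return "CPU was the bottleneck (framerate scaled with CPU speed)"

    print(overclock_check(60, 61))  # barely moved -> GPU-bound
    print(overclock_check(60, 78))  # big jump -> the CPU was holding things back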
May 21, 2009 9:06:26 PM

The idea is relatively simple and straightforward, though the actual process is a little more involved. Basically, it goes like this: first, you take a sample of a game's performance. Then you alter the game settings to change the load placed on the CPU or the GPU. It is imperative that you keep the load on one of those two components the same, otherwise the whole test is shot. Once you change the settings, run the same test again and see what happens. That's basically the whole thing. Here are a couple of examples:

You run the Counter-Strike: Source benchmark and get a result of 100 fps. Then you increase the graphics settings (more load on the GPU) and run it a second time, again scoring 100 fps. This indicates the CPU was the limiting factor, since the framerate did not change when you increased the workload on the GPU.

Next, you run a benchmark on Oblivion, scoring 50 fps. You lower the graphics settings (less load on the GPU) and run the benchmark again for a new score of 65 fps. This means the GPU was the limiting factor, since the framerate increased when you lowered the workload on the GPU.


To get decent results, you need to remove every variable except for the game setting(s) you alter: replay the same level, the same part of the level, do exactly the same things, and have other entities/creatures behave the same way. The more consistent you can keep things, the more accurate your results will be.
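
If it helps, the decision logic boils down to something like this purely illustrative Python sketch (the function name, the 5% tolerance, and the threshold are my own; only the fps numbers from the two examples above are reused):

    def bottleneck_verdict(fps_before, fps_after, tolerance=0.05):
        # Between the two runs, ONLY the graphics settings (GPU load) changed.
        change = abs(fps_after - fps_before) / fps_before
        if change <= tolerance:
            # Framerate ignored the GPU-side change -> the CPU was the limit.
            return "CPU-limited"
        # Framerate tracked the GPU-side change -> the GPU was the limit.
        return "GPU-limited"

    print(bottleneck_verdict(100, 100))  # CS:S example: settings raised, still 100 fps
    print(bottleneck_verdict(50, 65))    # Oblivion example: settings lowered, 50 -> 65 fps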
May 22, 2009 9:06:56 AM

Here's what I do to get a general idea of what's throttling what.

I have a multiple-monitor setup, so for single-monitor builds you'd have to log the results and then go over them in Excel.

Start GPU-Z and click the tab where GPU load is displayed.
Start Task Manager and click the Performance tab where CPU load is displayed.
Start the game and compare both load readings as you play. I also usually run Fraps just to know the FPS.

Most of the time my GPU runs at 99%, but when it drops I usually catch at least one core of my CPU near 100%.

GPU: 4850
CPU: Q6600
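
If you'd rather script the logging than eyeball two monitors, here's a rough Python sketch of the same idea, assuming the third-party psutil and GPUtil packages (GPUtil reads NVIDIA cards via nvidia-smi, so for an ATI card like my 4850 you'd stick with GPU-Z's own log file). Treat it as a sketch, not a polished tool:

    import time
    import psutil   # per-core CPU load
    import GPUtil   # GPU load, NVIDIA only

    with open("load_log.csv", "w") as log:
        log.write("time,gpu_load_pct,busiest_core_pct\n")
        for _ in range(300):                               # roughly five minutes of samples
            gpu_pct = GPUtil.getGPUs()[0].load * 100       # first GPU, as a percentage
            core_pcts = psutil.cpu_percent(interval=1, percpu=True)
            log.write(f"{time.time():.0f},{gpu_pct:.0f},{max(core_pcts):.0f}\n")
            # GPU pinned near 99%: GPU-bound. GPU load dipping while one
            # core sits near 100%: the CPU is throttling the GPU.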
May 22, 2009 9:09:01 AM

@efeat
Thanks for your reply. Can I ask what role RAM would play in the test you describe above? I'm thinking this would have an impact on the CPU performance in your test. Also, it seems from various reviews I have read that certain GPU chips/manufacturers work better with certain games (and vice versa), which would appear to be a confounding factor that would be difficult to control for on a single-test basis.

For example, it seems that Benchmark06 gives higher scores to Intel chips than to AMD, which could lead the unsuspecting to believe that AMD is inferior when that would not necessarily be the case. In a similar vein, it would be difficult to draw a firm conclusion from your test alone.

I'm not shooting you down, honestly - thanks very much for your input. I guess that what you describe would give you a rough guide, just that you shouldn't rely solely on that result. Is that a fair viewpoint?

Thanks and please continue chipping in everyone, I've already learnt masses using these forums.

AC
May 22, 2009 2:14:04 PM

IMO, just look at the CPU usage while you play the game (using perfmon or other tools); if the usage is close to 100%, then yes, there is probably a bottleneck. It can be more complex than that, but it's always a good starting point.
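
One caveat: the overall CPU percentage can look modest while a single core is pegged, so it's worth checking per-core load. A minimal sketch, assuming the third-party psutil package:

    import psutil

    per_core = psutil.cpu_percent(interval=1, percpu=True)
    print("per-core load:", per_core)
    if max(per_core) > 95:
        print("At least one core is pegged - likely a CPU bottleneck for this workload.")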
May 22, 2009 5:20:14 PM

ac3144 said:
@efeat
Thanks for your reply. Can I ask what role RAM would play in the test you describe above? I'm thinking this would have an impact on the CPU performance in your test. Also, it seems from various reviews I have read that certain GPU chips/manufacturers work better with certain games (and vice versa), which would appear to be a confounding factor that would be difficult to control for on a single-test basis.

For example, it seems that Benchmark06 gives higher scores to Intel chips than to AMD, which could lead the unsuspecting to believe that AMD is inferior when that would not necessarily be the case. In a similar vein, it would be difficult to draw a firm conclusion from your test alone.

I'm not shooting you down, honestly - thanks very much for your input. I guess that what you describe would give you a rough guide, just that you shouldn't rely solely on that result. Is that a fair viewpoint?

Thanks and please continue chipping in everyone, I've already learnt masses using these forums.

AC


Benchmark06...you mean 3dmark06?
3dmark06 is meant to give a total score for your system, not to determine the slowest point of the system or any 'bottlenecking' that's occurring. In fact, part of a 3dmark06 score is derived from two tests dedicated solely to checking the power of your CPU. Intel chips have had a leg up on AMD chips for a couple of generations now, so the 3dmark06 results are not entirely inaccurate when it comes to illustrating the performance difference between Intel and AMD. Of course, it should be noted that 3dmark06 places a very high value on your CPU, whereas most real games value a fast GPU more.

Extra RAM would help the CPU, as it means the CPU can go straight to the RAM for its data instead of having to wait for the hard disk. That waiting would simulate a slow CPU, when in reality your CPU is being held back by your hard disk I/O. However, RAM is one of those things where you don't get ever-increasing performance. Once you have enough memory to stop going to the hard disk, adding even more RAM wouldn't do anything - the extra sticks would just sit empty and unused. This is why 3-4 GB is the sweet spot and 6-8 GB is considered overkill. Unless you are doing very specific things, like professional work (heavy image/video editing, engineering, etc.), you won't see much benefit past 4 GB of memory.

I get the impression that you have some incorrect assumptions going into this topic, based on what you say about GPU chips performing differently in various games. Yes, you are correct that some games tend to favor a particular manufacturer or a particular card, but you don't compare results between games or GPUs. When looking for bottlenecks, you have to compare a system against itself, in each game.

If you ran a test on UT3 (which is traditionally known as a CPU-heavy game) and discovered your CPU was the limiting factor, you would conclude "my GPU is fast enough, it's my CPU that's holding me back." However, you cannot turn around and then apply that same conclusion to a game like Crysis, even though you're using the exact same computer. On a similar note, you can't take the results you got for UT3 and apply them to a completely different system that's also running UT3.

Look at the examples I used in my previous post - it's entirely possible to have both happen on one computer. The Source engine is typically CPU intensive while Oblivion is GPU heavy. From the Oblivion data you'd be able to determine that with your particular system, with that particular game, with those particular settings, your GPU is the limiting factor for your framerate. Nothing more can be derived from it. When you change game titles or change computers, you need a whole new set of data - you can't carry anything over.

After seeing enough benchmarks and becoming familiar with CPUs, GPUs, and games, you eventually learn how to just eyeball some things. Almost everyone can eyeball the absurd setups, like matching a GTX 295 with something like a 1 GHz Pentium III, or matching a Core i7 with a PCI GeForce 2 MX. It doesn't matter what games you run; you can tell those setups are horribly mismatched. However, as the components get closer in strength, it becomes harder to tell. Anybody could guess that a stock Q6600 with a single 8800 GT running Crysis @ 1920x1200 would be pretty GPU limited, but how about when the 8800 is replaced by a GTX 260? Resolution lowered to 1440x900? Settings turned up to high/very high? Not so easy to guess then, is it? I'm sure some people still could, but not many.

So, yeah, moral of the story is that when searching for the weakest link/bottleneck, computers can only be compared against themselves on a particular game. Changing the game settings gives you different data, changing the game title or the computer means you get to start a new set of data.
May 24, 2009 9:22:35 PM

@efeat -"If you ran a test on UT3 (which is traditionally known as a CPU-heavy game) and discovered your CPU was the limiting factor, you would conclude "my GPU is fast enough, it's my CPU that's holding me back." However, you cannot turn around and then apply that same conclusion to a game like Crysis, even though you're using the exact same computer."

This is the part I'm not getting with what you are saying. What would happen if you got the opposite result to the one you described using Crysis, i.e. "my GPU is holding me back"? Then surely you are none the wiser?

Using games as a proxy for component measurement seems to me to only give you a rough idea of potential bottlenecks. I can see how knowledge gained over time allows you to just 'eyeball' a situation and 'know' where the problem may lie, but is there a way in which component specifications can be compared directly to accurately identify a bottleneck? Or is that too involved a procedure to bother with? Is a rough idea good enough?

Also, you are right that I don't have a good grasp of computing technology, and I am all ears to any helpful input, so please don't think I am being awkward for the sake of it. I am a scientist of sorts and naturally question my own and others' knowledge. I mean no offence; it's just that sometimes it helps to sort the wheat from the chaff, if you know what I mean (I'm not including you in that, BTW ;))
May 24, 2009 9:23:04 PM

Oh and yes I did mean 3DMark06, sorry ;) 
May 25, 2009 9:29:59 AM

If you did my test the opposite way (ran a test on Crysis and concluded your GPU was the bottleneck) then the only thing you would be able to conclude is that "for X settings on Crysis, I am GPU limited." In that circumstance (using X settings on Crysis) you wouldn't be oblivious to a CPU bottleneck, because there would not be a CPU bottleneck.
A CPU/GPU bottleneck is not something a system simply has or doesn't have. While there will always be some kind of bottlenecking/limiting going on (a computer can only do an operation as fast as its slowest part allows), the actual piece of hardware doing the bottlenecking changes depending on what the computer is doing.

Pretend you had a Core i7 960 @ 5 GHz and matched it with a GeForce 2 card. Yeah, that's horribly mismatched, and in every single game you play, you will be GPU bottlenecked.

Crysis? GPU limited.
Half-Life 2? GPU limited.
Far Cry 2? GPU limited.

However, what happens when you run something like Calculator? CPU limited. A virus scan? CPU or hard-disk limited. Even though the graphics card is so weak, the load being placed on the GPU in those tasks is ridiculously small - so small that even a 10-year-old graphics chip ends up waiting on the latest and greatest CPU. Once you try to run a game, however, the GPU quickly gets overloaded and the entire system has to wait on it.

With closer match ups between the CPU and GPU, it comes down to the specific game. Crysis and Far Cry 2 tend to stress your GPU more, so it's more common to have a GPU bottleneck on those titles. Half-Life 2/Source Engine games tend to be very CPU sensitive, so it's more common to see a CPU bottleneck on those titles.



Here's one final example that I think will more clearly illustrate the point:



First, take a hypothetical system: a Core 2 Duo @ 3.0 GHz, a Radeon 4870 1GB, and a 1920x1200 screen. Next, we're going to take a game title. We'll use Crysis, since everybody, love it or hate it, at least knows what it is. Last, we need to pick the in-game settings that we like to use while playing. Let's say we typically play with everything on "High".
So we're playing along in a game of Crysis and we notice that our framerates are not as smooth as we'd like them to be. We want to improve them by lowering the in-game settings, but first we need to figure out which component is being worked to its maximum and needs the lighter load: the CPU or the GPU.

(Note that I am just making this data up off the top of my head)
1) We take the base data by running a benchmark of our current settings: 1920x1200, high settings. We get 37 fps average.
2) We lower the load on the GPU and run the benchmark again: 1920x1200, medium settings. We get 54 fps average.
Since our framerates went up, we can conclude that at 1920x1200 with high settings, we are GPU bound (commonly known as being bottlenecked by our GPU.)
3) We try to lower the load on our GPU even further. 1920x1200, low settings. We get 54 fps average again.
Since our framerates stayed the same, we can conclude that at 1920x1200 with medium settings, we are CPU bound.
4) Even though we are getting faster framerates, medium settings are a bit low for our liking. We raise the settings to a mix of medium/high. We get 51 fps.
5) We lower the settings just a smidge. 54 fps.
6) We raise the settings as slightly as we possibly can from step 5. 53 fps.

At this point, we have a good idea of where the bottleneck shifts between the CPU and GPU.
Whatever settings we used at step 5 are the 'turning point.' Graphics settings more aggressive than these mean your GPU will be the limiting factor in your performance; graphics settings less aggressive than these mean your CPU will be the limiting factor.

When you get a new game/application or new system/hardware, you have to start this entire process over again. As others have said, you can also just look at performance monitoring tools to see which component is running at 100% load, but typing out these more elaborate examples helps to illustrate the concept more clearly.
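
If you log those runs, spotting the turning point is mechanical: find the framerate ceiling the CPU can deliver, then see which settings still reach it. A purely illustrative Python sketch using the made-up numbers above:

    # Runs ordered from heaviest to lightest graphics settings (made-up data).
    runs = [
        ("high (step 1)",        37),
        ("medium/high (step 4)", 51),
        ("step 6 mix",           53),
        ("step 5 mix",           54),
        ("medium (step 2)",      54),
        ("low (step 3)",         54),
    ]

    ceiling = max(fps for _, fps in runs)   # the framerate the CPU can feed, ~54 fps here
    for settings, fps in runs:
        # Real measurements would need a small tolerance instead of exact equality.
        bound = "CPU-bound" if fps >= ceiling else "GPU-bound"
        print(f"{settings:>22}: {fps} fps -> {bound}")
    # The heaviest settings that still reach the ceiling mark the turning point (step 5).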


(it's 4:30 a.m now...I'll proofread this later)
May 25, 2009 9:41:02 AM

Hi efeat, thanks for your reply. The bottom line is that bottlenecking is a relative AND dynamic phenomenon. When people post on the forums asking whether x component will be BN'ed by y component, it is possible to say roughly whether they are matched, but this will depend on a lot of variables, such as the software being used, etc. Is this right?!

Thanks very much for explaining this to me and I hope you get some sleep!

Thanks
AC
May 26, 2009 4:19:11 AM

Yes, you have the concept correctly now. Bottlenecks are dynamic and change depending on the hardware in use and the applications being run.

Most people making posts on this forum about bottlenecking are just looking to see if the parts they are looking at are roughly matched. This is why you'll often see people asking back what resolution they use, what games they play, and how good of performance they are looking for - all those pieces of information can help give a more accurate prediction.