CPU Scaling with HD4870x2

September 3, 2008 2:39:06 AM

Apologies if this is a repost.

http://www.legionhardware.com/document.php?id=770&p=0

Interesting article on how CPU core counts and clock speeds scale with the fastest GPU there is at the moment.

I find it shows that games are using multiple cores more and more, as these benchmarks put the quads on top of the duals more often than not. It would be nice if an article like this could be produced with a GPU that's more in most people's price range, i.e. a 9800GT or GTX, or on the other side of the fence an HD4850 or HD4870. Clearly it shows that overclocking the CPU can gain a lot more performance on a high-end GPU.


September 3, 2008 2:50:29 AM

chookman said:
Clearly it shows that overclocking the CPU can gain a lot more performance on a high-end GPU.


It doesn't really show that though, due to the way they set it up. They only overclocked the Q9650, the most expensive CPU, to 3.6GHz, yet didn't overclock the Q6600 at all. I'd like to see how much a Q6600 @ 3.6GHz would outperform a stock Q9650, or how much a Q9650 @ 3.6GHz would outperform a Q6600 @ 3.6GHz, if there's any difference in games at all.
September 3, 2008 4:29:16 AM

They overclocked the E8400, X4 9950 and 6400+ as well. I would think the underclocking they did on the other CPUs also shows that raising the speed on the same CPUs would have similar effects.
September 3, 2008 6:55:18 AM

Um, only two games showed a decent return from a higher-clocked CPU: Quake Wars and UT3, which gained about 10%. The rest got about a 1-2% boost going from the Q9650 @ 2.66GHz to the Q9650 @ 3.6GHz.

What I find more interesting is that with the same GPU and amount of memory, a Phenom X4 9950 @ 3GHz is either matched or beaten by a Q6600 @ 2.4GHz.

I guess I will keep this link, because it shows that most games are GPU dependent at resolutions like this and higher.
September 3, 2008 7:42:55 AM

What I find interesting is, there's no ceiling in a few of these games. In other words, no matter what the overclock, the FPS keep climbing. This wasn't true a few short years ago: if you cranked an FX60 past 2.6, it showed no improvement. That's not the case now. The games chosen are somewhat suspect, as they're more adaptable to multithreading, and it shows with the quads. They wasted a lot of time on this bench running quads at below-stock speeds, as no quads actually ship at those numbers. Would have been better to see more games, but as is, very informative.
September 3, 2008 8:09:44 AM

JAYDEEJOHN said:
What I find interesting is, there's no ceiling in a few of these games. In other words, no matter what the overclock, the FPS keep climbing. This wasn't true a few short years ago: if you cranked an FX60 past 2.6, it showed no improvement.


So I guess Intel's claim that the CPU is more powerful than the GPU is not really correct.
September 3, 2008 8:21:16 AM

All this shows is that GPUs have gained more ability in what they do than CPUs, as they grow at a higher performance rate. Remember, two years ago cards were maybe 90nm, while Intel had already been on 65nm for a while. Starting early next year, GPUs will be going to 40nm, a smaller process than CPUs. Plus, there have been many changes regarding bandwidth, such as PCIe 2.0, which almost quadrupled it, as well as things like GDDR5, which can run as fast as 6.4GHz effective. These things/numbers were below CPU numbers just two years ago. The CPU can't live without the GPU, and vice versa. Now if only they made a faster CPU....
September 3, 2008 9:27:10 AM

^Jaydee, what do you mean? Take a stock C2Q @ 2.66GHz. Most of the games only saw an increase of 3-4 FPS, or about 1-2%, @ 3.6GHz. That's almost a 1GHz overclock, and the increase was minimal, and that's at 1920x1200 with everything and AA/AF maxed out.

How does that show that the CPU is in any way limiting the games? It does show the differences between dual, quad and CPU architectures, but the difference between 2.66GHz and 3.6GHz is at most 10%, and that's only in Quake Wars and UT3. And 10% is not as much as swapping in a GPU that could potentially increase your FPS at that resolution by 30%+.

It's simple. You look at the same CPU from its lowest speed to its highest speed and then you see the difference. You can't compare a Phenom @ 3GHz to a C2Q @ 3.6GHz.

Hell, in one benchmark a Q6600 @ 2.4GHz performs the same as a Phenom 9950 @ 3GHz.

I will agree that at much lower clock speeds it does make a difference, mainly because at the low end the average FPS is CPU dependent. But once you hit 2.4GHz+ that dependence goes away, and the FPS increase is not even worth the overclock. And most CPUs sold today are around 2.4GHz on average.

A good CPU will not limit what the GPU can truly do, especially since most enthusiast or gaming PCs normally have a good CPU with either a high stock speed or a solid overclock. I will also agree that neither the CPU nor the GPU can live without the other. It's a give-and-take relationship.
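
To put rough numbers on that point, here's a minimal sketch of how you could quantify it. The FPS figures below are illustrative stand-ins in the ballpark of what this thread quotes, not data from the article:

def scaling_efficiency(fps_stock, fps_oc, clk_stock_ghz, clk_oc_ghz):
    """Fraction of a relative clock increase that actually shows up as FPS."""
    fps_gain = fps_oc / fps_stock - 1.0          # e.g. 0.02 = 2%
    clk_gain = clk_oc_ghz / clk_stock_ghz - 1.0  # e.g. 0.35 = 35%
    return fps_gain / clk_gain

# Hypothetical 2.66GHz -> 3.6GHz run (a ~35% overclock):
print(scaling_efficiency(100, 102, 2.66, 3.6))  # ~0.06: the GPU-bound 1-2% case
print(scaling_efficiency(100, 110, 2.66, 3.6))  # ~0.28: the UT3/Quake Wars 10% case

A result near zero means the overclock is wasted on a GPU-bound game.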
September 3, 2008 9:36:03 AM

Here's their conclusion: "Finally, Unreal Tournament 3 painted an interesting picture, as there was a significant difference in performance between the fastest and the slowest processors tested. Once again the Core 2 Quad Q9650 was the king here, delivering 132fps at its default clock frequency of 3.0GHz, which was impressive when compared to the E6850 for example, which managed just 118fps at the same frequency. The best Phenom X4 processor was of course the 9950, which averaged 113fps, edging out the old E6700 by 2fps.

Again the Radeon HD 4870 X2 limits were not seen, as the results continued to improve with the processor's clock speed. While we always suspected that the world's fastest graphics card would require the world's fastest processor, we know now for certain that it does. Those with lower-end Core 2 Duo/Quad processors can squeeze a great deal of performance out of the Radeon HD 4870 X2 simply by overclocking their processor past 3.0GHz, which is well within the limits of these processors."

Now, after reading their conclusion, reread what I said. Where did I say it limits the experience? I didn't. I said CPUs today, even overclocked, won't see the end potential of this card, which wasn't true a few years ago. Simple as that. So I conclude that, at the rate of performance growth in GPUs, and the demands that COULD be placed on future games, the CPU will have to carry a lesser load compared to the ever-higher performance of GPUs.
September 3, 2008 10:19:20 AM

Here's the most telling thing from the article: "Out of the five games used for testing, Devil May Cry was the only one that allowed the processors to find the limits of the Radeon HD 4870 X2. It would seem that this game is more GPU dependent than the others." This is something to take note of. As I've said before, this is just the beginning.

Like I said, doing all those tests at lower-than-shipping speeds is a shame, as their focus was obviously comparing older CPUs to newer ones, or Intel vs AMD. What they barely touched on was the fact that current CPUs at stock speeds can't get the most out of these cards, and as games become more demanding, developers will have to keep in mind the slower progress of CPUs vs GPUs. I saw some high FPS in these games, nothing dramatically unneeded though.

If you follow this to its conclusion, that's all I've been saying all along. Future games will become more and more limited by CPUs, simply because CPUs can't keep up with demands; their speed improvements, whether it's GHz or simply IPC, aren't there. This will allow GPUs to shine, and I'm thinking AMD saw this first, and now with Larrabee, Intel does as well. In essence, the CPU won't be as glamorous, so to speak, vs the GPU tomorrow as it is seen today, which ultimately converts to sales.
September 3, 2008 3:20:02 PM

All current games are ultimately GPU bound when coupled with a decent CPU.

Sure, a faster CPU will make the pointless difference between 76 and 79 FPS, but a faster CPU will not make the relevant difference between "unplayable" and "smooth as butter". If your game is CPU bound, you're just not taxing the card enough (low resolution, low details, old game).

Some reviews claimed 8800 GTX SLI setups were CPU bound two years ago. Yet, strangely, there are formidable performance leaps when the same CPUs are used now with GTX280 SLI setups.
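
A minimal sketch of the "not taxing the card enough" point, assuming (simplistically) that GPU time per frame scales with pixel count while CPU time per frame stays fixed, and that the slower side sets the frame time. All the millisecond costs here are made up for illustration:

CPU_MS = 8.0             # assumed per-frame CPU cost (state, AI, draw calls)
GPU_MS_PER_MPIXEL = 4.0  # assumed per-frame GPU cost per megapixel

def fps(width, height, cpu_ms=CPU_MS):
    gpu_ms = GPU_MS_PER_MPIXEL * (width * height) / 1e6
    return 1000.0 / max(cpu_ms, gpu_ms)  # slower stage sets the frame time

print(fps(1280, 1024))              # ~125 FPS, CPU-bound: a faster CPU helps
print(fps(2560, 1600))              # ~61 FPS, GPU-bound
print(fps(2560, 1600, cpu_ms=6.0))  # still ~61 FPS: a "CPU overclock" changes nothing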
September 3, 2008 3:41:33 PM

Exactly. What this means is the GPUs have gotten better, and it's still the same ol' CPUs. Basically you're proving what I'm saying. Those old GTXs could be topped out with these CPUs. That's no longer the case.
September 3, 2008 8:21:51 PM

^You are still totally misreading it. The CPU is not stopping the 4870X2 from performing its best at that resolution. If it were, then the difference between the Q9650 @ 2.66GHz and @ 3.6GHz in all the games would have been much greater than 1-2%, or even more than 10%.

UT3 is supposed to be multicore friendly as well, so besides the Q9650 having about 10% better IPC, it is also a quad compared to a dual. With that IPC advantage plus the extra cores, it should clearly outperform a dual in a game that is said to be multi-core friendly, although from the results I would say the quad is not as much better as it should be.

At that resolution and higher, most games are limited by the GPU. The only thing the CPU can limit is the physics, since most games use Havok and run it on the CPU. But even that will not slow the 4870X2 down.
September 3, 2008 8:27:29 PM

It's not only physics that makes demands on the CPU in games; actually, far from it. And so you're saying that their own conclusions aren't right? Yet you defend their findings? I'll again quote the conclusion: "Out of the five games used for testing, Devil May Cry was the only one that allowed the processors to find the limits of the Radeon HD 4870 X2. It would seem that this game is more GPU dependent than the others." Now what do you mean exactly? I find this very straightforward in their conclusion.
September 3, 2008 8:32:05 PM

Rendering games through the GPU is taxing on the CPU even without the inclusion of physics. As games become more advanced, there'll be an even higher demand on them. An overclocked CPU can't get the best out of this card in certain games already: the higher you crank the CPU, the higher the FPS go. That means the "bottleneck", if you will, is the CPU, not the GPU. This is widely known. What's so hard to understand? The overall balancing curve for most of the games in this article shows it. Two short years ago, you needn't overclock your CPU at all to find a ceiling on FPS; the GPUs were maxed out. That's not so now.
September 3, 2008 9:03:14 PM

I think their results are interesting across the board of the different CPUs, especially the Phenom results.

But I do not agree with their conclusions. Let's go with the quad cores. Their lowest-speed quad was the Q6600 @ 1.6GHz. They then compare that to a 3.6GHz Q9650, which alone has a 10% IPC advantage. But if you take the same chip @ 2.66GHz and then @ 3.6GHz, in 3 of 5 games the gain was only 1-2% with almost a 1GHz overclock. And just because UT3 and Quake Wars show 10%, they get all hyped up and think the CPU is limiting games, when the other games show 1-2% with a 1GHz overclock?

No, I think they are assuming too much. Current GPUs are not limited by the CPU, and from 2.4GHz up, as their tests show, the FPS gains are minimal.

Let's try DMC4 as an example. The FPS increase from the Q6600 @ 2.4GHz to the Q9650 @ 3.6GHz is 5 FPS. 5 FPS. If that's the CPU limiting the GPU, then what the hell? There is a 1.2GHz clock difference plus roughly 10% better IPC, so it's effectively like comparing against a nearly 4GHz Q6600, and there is only a 5 FPS increase. With that much extra power added to the CPU, if the CPU were limiting the GPU, it should have been a much bigger boost than that.
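
Spelling that arithmetic out in a quick sketch (the 10% IPC edge and the 5 FPS delta are the figures from this thread; the ~75 FPS baseline is an assumed round number for illustration):

q6600_clk = 2.4   # GHz, stock
q9650_clk = 3.6   # GHz, overclocked
ipc_edge = 1.10   # Penryn vs Kentsfield, per this thread

effective_clk = q9650_clk * ipc_edge        # ~3.96 "Q6600-equivalent" GHz
cpu_gain = effective_clk / q6600_clk - 1.0  # ~0.65, i.e. ~65% more CPU throughput
fps_gain = 5.0 / 75.0                       # ~7%, assuming a ~75 FPS baseline

print(effective_clk, cpu_gain, fps_gain)
# ~65% more CPU for a ~7% FPS bump: the GPU, not the CPU, sets the frame rate here.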

But here is the kicker: at that resolution and with that much AA/AF, most of the textures are loaded into the GPU's memory and all the rest is processed through the GPU.

So yes, I think their conclusion is wrong, when clearly from 2.4GHz+ there is not enough FPS gain to conclude that the CPU is limiting the GPU.
September 3, 2008 9:11:57 PM

If the GPU is sitting waiting on the CPU, you'll see higher FPS with higher CPU clocks. There are better findings out there, and better games to show what I'm saying as well. When I come across them, I'll post them. Thing is, this never used to happen: no matter what you did CPU-wise, it wouldn't matter, not one frame. That's no longer true. And nine months from now, when there are solutions that are 20-30% faster or more, we will see more of it, and eventually it will become common.
September 3, 2008 9:18:12 PM

I'd like to add this. Two years ago, when it didn't matter, you needn't even overclock the CPU to max out your GPU, and that was at a 1280x1024 resolution. So when we get to the point where we see no increases even at 2560x1600, we've essentially run out of room for demands on the GPU. To me, this is very telling. If Intel is right, and the GPU is dead, and GPGPU functions aren't all that, then it's simple: just don't do it, Intel. Don't make one, and everything will be as Intel says.
September 4, 2008 9:59:38 AM

It's a matter of perspective.

The graphics card can only render a frame once the CPU has finished computing the state of the world: player actions, NPC behavior (AI), physics, scripts and whatnot. So naturally, the more of these things you throw in there (i.e. real-time strategy games), the more CPU bound a game will be. Heck, I remember playing Civilization 3 on a 2GHz Athlon64: giant world map, 32 civs; towards the end it would take the AI about 30 minutes to complete a turn. Games like Civilization are purely CPU bound. No matter what Quad-SLI setup you throw in there, it won't get any faster.

But most games are also GPU bound, because the CPU can only compute the next frame once the GPU has finished rendering the last. And the more polygons there are and the more shader effects you apply, the longer the CPU has to wait.

Most modern graphics-intensive games are both CPU bound and GPU bound: you will see better FPS with a better CPU or with a better GPU. But in the end, they're more GPU bound.

You can couple a 4870 X2 with a 1.6GHz dual-core processor, which was maybe state of the art in 2005, and you will still get playable framerates at max resolution in most current games. But if you pair a top-of-the-line Skulltrail dual-QX9770 system with a graphics card from 2005, say a GeForce 7800 GTX, you probably won't be able to play at max resolution.

And that's why modern eye candy games are ultimately GPU bound.
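
Here's a toy version of that loop, fully serialized the way it's described above; the 4ms and 12ms stage costs are invented placeholders:

import time

def simulate_world():   # player input, AI, physics, scripts - the CPU stage
    time.sleep(0.004)   # pretend 4ms of CPU work

def render_frame():     # polygons, shaders, post-processing - the GPU stage
    time.sleep(0.012)   # pretend 12ms of GPU work

start = time.perf_counter()
frames = 30
for _ in range(frames):
    simulate_world()    # a faster CPU only shrinks this stage
    render_frame()      # a faster GPU only shrinks this stage
elapsed = time.perf_counter() - start
print(f"{frames / elapsed:.0f} FPS")  # ~62 FPS, dominated by the 12ms GPU stage

Real engines overlap the two stages, so in practice the slower stage dominates rather than the sum, which is exactly why the bound flips depending on which side you upgrade.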
September 4, 2008 5:57:43 PM

That's true to a point as well. Look at the common resolution in 2005: a 7800GTX tied to a QX9770 won't play any better than on a 1.6GHz C2D either. You have to approach this from both ends, since both have the potential of becoming the "bottleneck". Obviously the 7800GTX won't gain a thing from a QX9770. But a 4870x2 will not only gain from a QX9770; overclock it, and it'll gain even more. That, like I said, never happened before until recently. The 8800GTX showed just a little of this, but it's not even half the card the 4870x2 is.

Problem is, the best CPU today isn't even twice as fast as that CPU then. Nine months from now the gap will grow some more, and so on. The CPU just doesn't lead, or better, shouldn't be perceived as always ahead. Of course it HAS to stay ahead, but that doesn't mean it's fast enough to get as much out of the GPU as it can, and that's where we're heading. I'd add that the 1.86GHz CPU of yesterday wouldn't bring any better results by overclocking the then-top cards; that's been proven, unlike today's top cards, which gain even with the top CPUs. This gap will grow.

As far as eye candy games go, they require a larger use of the CPU as well, since many facets of rendering require both CPU and GPU function/ability. DX11 will help with this, but the demands will still be growing. DX11 will address these problems to the benefit of the GPU, actually lightening the GPU's load vs previous DX generations. Some of this ability exists in DX10.1 and has been shown to provide these load lighteners, if you will, by removing an entire cycle from the GPU with the same results for AA, for example, and there's more to come. As the DX library expands, the CPU is no less burdened, save for the potential DX11 multithreading ability, which holds promise; but as with any multithreading, any time it's been tried, it isn't perfect. Time will tell.