
Speculation: Can CPUs keep up with the new GPUs?

May 17, 2008 6:48:38 PM

Today we see that a current CPU clocked at 3 GHz can bring out the potential of current GPUs. The G280 is on the horizon, and it's supposed to be twice as fast as the previous gen. With more and more AI being added to future games a given, will future CPUs keep up with that potential? Or will games, for that matter?


May 17, 2008 7:00:37 PM

Of course you are going to get a bottleneck, but what can you do about it...

To get the best out of a 9800GX2 you will need a quad @ 4.0GHz+.

But an E2180 @ 3.20GHz will still use most of the new tech fully...


May 17, 2008 7:04:37 PM

But speculation says the new G280 will be twice as fast as the X2. If that's true, then what will we need if a game ever shows up that requires such a CPU/GPU? If it's true (2x the X2), we'll already have the GPU, and there's nothing showing on the CPU horizon.
May 17, 2008 8:21:24 PM

I'm not worried about it, because I only ever use middle-of-the-road technology, because I'm cheap.

It can only be good news for me if GPUs start severely outpacing CPUs. That way a middle-of-the-road GPU might give me the highest performance I can hope to achieve, because of CPU bottlenecking.
May 17, 2008 9:00:22 PM

JAYDEEJOHN said:
But speculation says the new G280 will be twice as fast as the X2. If that's true, then what will we need if a game ever shows up that requires such a CPU/GPU? If it's true (2x the X2), we'll already have the GPU, and there's nothing showing on the CPU horizon.


Well, logic says (to me anyway) that developers are finally going to have to start doing some work on coding games properly for multi-core CPUs. It's also worth noting that this card is meant to have some physics capability of its own that should take some of the load off the CPU, shouldn't it?
There has been talk of chips like Nehalem, which is meant to have limited GPU ability down the road; maybe we read it wrongly when that was mentioned. Is it possible that these will in fact be some sort of pre-graphics processors? Kind of like an advanced compiler or some such?
Your thoughts ladies and gents.
mactronix :) 
May 17, 2008 9:11:25 PM

Game developers must design and code for the largest possible audience if they are to make money. Therefore a game should be playable on modest cpu and vga power. Over time, games will trail the development of increasing power and features, not the other way around.
May 17, 2008 9:13:10 PM

As it stands now, the CPU is used for physics and AI work and the GPU for transformations, texturing, and lighting. The link between the two is the PCIe (2.0) bus, over which shader programs with the data and textures needed by them are sent to the GPU for each object to be rendered.

As CPUs increase in speed, at what point does it hurt performance to go through the process of sending programs and data to the GPU versus just doing the same work on the CPU? The new GPUs will be faster to process, once they get the shader program, data, and textures to use, but it is still up to the CPU to get all the info ready (perform animations, physics, AI, etc.).
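
For a rough picture of that per-frame handoff, here is a C++ sketch. The types and the submit callback are made up (stand-ins for whatever D3D/OpenGL and the driver actually do), but it shows what the CPU prepares per object and how little needs to cross the bus once the meshes and textures are already resident on the card:

```cpp
#include <cstdint>
#include <vector>

// Made-up types for illustration only -- this is not a real graphics API.
struct Matrix4 { float m[16]; };                       // row-major 4x4

Matrix4 multiply(const Matrix4& a, const Matrix4& b) {
    Matrix4 r{};                                       // zero-initialized
    for (int row = 0; row < 4; ++row)
        for (int col = 0; col < 4; ++col)
            for (int k = 0; k < 4; ++k)
                r.m[row * 4 + col] += a.m[row * 4 + k] * b.m[k * 4 + col];
    return r;
}

struct ObjectState {          // CPU-side state, updated by physics/AI/animation each frame
    Matrix4       world;
    std::uint32_t meshId;     // geometry already uploaded to video memory
    std::uint32_t materialId; // shader program + textures already on the card
};

struct DrawCall {             // the small packet that goes across PCIe per object
    Matrix4       worldViewProj;
    std::uint32_t meshId, materialId;
};

// One frame: the CPU does its work, then packages a draw call per visible object
// and hands the batch to the GPU ('submit' stands in for the driver).
void renderFrame(const std::vector<ObjectState>& objects,
                 const Matrix4& viewProj,
                 void (*submit)(const std::vector<DrawCall>&)) {
    std::vector<DrawCall> batch;
    batch.reserve(objects.size());
    for (const ObjectState& obj : objects) {
        DrawCall dc;
        dc.worldViewProj = multiply(viewProj, obj.world);   // CPU-side setup
        dc.meshId        = obj.meshId;
        dc.materialId    = obj.materialId;
        batch.push_back(dc);
    }
    submit(batch);            // from here on, the GPU does the transform/texture/light work
}

// Tiny stub so the sketch compiles and runs on its own.
static void fakeSubmit(const std::vector<DrawCall>&) {}

int main() {
    std::vector<ObjectState> scene(3);   // three dummy objects, all zero-initialized
    Matrix4 viewProj{};
    renderFrame(scene, viewProj, fakeSubmit);
    return 0;
}
```

The question above is really about where the loop in renderFrame spends its time: the per-object CPU work versus the cost of pushing those packets across the bus.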

Thankfully, we have companies already doing the work to determine how to best utilize the rendering pipeline. They are developing the current and future games that will use the current and new hardware. Their games can be used as benchmarks to tell us whether the new GPU cards will work well with current CPUs. THG will of course test the holy %&^# out of them to let us all know!
May 17, 2008 9:17:29 PM

geofelt said:
Game developers must design and code for the largest possible audience if they are to make money. Therefore a game should be playable on modest cpu and vga power. Over time, games will trail the development of increasing power and features, not the other way around.


Yes, very true, but there is still room for ultra performance within that remit. Crysis is one such game, and it comes down to what I was saying, namely coding. If a game is coded well, there is no reason why it can't be enjoyed on systems ranging from the outdated AGP rig my children inherited from me to a top-of-the-line quad-core GTX SLI system. You get a different experience, obviously, but that's only to be expected.
Mactronix :) 
May 17, 2008 9:41:05 PM

So with these new cards coming out, do you expect a huge bottleneck from the CPU, like a quad at 3.0GHz won't be enough? What's the difference, 2-4 fps more if I OC to 4.0GHz+, or much more?
May 17, 2008 9:54:23 PM

geofelt said:
Game developers must design and code for the largest possible audience if they are to make money. Therefore a game should be playable on modest cpu and vga power. Over time, games will trail the development of increasing power and features, not the other way around.


WRONG!! :ange: 

Games are the only reason most people have needed to upgrade their computers for the past 4 years. I am still using my P4 Northwood 3.0 with an ATI X800XT. This system played HL2, Doom 3, and Diablo 2 really well. I can't play the latest Splinter Cell on it, because that game requires a card with Shader Model 3.0 support. Oblivion and Flight Sim X also pushed past my system's limits. However, Tiger Woods PGA 2006 is the last game I bought, so I have not yet upgraded my computer, because I am not playing the new games. I also have not upgraded to Vista, but I am not sure if it would require me to upgrade anything.

Now we have Crysis, Bioshock, and a few others causing numerous discussions about what VGA card to get and whether we need the latest quad or dual core CPU. Again, if it weren't for these games, and people were just surfing the net and using office apps, any current CPU and a mobo with integrated graphics would be plenty for them.

So, I see games as the compelling force behind the hardware evolution, with Vista giving it a bit of a boost as well (he concludes while typing this note on his antiquated P4 3.0 system with ATI X800XT GPU).



May 17, 2008 10:07:49 PM

invisik said:
So with these new cards coming out, do you expect a huge bottleneck from the CPU, like a quad at 3.0GHz won't be enough? What's the difference, 2-4 fps more if I OC to 4.0GHz+, or much more?


Or how will a Core 2 Duo at 3.4GHz do? I'm really interested in getting a next-gen card. I really doubt my lack of a dual core and "only" 3.4GHz will make that much of a difference.

I'm sure I'd see an increase if I got more CPU power, but I doubt my current processor would really hold me back that much...
May 18, 2008 2:30:25 AM

No, not really. This is just speculation. Like in Tom's article, a current CPU at a speed of 2.6 is adequate for current GPUs. Also, like PuaDlH has pointed out, current CPUs need even more for SLI to be brought up to full capability. So my question (no panicking heheh) is: will CPUs keep up? And what impact will it have? Today we see the evolution of graphics cards doubling in one new arch. The next? If it's the same thing, then what? The pace of the GPU is outpacing the CPU. Will that matter down the road? Oh, and by the way, a few weeks after my post, several more were made, not by me but by others wanting newer drivers, which did come out several weeks after that, almost 5 months after the last driver.
May 18, 2008 5:46:53 AM

Here's some food for thought. Currently we're seeing CPUs getting wider (more cores) and not a lot faster. GPUs, on the other hand, are parallel chips, meaning the bigger they get, the more they can do. Right now we're at 55nm. There are still several die shrinks to be had before redundancy, so there's a lot of room for growth where GPUs are concerned. However, CPUs don't get the same boost from die shrinks that GPUs do: more transistors don't make them faster, and they're generally not as efficient as GPUs in terms of performance per transistor. Like I've said, I believe GPUs will see several more doublings in performance before that runs dry, but unless or until there's multithreading, CPUs will appear less and less able to keep up with GPUs.
May 18, 2008 8:11:53 PM

area61 said:
jaydee.......i have your answer
http://en.expreview.com/2008/05/12/intel-upgrade-gpu-ha...

I'm not sure what to make of this. Imagine if, in the future, everything done on the desktop were 3D. It will happen. And since this is the graphics forum, I would think this has a more significant impact here than Intel's small regard for "gaming" in that link. Does Intel not care about gaming? Does Intel not care about graphics? Intel is stalling. They don't have Larrabee out, and they haven't quite spent their billions of dollars on something so frivolous as a GPU yet. This is such a joke; Intel acts like they don't need GPUs, yet the future points directly down that path, otherwise you wouldn't be seeing Intel going into the business and spending the billion or whatever amount doing so. Are people really gullible enough to believe this hypocrisy?

The GPU hasn't been the bread and butter of the x86 environment since the beginning like the CPU has. The CPU has had everything thrown at it, tested, perfected, tried and true. Now it's the GPU's turn. There are a lot of programs/apps that the GPU can and should handle that the much slower CPU has handled in the past. Give the GPU 25 years of perfecting these things and see what it can accomplish. We are in the infancy of what GPUs can be used for. In some of those areas the GPU is much, much faster, and it should be given those jobs. If current CPUs don't get faster, and GPUs continue to, you'll see more and more attempts at running things through the GPU instead of the CPU. I don't believe this Intel rhetoric. I don't believe the Intel sham. They spend a billion while they say GPUs aren't needed. Yeah, right.
May 18, 2008 9:53:54 PM

I think much software still needs to be optimized for more than 2 cores (even though I think dual-core support could be better as well).
May 18, 2008 11:54:43 PM

If you look at the history of GPUs, the older 1xxx series and the 7xxx series don't show any gains from today's higher-clocked CPUs, even with OCing. But the newer GPUs showed that there's a lot more room for growth with a faster CPU. Now we have a new gen coming out, said to be 30 to 40% faster than 2x 9800GTX. In the gen of GPUs after that, we will likely see near a doubling once again. We will see how today's CPUs fare with the newer GPUs, but I feel that unless we see multithreading before that gen of GPUs arrives, we will see a huge loss of potential performance from our GPUs for lack of a better, faster CPU.
May 19, 2008 2:24:05 AM

I can't agree more with Jay. I did read somewhere that to wash away the bottleneck of a 9800GX2 X2 (quad SLI), we supposedly need a C2D at 5.9 GHz. Where are we going? I can clearly see Intel has hit a wall at 4 gigs; pushing much further with ease requires a whole new arch, just like what happened to NetBurst.
May 19, 2008 5:01:56 AM

Actually it depends on how a particular game works, when it comes to how much work the CPU and GPU must do. You can look at current games to see this.

The GPU determines the resolution you can run at and the level of antialiasing (AA) that is used, because it performs the functions of coloring each pixel and performing AA. It also determines the level of shadow detail and special effects that can be done.

The CPU is where physics, animation, and AI calculations are performed. It is here that the game determines what geometry is to be drawn, where it is to be drawn, and, for animated geometry, where each bone (for skeletal animation) is to be drawn. It is also where resource management of your objects and their textures takes place (which is where the amount of memory you have comes into play).

The CPU can get bottlenecked waiting for the GPU to finish its last rendering request. The GPU, in turn, can go underused while waiting for the CPU to prepare the next frame.
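
A toy frame loop makes the point; everything in it is a stand-in (no real driver calls), just to show where each side can end up waiting on the other:

```cpp
#include <chrono>
#include <thread>

// Stand-ins for real engine/driver work -- stubs so the sketch compiles on its own.
void runPhysicsAiAnimation(double /*dt*/) { /* CPU: physics, AI, skeletal animation */ }
void buildDrawCalls()                     { /* CPU: cull, set world matrices, queue draws */ }
bool gpuFinishedLastFrame()               { return true; /* pretend the GPU is always done */ }
void submitFrameToGpu()                   { /* hand the queued draw calls to the driver */ }

int main() {
    using clock = std::chrono::steady_clock;
    auto last = clock::now();

    for (int frame = 0; frame < 100; ++frame) {
        auto now = clock::now();
        double dt = std::chrono::duration<double>(now - last).count();
        last = now;

        runPhysicsAiAnimation(dt);          // if this runs long, the GPU sits idle: CPU-bound
        buildDrawCalls();

        while (!gpuFinishedLastFrame())     // if this spins, the CPU is the one waiting:
            std::this_thread::yield();      // GPU-bound, and a faster CPU buys you little

        submitFrameToGpu();
    }
    return 0;
}
```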

In current games, most of the settings available to you are based on what the GPU can do, and let you adjust resolution, shadow detail, AA, and the amount of special effects done (like grass waving in the wind and particle effects).

Crysis is one of the few games that seems to be maxing out both. It will make a good benchmarking tool for seeing how the new GPUs do on current and new CPUs.
May 19, 2008 5:19:35 AM

And so the bar gets raised. My question is, what's next? A higher requirement, that's a given, on all parts of our systems. Intel claims it wants the CPU to do even more; at this point, I'm just not so sure it can be done. Certainly not without multithreading.
May 19, 2008 6:44:49 AM

If you ask me, multithreading is not going to solve the problem. Yes, it's good for complex calculations, but it needs to do them fast enough; having 80 cores at 2.4GHz is simply a waste of wafers, and selling them singly would be the sane decision. Exactly how many games take advantage of four, let alone two, cores? IMO Intel should have released a single-core version of the C2D. The effect could have been seen from there, and if the gains were very small, it would just mean we could get a single core without paying a premium for two.
May 19, 2008 6:52:40 AM

You have a point. Imagine a single-core C2D, OCing who knows how high? Exclusive for gaming, even. And for lower-needs systems too, but binned lower of course.
May 19, 2008 6:58:46 AM

Since games are GPU-limited for the most part, I don't think you have anything to worry about.

A current budget CPU like the 2160, without AA and at low resolutions or low settings, still pumps out over 100 fps.

Sure, you can get higher frame rates with a faster CPU and a monster video card, but you won't have frame-rate issues playing a game.
May 19, 2008 7:07:31 AM

Slapping two cores together makes double the heat production, and that heat limits how high the core speed can actually soar; that's what makes a single core the better OCer. A similarly clocked single-core C2D would certainly clock higher than a dual-core C2D, just like with NetBurst: the P4 at 3.0GHz is no match for a Pentium D at 3.0GHz in terms of raw, brute, mind-boggling, insane teramatic performance. Yum. I want one.
May 19, 2008 7:13:40 AM

We will know whether CPUs are a bottleneck soon, I have a feeling. Crysis will be playable soon, and then we will see if the CPU is becoming a bottleneck. I agree marv, you're right, for now. I'm talking about the next arches to come out, which may double the output of today's current GPUs and which should arrive 18 months from now. It could become a problem if multithreading isn't done; 18 months from now, games should be more demanding, etc. Anywho, that's my take.
May 19, 2008 11:25:35 AM

If the game developers code their games to take advantage of multiple cores, then you'll have your answer. Multicore is here to stay; whether it's speed or more brains will be in the hands of the developers.
May 19, 2008 3:59:05 PM

area61 said:
jaydee.......i have your answer
http://en.expreview.com/2008/05/12/intel-upgrade-gpu-ha...


This just came in, confirming what I was suspecting all along. Maybe a really big can of whoop-ass is going to be delivered, and I'm not talking IGPs here. This shows Skulltrail being beaten by an 8600GT. That's a $4000 vs. $100 comparison.

Maybe Nvidia is going for the server market here. I can see where this may apply: structural calculation (civil engineering) for one, heavy rendering by CAD-type software. The possibilities are endless. Because the truth is this: unless we are gaming, upgrading the GPU won't do much good. Of course, we gamers already know that our GPUs are much more powerful than our CPUs. This CUDA thingy is just taking the bar a few notches up. A freaking good few notches up.

After seeing this, I'm seeing a Barcy with Nvidias in SLI replacing several branches of the server/workstation market. Intel already said it won't use SLI with Larrabee.

Maybe a good can of whoop-ass will be delivered. It will be fun to see. I'll be delighted.
May 19, 2008 4:36:48 PM

If we stop and think, since day one the CPU has gotten all the attention from devs concerning desktop, apps, etc. on x86. The server market is always the beginning of things to come on the desktop. Give the GPU 25 years' worth of growth using CUDA-type software, and who knows how it'll all turn out? Even running at 20% capability, a GPU is still faster than CPUs at many things. I know there are limitations, but who really knows the true end of those limits? If we take what is known now, sure, the CPU looks great, but I think the GPU will have its day. Intel knows this, and so does AMD/ATI. Let Intel slam GPUs all they want while they invest their billions in them, heheh. To me, who looks the fool? Cause I'm not being fooled.
May 19, 2008 4:55:11 PM

The conventional split between CPU and GPU will persist. Larrabee was supposed to end this conventional method, but it's not going to; perhaps in the discrete segment, but not in the midrange and mainstream. And even in the discrete segment, it will be easily pwned by AMD. After all, AMD's 780G can already handle DX10 in Vista with Aero as well. My guess is that by the time Larrabee rolls out, AMD's solution will have grown leaps and bounds. Wise men say: plant your seeds now and wait, and you will be rewarded.
May 19, 2008 5:12:22 PM

JAYDEEJOHN said:
If we stop and think, since day one the CPU has gotten all the attention from devs concerning desktop, apps, etc. on x86. The server market is always the beginning of things to come on the desktop. Give the GPU 25 years' worth of growth using CUDA-type software, and who knows how it'll all turn out? Even running at 20% capability, a GPU is still faster than CPUs at many things. I know there are limitations, but who really knows the true end of those limits? If we take what is known now, sure, the CPU looks great, but I think the GPU will have its day. Intel knows this, and so does AMD/ATI. Let Intel slam GPUs all they want while they invest their billions in them, heheh. To me, who looks the fool? Cause I'm not being fooled.


Today I'm having a really slow day at work (and I mean freaking slow). I have enough time atm, so me and some of my mates took a peek at CUDA and ATI Stream Computing.

As I said, it's been a reeaaallly slow day here at work. So we are having fun reading C++ code, dependencies, libraries, well, anything to keep the mind occupied. Check for yourselves if you have enough time:

ATI/AMD:
http://ati.amd.com/technology/streamcomputing/sdkdwnld....

Nvidia:
http://www.nvidia.com/object/cuda_get.html

The Nvidia one seems a bit more structured and has less of a beta feel. The ATI one already has the basis to take off. From what I've read and tried so far, both are very promising. They hook straight into the kernel and send instructions directly to the GPU.
Honestly, I think either Intel pulls a full house, or it will lose loads of market on this one.

I'm not a professional coder, I just know my bits and tricks. This seems very powerful. If Adobe (for example) adopts CUDA or ATI Stream (or both), we will see the big workstations with **** X2 or Core2 CPUs, and big GPUs doing the work.
We didn't test it here at work, but mate, I'll try to run a few tests when I arrive home. Too bad my X800XT isn't included. Grrr. I'll try anyway. Bof, a prime-number auto-generator in C++ will do the trick. I'll match my X800XT against my 4800 X2. Let's see who bytes the dust.

I'll try to post the results later on. If anybody knows a bit of C++ (all the libraries point there), put an Intel Core2 up against an Nvidia 8xxx. Use prime number generator code; it's easy to write, and it's basically floating-point calculations when it starts hitting the big numbers.
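
Something along these lines is what I have in mind for the CPU baseline, just a throwaway single-threaded sketch (the GPU side would run the same test one candidate per thread through CUDA or the Stream SDK, which I haven't touched yet, so no promises on the exact numbers):

```cpp
// primes.cpp -- crude CPU baseline for the prime-counting comparison.
// Build: g++ -O2 primes.cpp -o primes
#include <chrono>
#include <cstdint>
#include <iostream>

static bool isPrime(std::uint64_t n) {
    if (n < 2)      return false;
    if (n % 2 == 0) return n == 2;
    for (std::uint64_t d = 3; d * d <= n; d += 2)   // simple trial division
        if (n % d == 0) return false;
    return true;
}

int main() {
    const std::uint64_t limit = 2000000;            // raise this until the run takes a few seconds
    const auto t0 = std::chrono::steady_clock::now();

    std::uint64_t count = 0;
    for (std::uint64_t n = 2; n < limit; ++n)       // a GPU port would test these in parallel,
        if (isPrime(n)) ++count;                    // one candidate per thread

    const auto t1 = std::chrono::steady_clock::now();
    std::cout << count << " primes below " << limit << " in "
              << std::chrono::duration<double>(t1 - t0).count() << " s\n";
    return 0;
}
```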


Edit: Typ0s everywhere and massive hype from an extremely bored worker atm.
May 19, 2008 5:47:01 PM

Here's another interesting read, going contrary to many posts found here on the forums: http://www.guru3d.com/article/cpu-scaling-in-games-with... If your rig falls within the guidelines shown in that link, or in Tom's article, then the CPU isn't where the gains are; it's the GPU that matters most for your gaming. The conclusion at Guru3D is the same as Tom's: if your CPU is running at 2.6, going higher can't help any further. That's with today's cards. The G280 is going to change that, and soon.
May 19, 2008 7:55:44 PM

The next big jump is for ray tracing to replace vertex and pixel shaders. It cannot be done on the GPU, but will require major advancements in CPU power.
May 19, 2008 8:26:23 PM

If the GPU outpaces the CPU, then AMD and Intel are in for another great war.
Will they be?
May 20, 2008 12:57:54 AM

If AMD can survive, then yes. This situation actually gives AMD more life, as they have a huge lead in their graphics division. Also, even though Intel has had the money, AMD has actually been working on this longer, having had ATI around, so that helps a lot.
May 20, 2008 9:24:29 AM

DXRick said:
The next big jump is for ray tracing to replace vertex and pixel shaders. It cannot be done on the GPU, but will require major advancements in CPU power.


It doesn't? The way Intel is trying to do it is by muscling their way through with a many-core architecture. We aren't talking about an optimized solution, we are talking about pure muscle (or rather loads of cores, which, btw, are quite poor on their own).

Note: do you really think they can make a CPU with 80 cores flawlessly? Hell no. They will ship several versions (40, 50, 70, 80 cores) with disabled (read: damaged) cores. And even so, it will be hard to produce that many cores flawlessly. I think it's nothing short of a utopia.

When NetBurst came out, they were talking about a 10 GHz CPU. NetBurst proved to be a bad architecture.
They say Nehalem's architecture will scale to as many as 80 cores. I think history repeats itself.
First off, I have yet to see a CPU doing a decent job on the graphics side. The links I provided before (from ATI and Nvidia) show that it is easy to port most apps so that, instead of flooding the CPU, they use the GPU for those tasks.

I don't consider x86 and x64 dead yet, because honestly, there is already too much software coded for them. But I think CUDA and ATI Stream may lead to a breakthrough in software development that will happen first at a higher level (workstations/servers). They will adopt it first due to the sheer performance leap.

ATI/AMD will survive because it has a freaking platform (CPU/GPU/NB/SB).
Nvidia will survive because you just need to slap in a cheap x86 or x64 CPU to drive the monster GPUs.

The question now is not replacing vertex or pixel shaders for gaming, mate. After spending a good part of the afternoon reading the CUDA manual and reference papers, I believe a really big can of whoop-ass is already on its way.
It's headed toward Intel, from the GPU makers. We are talking about relieving the CPU of some of the functions it is doing atm.

So CPU performance will be felt even less.
May 22, 2008 5:32:09 AM

I googled CUDA and read about it on Wikipedia. It is hard to see from that how it differs from HLSL or extends the graphics pipeline. To do physics, I believe that the GPU would have to be able to perform trig and calculus functions, which are not available today.

GPUs are optimized for linear algebra (vector and matrix calculations). The application sends the shader program to the GPU and then streams the geometry of an object to it. The vertex shader operates on the vertices to convert them to homogeneous clip space. The pixel shader determines the color of each pixel.
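
In rough C++ terms (a sketch of the concept, not actual HLSL or any real driver interface), the vertex stage boils down to running something like this once per incoming vertex:

```cpp
#include <iostream>

// What a vertex shader conceptually does to each vertex it is handed: multiply the
// position by the combined world-view-projection matrix so it lands in homogeneous
// clip space. The GPU runs this tiny routine for huge numbers of vertices in
// parallel, one at a time, with no view of the whole object.
struct Vec4    { float x, y, z, w; };
struct Matrix4 { float m[16]; };   // row-major 4x4

Vec4 toClipSpace(const Vec4& p, const Matrix4& wvp) {
    const float* m = wvp.m;
    return {
        m[0]  * p.x + m[1]  * p.y + m[2]  * p.z + m[3]  * p.w,
        m[4]  * p.x + m[5]  * p.y + m[6]  * p.z + m[7]  * p.w,
        m[8]  * p.x + m[9]  * p.y + m[10] * p.z + m[11] * p.w,
        m[12] * p.x + m[13] * p.y + m[14] * p.z + m[15] * p.w
    };
}

int main() {
    const Matrix4 identity = {{1,0,0,0,  0,1,0,0,  0,0,1,0,  0,0,0,1}};
    const Vec4 v = toClipSpace({1.0f, 2.0f, 3.0f, 1.0f}, identity);
    std::cout << v.x << " " << v.y << " " << v.z << " " << v.w << "\n";
    return 0;
}
```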

In other words, the GPU programs operate on one vertex or pixel at a time. They do not have the entire geometry of one object, nor that of many objects, nor a bounding sphere, or anything else that would enable them to perform inter-object calculations, such as those to perform collision detection.

It is up to the application to perform the various physics, animation, and AI operations that set the world matrix for each object (or each bone for animated objects) prior to sending the matrix (or matrices) to the shader program and then streaming the geometry to the graphics card.

What they are planning to do in the future is not very clear, but I think they are trying to do away with the separation of duties that currently exists between the CPU and GPU. Having to move textures and data from the computer's memory to the graphics card's memory is costly. A more integrated solution, where the CPU and GPU share main memory, would make sense to me. Maybe this is what Intel and AMD are working towards?
May 22, 2008 6:19:48 AM

JAYDEEJOHN said:
So my question (no panicking heheh) is: will CPUs keep up? And what impact will it have? Today we see the evolution of graphics cards doubling in one new arch. The next? If it's the same thing, then what? The pace of the GPU is outpacing the CPU. Will that matter down the road?


IMHO, this will begin to be addressed with Swift. GPU cores on the CPU might be aimed (initially) at the notebook and entry level market, but eventually one or two GPU cores will be matched to 3 or more CPU cores to get a balance.

Hopefully, there will be some leeway to add discrete GPUs such that we'd have CrossFireX or triple SLI added to the mix. If GPUs can process some of the physics down the line, then that balances out the CPU limitation issue.

Of course, I could be wrong. Most people tend to go for monster Nvidia GPUs without a care as to how drivers "optimize" (i.e. Crysis' demo water snafu) to get a few extra fps in popular FPSes that give people around 10-20 hours of gameplay. That market seems to want to have to overclock the CPU to match the monster GPUs they buy every 6 months or so.

Still, for a mainstream gamer like myself, buying a future AMD CPU with matching GPU cores is a nice idea. If it can run LOTR Online at least as well as my current 3870X2, then it will suit me fine. Single-player CRPGs are getting a bit too dark (first Oblivion, then The Witcher), and neither Crysis nor Gears of War 2 appeals to me.

So, perhaps this is the last high-end GPU I'll buy to try to keep up with current games. Especially if the midrange delivers solid performance. Even more so if that midrange GPU is integrated into a 6-8 core CPU.

