Speculation: Can CPUs keep up with the new GPUs?

Today we see that a current CPU clocked at 3 GHz can bring out the potential of current GPUs. The G280 is on the horizon, and it's supposed to be twice as fast as the older generation. With more and more AI being added in future games a given, will future CPUs be able to keep up with that potential? Or will games, for that matter?
 

dos1986

Distinguished
Mar 21, 2006
542
0
18,980
Of course you are going to get a bottleneck, but what can you do about it...

To get the best out of a 9800GX2 you will need a quad @ 4.0GHz+

But an E2180 @ 3.20GHz will still use most of the new tech fully...


 
But speculation says the new G280 will be twice as fast as the X2. If that's true, then what will we need if a game ever shows up to require such a CPU/GPU? If it's true (2x the X2), we will already have the GPU, and there's nothing showing on the CPU horizon.
 

rfatcheric

Distinguished
Apr 11, 2008
127
0
18,690
I'm not worried about it, because I only ever use middle-of-the-road technology, because I'm cheap.

It can only be good news for me if GPUs start severely outpacing CPUs. That way a middle-of-the-road GPU might give me the highest performance I can hope to achieve, because of CPU bottlenecking.
 


Well, logic says (to me anyway) that developers are finally going to have to start doing some work on coding games properly for multi-core CPUs. It's also worth noting that this card is meant to have some physics capabilities of its own that should take some of the load off the CPU, shouldn't it?
There has been talk of chips like Nehalem, which is meant to have limited GPU ability down the road. Maybe we read into it wrongly when that was mentioned. Is it possible that these will in fact be some sort of pre-graphics processors? Kind of like an advanced compiler or some such?
Your thoughts, ladies and gents.
mactronix :)
 
Game developers must design and code for the largest possible audience if they are to make money. Therefore a game should be playable on modest CPU and VGA power. Over time, games will trail the development of increasing power and features, not the other way around.
 

DXRick

Distinguished
Jun 9, 2006
1,320
0
19,360
As it stands now, the CPU is used for physics and AI work and the GPU for transformations, texturing, and lighting. The link between the two is the PCIe (2.0) bus, over which shader programs with the data and textures needed by them are sent to the GPU for each object to be rendered.

As CPUs increase in speed, at what point does it hurt performance to go through the process of sending programs and data to the GPU versus just doing the same work on the CPU? The new GPUs will be faster at processing once they receive the shader program, data, and textures, but it is still up to the CPU to get all the info ready (perform animation, physics, AI, etc.).

Thankfully, we have companies already doing the work to determine how to best utilize the rendering pipeline. They are developing the current and future games that will use the current and new hardware. Their games can be used as benchmarks to tell us whether the new GPU cards will work well with the current CPUs. THG will of course test the holy %&^# out of them to let us all know!
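The trade-off raised above — pay the PCIe transfer cost to offload work, or just do it on the CPU — can be sketched as a toy break-even model. All the numbers below are invented for illustration, not measured from any real bus or card:

```python
# Hypothetical, made-up figures -- purely to show the shape of the trade-off.
PCIE_TRANSFER_S = 0.002      # fixed cost to ship a batch of programs/data over PCIe
CPU_RATE = 1.0               # work units the CPU finishes per second (baseline)
GPU_RATE = 20.0              # assume the GPU chews through the same work 20x faster

def cpu_time(work):
    """Time if the CPU just does the work itself (no transfer)."""
    return work / CPU_RATE

def gpu_time(work):
    """Time if we pay the bus transfer first, then let the GPU run."""
    return PCIE_TRANSFER_S + work / GPU_RATE

def breakeven_work():
    """Workload size where offloading starts to win:
       solve w/CPU_RATE == PCIE_TRANSFER_S + w/GPU_RATE for w."""
    return PCIE_TRANSFER_S / (1.0 / CPU_RATE - 1.0 / GPU_RATE)

w = breakeven_work()
print(f"break-even workload: {w:.5f} units")
print(f"below it, CPU wins: cpu={cpu_time(w / 2):.5f}s vs gpu={gpu_time(w / 2):.5f}s")
print(f"above it, GPU wins: cpu={cpu_time(w * 2):.5f}s vs gpu={gpu_time(w * 2):.5f}s")
```

Small batches lose to the fixed transfer cost; big batches amortize it. That is the whole argument for batching draw calls instead of trickling tiny jobs across the bus.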
 


Yes, very true, but there is still room for ultra performance within that remit. Crysis is one such game, and it comes down to what I was saying, namely coding. If a game is coded well, there is no reason why it can't be enjoyed on systems ranging from the outdated AGP rig my children inherited from me to a top-of-the-line quad-core GTX SLI system. You get a different experience, obviously, but that's only to be expected.
Mactronix :)
 

invisik

Distinguished
Mar 27, 2008
2,476
0
19,810
So with these new cards coming out, do you expect a huge bottleneck from the CPU? Like, a quad at 3.0GHz won't be enough? What's the difference: 2-4 fps more if I OC to 4.0GHz+, or much more?
 

DXRick

Distinguished
Jun 9, 2006
1,320
0
19,360


WRONG!! :ange:

Games are the only reason most people have needed to upgrade their computers for the past 4 years. I am still using my P4 Northwood 3.0 with an ATI X800XT. This system played HL2, Doom 3, and Diablo 2 really well. I can't play the latest Splinter Cell on it, because that game requires a card with shader 3.0 support. Oblivion and FlightSim X also pushed past my system's limits. However, Tiger Woods PGA 2006 is the last game I bought. So, I have not yet upgraded my computer, because I am not playing the new games. I also have not upgraded to Vista, but I am not sure if it would require me to upgrade anything.

Now we have Crysis, Bioshock, and a few others causing numerous discussions about what VGA card to get and whether we need the latest quad or dual core CPU. Again, if it weren't for these games, and people were just surfing the net and using office apps, any current CPU and a mobo with integrated graphics would be plenty for them.

So, I see games as the compelling force behind the hardware evolution, with Vista giving it a bit of a boost as well (he concludes while typing this note on his antiquated P4 3.0 system with ATI X800XT GPU).



 

yadge

Distinguished
Mar 26, 2007
443
0
18,790


Or how will a Core 2 Duo at 3.4GHz do? I'm really interested in getting a next-gen card. I really doubt my lack of a dual core and "only" 3.4GHz will make that much of a difference.

I'm sure I'd see an increase if I got more CPU power, but I doubt my current processor would really hold me back that much...
 
No, not really. This is for speculation. Like in Tom's article, a current CPU at 2.6GHz is adequate for the current GPUs. Also, as PuaDlH has pointed out, current CPUs need even more for SLI to be brought up to full capability. So my question (no panicking, heheh) is: will CPUs keep up? And what impact will it have? Today we see the evolution of graphics cards doubling in one new architecture. The next? If it's the same thing, then what? The pace of the GPU is outpacing the CPU. Will that matter down the road? Oh, and by the way, a few weeks after my post, several more were made, not by me, but by others wanting newer drivers. Those did come out several weeks after that, almost 5 months after the last driver.
 
Here's some food for thought. Currently we're seeing CPUs getting wider (more cores) and not a lot faster. GPUs, on the other hand, are parallel chips, meaning the bigger one gets, the more it can do. Right now we're at 55nm. There are still several die shrinks to be had before we hit the wall, so there's a lot of room for growth for GPUs. However, CPUs don't get the same boost from die shrinks that GPUs do. More transistors don't make them faster, and they are generally not as efficient as GPUs in terms of performance per transistor. Like I've said, I believe GPUs will see several more doublings in performance before they run out of room, but unless or until there's real multi-threading, CPUs will appear less and less able to keep up with GPUs.
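The "GPU outpacing the CPU" worry can be put into numbers with a crude model: if the CPU and GPU work on successive frames in parallel, the slower side sets the frame time. The millisecond figures below are illustrative guesses, not benchmarks:

```python
def frame_rate(cpu_ms, gpu_ms):
    """Crude pipelined model: whichever side takes longer per frame
    determines the overall frame time, hence the fps."""
    return 1000.0 / max(cpu_ms, gpu_ms)

cpu_ms = 10.0   # fixed CPU cost per frame (physics, AI, draw-call setup)
gpu_ms = 20.0   # GPU render time per frame; assume it halves each generation
for gen in range(4):
    fps = frame_rate(cpu_ms, gpu_ms)
    print(f"gen {gen}: gpu {gpu_ms:5.2f} ms/frame -> {fps:6.1f} fps")
    gpu_ms /= 2.0
```

Once the GPU's per-frame time drops below the CPU's, every further GPU doubling adds exactly nothing: the frame rate plateaus at the CPU's ceiling. That plateau is the "loss of potential performance" this thread keeps circling around.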
 

I'm not sure what to make of this. Imagine if, in the future, everything done on the desktop will be 3D. It will happen. And since this is the graphics forum, I would think this has a more significant impact here than Intel's small regard for "gaming" in the link. Does Intel not care about gaming? Does Intel not care about graphics? Intel is stalling. They don't have Larrabee out; they haven't quite spent their billions of dollars on something so frivolous as a GPU yet.

This is such a joke. Intel acts like they don't need GPUs, yet the future points directly down that path; otherwise you wouldn't be seeing Intel going into the business and spending the billion or whatever amount doing so. Are people that gullible as to believe this hypocrisy? The GPU hasn't been the bread and butter of the x86 environment since the beginning like the CPU has. The CPU has had everything thrown at it: tested, perfected, tried and true. Now it's the GPU's turn.

There are a lot of programs/apps the GPU can and should handle that the much slower CPU has handled in the past. Give the GPU 25 years of perfecting these things, and see what it can accomplish. We are in an infancy regarding the capabilities of GPU usage. The GPU in some of those areas is much, much faster, and should be allocated those tasks. If current CPUs don't get faster, and GPUs continue to do so, you'll see more and more attempts at running things through the GPU instead of the CPU. I don't believe in this Intel rhetoric. I don't believe in the Intel sham. They spend a billion while they say GPUs aren't needed. Yeah, right.
 

Security

Distinguished
May 14, 2008
21
0
18,510
I think much software still needs to be optimized for more than 2 cores (even though I think dual-core support could be better as well).
 
If you look at the history of GPUs, the older 1xxx series and the 7xxx series don't show any progress from today's higher-clocked CPUs, even with OCing. But the newer GPUs showed that there's a lot more room for growth with a faster CPU. Now we have a new gen coming out, said to be 30 to 40% faster than 2x 9800GTX. In the gen of GPUs after that, we will likely see near a doubling once again. We will see how today's CPUs fare with the newer GPUs, but I feel that unless we see multithreading before that generation, we will see a huge loss of potential performance from our GPUs for lack of a better, faster CPU.
 

area61

Distinguished
Jan 8, 2008
280
0
18,790
I can't agree enough with jay. I did read somewhere that to wash away the bottleneck of the 9800GX2 X2 (quad SLI), we supposedly need a C2D at 5.9GHz. Where are we going? I can clearly see Intel has hit the wall at 4GHz. Pushing any further with ease requires a whole new architecture, just like what happened to NetBurst.
 

DXRick

Distinguished
Jun 9, 2006
1,320
0
19,360
Actually it depends on how a particular game works, when it comes to how much work the CPU and GPU must do. You can look at current games to see this.

The GPU determines the resolution you can run at and the level of antialiasing (AA) that is used, because it performs the functions of coloring each pixel and performing AA. It also determines the level of shadow detail and special effects that can be done.

The CPU is where physics, animation, and AI calculations are performed. It is here that the game determines what geometry is to be drawn, where it is to be drawn, and for animated geometry where each bone (for skeletal animation) is to be drawn. It is also where resource management of your objects and their textures take place (where the amount of memory you have comes into play).

The CPU can get bottlenecked waiting for the GPU to finish its last rendering request. The GPU could conversely sit underused waiting for the CPU to prepare the next frame.

In current games, most of the settings available to you are based on what the GPU can do, and let you adjust resolution, shadow detail, AA, and the amount of special effects done (like grass waving in the wind and particle effects).

Crysis is one of the few games that seems to be maxing out both. It will make a good benchmarking tool for seeing how the new GPUs do on current and new CPUs.
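A practical corollary of the CPU/GPU division of labor described above: resolution and AA load the GPU almost exclusively, so a common rule of thumb is to vary resolution and watch the frame rate. A sketch of that diagnostic (the threshold and sample numbers are my own illustrative choices, not from any benchmark):

```python
def diagnose(fps_low_res, fps_high_res, tolerance=0.10):
    """Rule of thumb: raising resolution piles work onto the GPU only.
    If fps barely moves, the GPU wasn't the limiting factor -- the CPU
    (physics, AI, draw-call setup) or something else is the bottleneck."""
    if fps_high_res >= fps_low_res * (1.0 - tolerance):
        return "CPU-bound (or other non-GPU limit)"
    return "GPU-bound"

# Hypothetical readings from the same scene at two resolutions:
print(diagnose(120.0, 118.0))  # fps flat across resolutions
print(diagnose(120.0, 60.0))   # fps collapses at high resolution
```

This is exactly how a benchmark like Crysis separates the two sides: if a new GPU lifts the high-resolution number but the low-resolution number stays pinned, the current CPU is the ceiling.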
 
And so the bar gets raised. My question is: what's next? A higher requirement, that's a given, on all points of our systems. Intel claims it wants the CPU to do even more; at this point, I'm just not so sure it can be done. Certainly not without multithreading.
 

area61

Distinguished
Jan 8, 2008
280
0
18,790
If you ask me, multi-threading is not going to solve the problem. Yes, it's good for complex calculations, but it needs to do them fast enough. Having 80 cores at only 2.4GHz is simply a waste of wafers; selling them singly would be the sane decision. Exactly how many games are taking advantage of four, let alone two, cores? IMO, Intel should have released a single-core version of the C2D. The effect could have been seen from there, and if there are very few measurable differences, it just means we could actually get a single core without paying a premium for two cores.
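The "80 cores at 2.4GHz is a waste of wafers" point is essentially Amdahl's law: if only a fraction of a game's per-frame work can run in parallel, piling on cores buys very little. A quick sanity check (the parallel fractions below are illustrative guesses, not measurements of any real engine):

```python
def amdahl_speedup(parallel_fraction, cores):
    """Amdahl's law: the serial part of the work caps total speedup,
    no matter how many cores handle the parallel part."""
    serial = 1.0 - parallel_fraction
    return 1.0 / (serial + parallel_fraction / cores)

# Suppose a game engine parallelizes only ~50% of its frame work;
# even 80 cores then land under 2x. A 95%-parallel workload fares far better.
for p in (0.50, 0.95):
    for n in (2, 4, 80):
        print(f"parallel={p:.2f} cores={n:3d} -> {amdahl_speedup(p, n):5.2f}x")
```

This cuts both ways in the thread's debate: it explains why "wider, not faster" CPUs feel useless for 2008-era games, and also why GPUs (whose workload is almost entirely parallel pixels and vertices) keep scaling with every die shrink.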
 

marvelous211

Distinguished
Aug 15, 2006
1,153
0
19,280
Since games are GPU-limited for the most part, I don't think you have anything to worry about.

A current budget CPU like the E2160, without AA and at low resolutions or low settings, still pumps out over 100fps.

Sure, you can pump out more frames if you have a faster CPU with a monster video card, but you won't have frame-rate issues playing a game.
 

area61

Distinguished
Jan 8, 2008
280
0
18,790
Slapping two cores into one package makes for double the heat production. In that case, the heat limits how high the core speed can actually soar, and that makes the single core the better OCer. A similarly clocked single-core C2D would certainly be faster than a similarly clocked dual-core C2D, just like with NetBurst: the P4 3.0GHz is no match for a Pentium D at 3.0GHz in terms of raw, brute, mind-boggling, insane teramatic performance. Yum. I want one.