
Multicore gpus- is it possible?

Last response: in Graphics & Displays
May 3, 2006 6:14:58 PM

Will it be possible for graphics manufacturers to make multicore GPUs?

I know it is a stupid question, but may I please get an answer?

Thanks a lot


May 3, 2006 6:49:25 PM

Quote:
Will it be possible for graphics manufacturers to make multicore GPUs?


Sure, it's 'possible', but why bother? Cards have gotten far more parallel than CPUs (think of it as 16/24 different processors working in parallel instead of pipelines, if that helps), and they've also become unbalanced in the way those processors work.

You could do it, but why bother, when in a single core the various parts communicate far better and make better use of components that only need one unit to accomplish a task (like having 48 shader processors but only a few ROPs)?

As the designs progress toward a unified architecture, you'll find separate cores to be counterproductive.
May 3, 2006 7:28:00 PM

Quote:
Will it be possible for graphics manufacturers to make multicore GPUs?


Sure, it's 'possible', but why bother? Cards have gotten far more parallel than CPUs (think of it as 16/24 different processors working in parallel instead of pipelines, if that helps), and they've also become unbalanced in the way those processors work.

You could do it, but why bother, when in a single core the various parts communicate far better and make better use of components that only need one unit to accomplish a task (like having 48 shader processors but only a few ROPs)?

As the designs progress toward a unified architecture, you'll find separate cores to be counterproductive.

Correct me if I'm wrong... SLI/Crossfire essentially qualifies as a dual-core GPU... yes, I realize they're two separate cards, but it's the same concept as a dual-core GPU.

If I understand you correctly, you're basically saying CPUs are ideally suited to multiple cores, whereas GPUs are going to see diminishing returns when/if they use multiple cores.
May 3, 2006 7:43:46 PM

Wildcat Realizm 800 features a unique Wildcat Realizm Vertex/Scalability Unit (VSU) and dual Wildcat Realizm Visual Processing Units (VPU) to deliver over 700 GFLOPS of floating-point graphics processing.


It's possible and has been done, but note that this card is now outdated.
May 3, 2006 9:37:51 PM

I agree with the Ape.
GPUs already have 16+ pipes, while CPUs have (at best) 3 pipes.
GPUs seem to scale much better.
So, later we may see GPUs with 96+ pipes, or 32 pipes with 3 shader units per pipe.
Technically, I'd rather call SLI "double" core than "dual" core, because the dies are very separate.
So, no, I don't see dual-core GPUs in the foreseeable future.

What may come is a QUADRUPLE-core chip with 2 dies for the GPU, 1 for a PPU, and 1 die for AI.
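To put a number on the scaling argument above, here's a tiny Python sketch (purely illustrative, not real GPU code): if each "pass" lets every pipe shade one pixel, then P pipes finish an N-pixel frame in ceil(N/P) passes, which is why going from 3 to 16 to 96 pipes pays off so directly.

```python
# Toy illustration of why adding pixel pipes scales throughput
# almost linearly: P pipes finish N pixels in ceil(N / P) passes.
from math import ceil

def passes_needed(num_pixels: int, num_pipes: int) -> int:
    """Passes required when each pipe shades one pixel per pass."""
    return ceil(num_pixels / num_pipes)

frame = 1920 * 1200  # pixels in one 1920x1200 frame

for pipes in (3, 16, 32, 96):
    print(f"{pipes:3d} pipes -> {passes_needed(frame, pipes):7d} passes")
```

Doubling the pipe count halves the pass count, with no cross-core coordination needed, which is the crux of the "more pipes beats more cores" position.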
May 3, 2006 9:40:54 PM

You're not wrong. Personally, I believe that multi-core processing is inevitable, regardless of the type of processor (GPU/VPU/PPU et al.).

It is currently easier to add more cores than it is to increase the processing power of a single core. This is one reason why we don't see 10GHz Pentiums or 6GHz Athlons. Following this argument, it stands to reason that we are far more likely to see a dual-core GPU than one that is actually capable of using all the bandwidth available in a current PCIe lane, much less the lanes of the future (32X, 64X, ad infinitum).

Also, both ATI and Nvidia are developing their own physics processing solutions as we speak. It is not outside the realm of possibility that they would add another core to tackle this task as well. Considering the public's unbelievable desire for more intense gaming performance, and the fact that multi-core technology is already in place and perfectly stable, I feel it is a certainty that by 2008/2009 we will see our first multiple-solution/multiple-core Game Processing Unit.
May 3, 2006 10:00:26 PM

Quote:

Correct me if I'm wrong... SLI/Crossfire essentially qualifies as a dual-core GPU... yes, I realize they're two separate cards, but it's the same concept as a dual-core GPU.


No, you're wrong. It's not the same concept: one is a frickin' mess of other hardware, the other is on a single die or single package. Completely different concept.

Quote:
If I understand you correctly, you're basically saying CPUs are ideally suited to multiple cores, whereas GPUs are going to see diminishing returns when/if they use multiple cores.


No, I'm saying the advantages are different, and so is the architecture. Is the Cell a single core with a lot of co-processors, or multi-core? The Cell would more closely match the VPU layout in CPU form, whereas the Xbox processor, Xenon, would be closer to a multi-core design, since all the components of a 'core' reside on all three.

With VPUs switching to an uneven distribution of units (pipes versus ROPs, pixel-shader ALUs versus TMUs, etc.), and the move to a completely unified design where each segment can do all the required operations, it makes more sense to have everything in one parallel core than to have any divisions in the process.
May 3, 2006 10:30:41 PM

Quote:
You're not wrong. Personally, I believe that multi-core processing is inevitable, regardless of the type of processor (GPU/VPU/PPU et. al)


They are not the same, so it's a spurious argument to simply say it's inevitable. Back that up with something relatable to VPUs.

Quote:
It is currently easier to add more cores than it is to increase the processing power of a single core. This is one reason why we don't see 10GHz Pentiums or 6GHz Athlons.

CPUs and VPUs are not relatable; parallelism has been in graphics chips a lot longer than in CPU design. Your argument only holds for the point at which you run out of surface area due to process constraints, and then you don't save anything by having dual cores, but rather by having a multi-chip setup like SLI, even if the chips are side by side on the same package or the same PCB.

Quote:
Following this argument, it stands to reason that we are far more likely to see a dual-core GPU than we are to see one that is actually capable of using all the bandwidth available in a current PCIe lane, much less the lanes of the future (32X, 64X, ad infinitum).


One has NOTHING to do with the other. PCIe has nothing to do with the design of the chip itself. That depends on how much processing is done on die or on the PCB versus the amount of information that needs to go back and forth between the PC and the card.

Quote:
Also, both ATI and Nvidia are developing their own physics processing solutions as we speak. It is not outside the realm of possibility that they would add another core to tackle this task as well.


Why bother, when the point is to avoid the cost of adding another chip/silicon? The unified design can handle it by dispatching threads to the various individual processors that already exist. And if that's the reasoning, then the PPU makes more sense than the VPU for physics, and their argument for what's better is lost.

Quote:
Considering the public's unbelievable desire for more intense gaming performance, and the fact that multi-core technology is already in place and perfectly stable,


It's not already in place. SLI is already in place and AFR is already in place, but the cards still act independently; why do you think there are AFR and SFR divisions of labour? A true multi-core would be able to process the whole image together, the way a single VPU with multiple pipes does.
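For anyone unfamiliar with the AFR/SFR distinction mentioned above, here's a toy Python sketch of how two SLI/Crossfire cards split work. The function names and structure are purely illustrative, not a real driver API: AFR (Alternate Frame Rendering) alternates whole frames between GPUs, while SFR (Split Frame Rendering) gives each GPU a band of every frame.

```python
# Toy sketch of dual-GPU work division (illustrative only).

def afr_assign(frames, num_gpus=2):
    """AFR: each GPU renders every num_gpus-th frame in its entirety."""
    return {g: [f for i, f in enumerate(frames) if i % num_gpus == g]
            for g in range(num_gpus)}

def sfr_assign(frame_height, num_gpus=2):
    """SFR: each GPU renders one horizontal band of every frame."""
    band = frame_height // num_gpus
    return {g: (g * band, (g + 1) * band) for g in range(num_gpus)}

print(afr_assign(["f0", "f1", "f2", "f3"]))
# AFR: GPU 0 gets f0 and f2, GPU 1 gets f1 and f3
print(sfr_assign(1200))
# SFR: GPU 0 gets scanlines 0-599, GPU 1 gets 600-1199
```

Either way, each GPU works on its assigned chunk independently; neither scheme lets the two chips cooperate on one pixel the way pipes inside a single VPU do, which is the poster's point.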

Quote:
I feel it is a certainty that by 2008/2009 we will see our first multiple solution/multiple core Game Processing Unit.


Perhaps, but I don't see either company heading in that direction, and with the unified model it's actually contrary to the direction they are taking. More likely, what you'll see when they reach their yield limit is something closer to SLI, looking something like IBM's Power 5 multi-chip package, except it would be a board, not a mega socket.

Like I said, designing the chip itself to incorporate two discrete cores makes no sense and would require added silicon where they're trying to reduce it.
May 3, 2006 11:02:15 PM

The Power 5 is a processor, correct? Looks to me like it has multiple dies and, thusly, multiple cores. All on one big honking chip. That's my point.
And when I said that multi-core technology was already in place, I was not referring to GPUs.
I still feel that multi-core processors are the wave of the future. In my opinion your outlook is overly myopic. Single-core CPUs, while a viable option HERE AND NOW, will become a rare breed within 5 years. Ten years from now they may well be nigh impossible to find. It's simply the evolution of processor technology combined with the increasing need for running multiple threads. Oblivion, for instance, shows an improvement with dual-core CPUs. What exactly makes you think that a multi-core GPU could not garner comparable improvements?
I believe that it will happen, you believe it will not. Time will tell.
May 3, 2006 11:34:03 PM

Quote:
The Power 5 is a processor, correct? Looks to me like it has multiple dies and, thusly, multiple cores. All on one big honking chip. That's my point.


No, it's multiple chips containing 2 cores each on one package (i.e. the substrate). I'm not sure if it's 2 cores 1 die per chip or 2 cores 2 dies per chip, but there are two cores per piece of silicon, and then the chips are put on one gigantic package/socket.

The terminology gets confusing, but the Xenon is multi-core (one chip using 3 dies sharing L2 cache), the original P4D 8-series was one package with two dies of 1 core each, and the P4D 9-series is 1 die with 2 cores. The Power 5 is 8 cores across 4 chips. It has a multi-core aspect, but its power is in the number of chips placed on a single package. Now this is my point: 'multi-core' makes little sense; multi-chip will make sense, and even become necessary to an extent, when fab processes reach the limits of Moore's law, or at least reach diminishing returns with regard to surface area and transistor layout and density.

Quote:
And when I said that multi-core technology was already in place, I was not referring to GPUs.


Ok, that's true then of course.

Quote:
I still feel that multi-core processors are the wave of the future. In my opinion your outlook is overly myopic.


Never, but that's your optics on my position.

Quote:
Single-core CPUs, while a viable option HERE AND NOW, will become a rare breed within 5 years.


True, but that's required a heck of a shift in both manufacturing and programming. Also remember that CPU design is extremely different from VPU design; CPUs are already brushing up against Moore's law and are hitting ridiculously faster speeds than VPUs.

Their design constraints are different, too: upgrading a CPU is easy, upgrading a VPU on its own is impossible. You buy a whole new card to get a new VPU, whereas you upgrade to a faster CPU that matches your motherboard. Making it multi-CPU requires a new, different, more expensive mobo, while upgrading the VPU gains nothing from the board, so who cares what the board manufacturer does? All those factors come into play in the multi-package versus multi-core situation.

Also, you still haven't shown the benefit of 2 x 16 pipes versus 32 pipes, let alone addressed the limitations of both a unified shader design and the inability to process using 2 cores as if they were one. Dispatching when it's just pixel/vertex/geometry, versus core A / core B on top of that, is a difficult task, and loops and such would create havoc for cross-core communication; having parallel pipelines that share buffers and such makes far more sense. CPUs only work well with multi-core because of the massive amount of groundwork laid before it by people like myself who've owned dual-proc rigs, and even then the efficiencies are nowhere near those of a pipeline increase in a VPU.

Quote:
Ten years from now they may well be nigh impossible to find. It's simply the evolution of processor technology combined with the increasing need for running multiple threads.


But CPU and VPU design/programming are going in completely opposite directions. CPUs are dividing the work into pre-determined loads, not sharing it based on processor needs: CPU0 still doesn't do half the work while CPU1 does the other half; they send tasks off under specified guidelines. The VPU, on the other hand, divides all its components up and sends work wherever it's needed, all the time. The move to unified shaders means those multiple parts just do whatever task is needed; it's all divided up, with no need to define things anymore (not even between pixel and vertex). It simply says: this is the load, act like this, next!

Having multiple cores would add a level of complexity where you divide up the easily divisible components, then have to recombine them later before recombining them again into a coherent image. So: two divisions and two recombinations, instead of one break-up into parts and one recombination. I just don't see that as being more efficient, regardless of how fast it can be done.
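The "this is the load, act like this, next!" idea above can be sketched in a few lines of Python. This is a loose illustration of the unified-shader concept, not any vendor's actual scheduler: identical units pull mixed vertex/pixel/geometry work from one shared queue, so nothing has to be partitioned up front or recombined across cores.

```python
# Toy sketch of unified-shader dispatch (illustrative only):
# identical units round-robin over one shared queue of mixed work.
from collections import deque

def dispatch(work_items, num_units):
    """Hand mixed work items to identical units from a single queue."""
    queue = deque(work_items)               # one queue, mixed work types
    done = {u: [] for u in range(num_units)}
    unit = 0
    while queue:
        done[unit].append(queue.popleft())  # "this is the load, next!"
        unit = (unit + 1) % num_units
    return done

mixed = ["vertex", "pixel", "pixel", "geometry", "pixel", "vertex"]
print(dispatch(mixed, 2))
```

Note there's no vertex/pixel split anywhere in the dispatcher; contrast that with a two-core design, which would need a rule for splitting the queue between cores and a step to merge the results back into one image.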

Quote:
Oblivion, for instance, shows an improvement with dual-core CPUs. What exactly makes you think that a multi-core GPU could not garner comparable improvements?


Because VPUs are already far more parallel than CPUs, and turning one into essentially 'fast SLI/Crossfire' at the core level will not improve performance as much as doubling the pipelines or ALUs/TMUs/etc. would.

Quote:
I believe that it will happen, you believe it will not. Time will tell.


Just like everything else, time is the only arbiter, but my statements about the probability of these options are based on the current benefits and limits of the designs, and on the direction both companies are headed, whereas your basis is only what worked for general-purpose CPUs.

Multiple threads are already here; multiple packages on a card are a sure thing, maybe eventually looking like the Voodoo 5, and will become more necessary as fab processes get to be an issue, IMO.

But moving from massively parallel, high-count shaders/pipes to separate cores? I think it's highly unlikely, and nothing that's been said here has changed my view on why it's impractical.
May 3, 2006 11:59:08 PM

Granted. I understand what you're saying and I appreciate your input. Who knows what the future holds? All I'm truly certain of is that it's gonna kick some a$$.
May 4, 2006 12:01:25 AM

Quote:
All I'm truly certain of is that it's gonna kick some a$$.


That's for sure, and it will be brought to you by Matrox and PowerVR, none of this ATi/nV crap! :twisted: