CPU Performance Boosted 20% When CPU, GPU Collaborate


warezme

Distinguished
Dec 18, 2006
[citation][nom]pg3141[/nom]Will this mean anything for the current generation of hardware?[/citation]
Nothing really. And don't game developers already know this? They've been doing it for some time.
 

Star72

Distinguished
Aug 5, 2010
Not really news. An AMD chip most likely performs better when coupled with an AMD graphics card. It would only make sense on their part.
 

vittau

Distinguished
Oct 11, 2010
Well, transistors are reaching a physical limit, so we need any kind of optimization we can get. Let's hope this technology gets implemented soon...
 

stingray71

Distinguished
Oct 28, 2010
Often when gaming, my GPUs and VRAM are at or near 100% while my system CPU and memory are nowhere near maxed out. Seems to me things would run better if the system memory and CPU were given some of the workload.

I know they are talking about the CPU and GPU being on the same chip in this article, but it seems to me better utilization across the board could be happening.
 

loomis86

Distinguished
Dec 5, 2009
[citation][nom]digitalzom-b[/nom]Maybe lower budget gaming PCs will be able to get away with integrated video solutions in the future.[/citation]

You are completely missing it. This research proves separate GPUs are STUPID. I've been saying for at least a year now that the integrated GPU is the future. Little by little, the separate GPU is going to disappear until only the most extreme GPUs remain as discrete units. Eventually SoC will be the norm. RAM and BIOS will be integrated on a single die someday too.
 
[citation][nom]drwho1[/nom]Could this mean no more dedicated video cards in the future?I hope not.[/citation]
When they get to the point where top video solutions are on the chip with the CPU, we'll all need one mother of a cooling solution.
 
Now this is an old idea, but it's only becoming possible now...
First we had the CPU, and graphics calculations were done by it.
Then they made the GPU, and graphics calculations were shifted onto it.
Then they made Sandy Bridge and Llano, which handled graphics with a graphics unit built in alongside the CPU.
Now they're getting the CPU and the GPU to work together on one single die...
Lots of work, all for the same result...
 

DRosencraft

Distinguished
Aug 26, 2011
Proper integration of the command structure is not only a hardware issue, but also a software one. For example, 3D rendering programs, which one would think would pull much of their work from the GPU during rendering, instead pull all of it from the CPU (note I'm talking specifically about rendering). I would imagine there should be some way of rewriting the code of a program like Maya or Modo so that the GPU helps out. It is a software instruction, after all, that allows multiple cores or even multiple CPUs to work together.
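
To make that concrete, here is a minimal, purely hypothetical C++ sketch (not actual Maya/Modo code, and the shading math is a stand-in): splitting a render-style job across CPU cores is a decision the software makes, and handing the same loop to a GPU would be an equivalent software decision.

[code]
// Hypothetical sketch: the program, not the hardware, decides how work is divided.
#include <algorithm>
#include <cstdio>
#include <functional>
#include <thread>
#include <vector>

// Stand-in for real per-pixel shading work.
static void shade_rows(std::vector<float>& image, int width, int row_begin, int row_end) {
    for (int y = row_begin; y < row_end; ++y)
        for (int x = 0; x < width; ++x)
            image[y * width + x] = 0.5f * x + 0.25f * y;
}

int main() {
    const int width = 640, height = 480;
    std::vector<float> image(width * height);

    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    const int rows_per_core = height / static_cast<int>(cores);

    // The "software instruction" part: carve the frame into per-core slices.
    for (unsigned i = 0; i < cores; ++i) {
        const int begin = static_cast<int>(i) * rows_per_core;
        const int end = (i + 1 == cores) ? height : begin + rows_per_core;
        workers.emplace_back(shade_rows, std::ref(image), width, begin, end);
    }
    for (auto& t : workers) t.join();

    std::printf("shaded a %dx%d frame on %u CPU threads\n", width, height, cores);
}
[/code]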

I would think going the software route would not meet the efficiency angle these researchers seem to be looking at, but it might have better payoffs in terms of performance. Part of why CPUs with embedded graphics controllers aren't more powerful than they are is that the heat generated is too much, which isn't really a problem when the GPU and CPU are on different dies, as in a gaming rig. Again, APUs are more about efficiency for low-power systems, not really performance. So the first uses of the ideas in this article will likely be in smartphones, netbooks/ultrabooks, tablets, or gaming consoles; any device that goes the SoC route. In short, it means much more for the mobile industry than it does for enthusiasts and gamers.
 

shin0bi272

Distinguished
Nov 20, 2007
[citation][nom]loomis86[/nom]You are completely missing it. This research proves separate GPUs are STUPID. I've been saying for at least a year now that the integrated GPU is the future. Little by little, the Separate GPU is going to disappear until only the most extreme GPUs will remain as separate units. Eventually SOC will be the norm. RAM and BIOS will be integrated on a single die someday also.[/citation]

http://www.guru3d.com/article/amd-a8-3850-apu-review/11

As long as you don't mind not being able to upgrade your graphics without upgrading your CPU, and can put up with lower fps and overall system performance... then yes, APUs are "the future". Oh wait, were you speaking of the idea some people have been throwing around that tablet gaming is the future? Yeah, no.
 

zanny

Distinguished
Jul 18, 2008
[citation][nom]warezme[/nom]nothing really, and don't game developers already know this and have been doing this for some time.[/citation]

This is actually not true. Just FYI, credentials-wise: I am a software engineer who doesn't work in gaming but plays plenty of games, and I have used OpenGL, OpenCL, etc.

PC game developers now have technology that lets them compute almost all game logic GPU-side - OpenCL / CUDA - where before it had to be done CPU-side. It is why a game like World of Addictioncraft used a lot of CPU resources when it came out: it did collision detection CPU-side, because it was written against an OpenGL standard that didn't support general-purpose computation on GPUs outside of vector processing.

Today, with OpenCL (you can't make a game that uses CUDA on an Nvidia chip and something else on AMD when you can just write OpenCL once and be cross-GPU), you can do a lot of parallelizable work GPU-side that previously fell outside the fixed vectorization model OpenGL confines GPU processing to.

And the general pipeline of a game engine, at its roots, is: process input (user, network, in-engine message passing) -> update state (each agent reacts, on a tick, to game-world events) -> collision detection (to prevent overlapping models) -> GPU rendering of the game world. Today, everything but processing input can be offloaded to the GPU and done massively parallel through OpenCL / OpenGL.
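
As a rough illustration only (hypothetical types and function names, not any real engine's code), that per-tick loop looks something like the C++ sketch below; the update, collision, and render stages are the parallel parts a team could move into OpenCL/CUDA kernels, while input handling stays CPU-side.

[code]
// Hypothetical per-tick game loop; stage names mirror the pipeline described above.
#include <cstddef>
#include <cstdio>
#include <vector>

struct Agent { float x, y, vx, vy; };

static void process_input(std::vector<Agent>& agents) {
    // Stays CPU-side: user input, network packets, in-engine messages.
    if (!agents.empty()) agents[0].vx += 0.1f;
}

static void update_state(std::vector<Agent>& agents, float dt) {
    // Independent per agent, so a natural candidate for a GPU kernel.
    for (auto& a : agents) { a.x += a.vx * dt; a.y += a.vy * dt; }
}

static int detect_collisions(const std::vector<Agent>& agents) {
    // Pairwise test, also highly parallel (real engines add spatial partitioning).
    int hits = 0;
    for (std::size_t i = 0; i < agents.size(); ++i)
        for (std::size_t j = i + 1; j < agents.size(); ++j) {
            const float dx = agents[i].x - agents[j].x;
            const float dy = agents[i].y - agents[j].y;
            if (dx * dx + dy * dy < 1.0f) ++hits;
        }
    return hits;
}

static void render(const std::vector<Agent>& agents) {
    // Stand-in for submitting the frame to the GPU.
    std::printf("rendered %zu agents\n", agents.size());
}

int main() {
    std::vector<Agent> agents(64, Agent{0.0f, 0.0f, 1.0f, 1.0f});
    for (int tick = 0; tick < 3; ++tick) {
        process_input(agents);
        update_state(agents, 0.016f);
        const int hits = detect_collisions(agents);
        render(agents);
        std::printf("tick %d: %d potential collisions\n", tick, hits);
    }
}
[/code]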

The next generation of games "should", if properly implemented, use so few CPU resources beyond file and texture streaming, key-event processing, and network packet handling that you might see 10% of one CPU utilized in an extremely high-fidelity game that pushes the GPU to its limit while barely touching the CPU.

It also makes no sense to do any of those parallel tasks CPU-side - GPUs are orders of magnitude faster at that kind of work. It is why an i5 2500K for $225 will last you a decade, while you can spend $1500 on three 7970s in triple CrossFire and have them outdated by 2015. Games are moving to a completely GPU-driven architecture for everything, and that is a good thing: it hugely increases the performance you can get from a game.
 

rohitbaran

Distinguished
[citation][nom]greghome[/nom]Is this anything significant? sounds like another Captain Obvious Statement[/citation]
While the statement is kind of obvious, the work they have done isn't. If this can be included in commercial products, AMD's Fusion will improve further in later hardware iterations.
 

wiyosaya

Distinguished
Apr 12, 2006
[citation][nom]rohitbaran[/nom]While the statement is kind of obvious, the work they have done isn't. If this can be included in commercial products, AMD's fusion will improve further in later hardware iterations.[/citation]
Yes, basically because they put some intelligence behind the decision of what goes to the GPU. Some computational problems make no sense to send to a GPU because they are serial in nature and would take just as long or longer to run on a GPU as on a CPU. SETI@home had work units like this, and many contributors running GPUs complained, because it is an inefficient use of a GPU's power. Fortunately, the people who run SETI@home listened and no longer send highly serial WUs to GPUs. It makes sense, to me at least, to intelligently decide which tasks are best suited to GPUs.
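
As a purely illustrative C++ sketch (the fields and thresholds are made up; this is not the researchers' actual heuristic or SETI@home's scheduler), the kind of dispatch decision I mean looks something like this:

[code]
// Hypothetical scheduler: only ship work to the GPU when it is parallel enough
// to amortize the transfer and kernel-launch overhead.
#include <cstdio>

struct WorkUnit {
    long   elements;          // how many independent items the job contains
    double serial_fraction;   // 0.0 = fully parallel, 1.0 = fully serial
};

enum class Target { CPU, GPU };

static Target choose_target(const WorkUnit& wu) {
    const long   min_gpu_elements    = 100000;  // assumed launch/transfer break-even point
    const double max_serial_fraction = 0.25;    // mostly-serial work stays on the CPU
    if (wu.elements < min_gpu_elements || wu.serial_fraction > max_serial_fraction)
        return Target::CPU;
    return Target::GPU;
}

int main() {
    const WorkUnit big_parallel {4000000, 0.05};  // large and parallel -> GPU
    const WorkUnit big_serial   {2000000, 0.90};  // large but serial   -> CPU
    const WorkUnit tiny_job     {512,     0.05};  // too small to ship  -> CPU

    for (const WorkUnit& wu : {big_parallel, big_serial, tiny_job})
        std::printf("%ld elements, %.0f%% serial -> %s\n",
                    wu.elements, wu.serial_fraction * 100.0,
                    choose_target(wu) == Target::GPU ? "GPU" : "CPU");
}
[/code]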
 

alidan

Splendid
Aug 5, 2009
[citation][nom]vittau[/nom]Well, transistors are reaching a physical limit, so we need any kind of optimization we can get. Let's hope this technology gets implemented soon...[/citation]

They are close, but we are probably a good 10 or so years off from hitting the real limit, at least size-wise.
If we ever figure out 3D stacking, it could make a CPU well over twice as fast and fit in a smaller footprint.

[citation][nom]drwho1[/nom]Could this mean no more dedicated video cards in the future?I hope not.[/citation]

In the far future, yeah, graphics cards are going to go the same way as sound cards. Integrated graphics will be fast enough to do damn near everything for everyone, and the only people who want more will need a specialty item (which probably won't be overly price-inflated, given the size of the chips by then).

[citation][nom]Zanny[/nom]This is actually not true. Just FYI, credentials wise, I am a software engineer that doesn't work in gaming but plays plenty of games. I have used openGL / openCL / etc.PC game developers now have a technology that allows them to compute almost all game logic GPU side - openCL / CUDA - where before that had to be done CPU side. It is why a game like World of Addictioncraft used a lot of CPU resources when it came out, because it did collision detection CPU side because they wrote the game for an openGL standard that didn't support general computation outside vector processing on GPUs. Today, with openCL (you can't make a game that uses CUDA if its an Nvidia chip and something else if it is AMD when you can just write openCL and be cross GPU) you can do a lot of parallelizable things GPU side that were previously outside the vectorization paradigm openGL fixes processing on the GPU to. And the general pipeline of a game engine, at its basic roots, is process input (user, network, in engine message passing) -> update state (each agent reacts on a tick stamp to game world events) -> collision detection (to prevent overlapping models) -> GPU rendering of the game world. Today, everything but processing input can be offloaded to the GPU and done massively parallel through openCL / openGL. The next generation of games "should", if properly implemented, use so few processor resources besides file and texture streaming and processing key events and handling network packets that you might get 10% of one CPU utilized in an extremely high fidelity game that pushes the GPU to the limit but barely uses any CPU resources.It also makes no sense to do any of those parallel tasks CPU side either - GPUs are orders of magnitude faster at that stuff. It is why an i5 2500k for $225 will last you a decade but you can spend $1500 on 3 7970s in triple Crossfire and have them be outdated by 2015. Games are moving into a completely GPU driven architecture for everything, and it is a good thing. It hugely increases the performance you can get from a game.[/citation]

I don't like the idea of a game running solely on the GPU. Look at PhysX on a lower-end card: you have to scale it back to the point where it may as well not be there just to get the game running at a decent framerate.

 