Processor Performance Boosted 21% When CPU, GPU Collaborate

Engineers at North Carolina State University endeavored to improve the way the CPU and the GPU perform together by engineering a solution in which the GPU executes computational functions while the CPU cores pre-fetch the data the GPU needs from off-chip main memory. In the research team's model, the GPU and the CPU are integrated on the same die and share the on-chip L3 cache and off-chip memory, similar to Intel's Sandy Bridge and AMD's APU platforms.

"Chip manufacturers are now creating processors that have a 'fused architecture,' meaning that they include CPUs and GPUs on a single chip," said Dr. Huiyang Zhou, an associate professor of electrical and computer engineering who co-authored a paper on the research.

"This approach decreases manufacturing costs and makes computers more energy efficient. However, the CPU cores and GPU cores still work almost exclusively on separate functions. They rarely collaborate to execute any given program, so they aren’t as efficient as they could be. That’s the issue we’re trying to resolve."

Zhou's solution has the CPU do the legwork: it determines what data the GPU will need and retrieves it from off-chip main memory in advance, leaving the GPU free to focus on executing the functions in question. In the team's simulations, this division of labor improved fused processor performance by an average of 21.4 percent.
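The scheme itself is a hardware technique, but its producer/consumer shape can be sketched in plain Python. In this toy model (all names are made up for illustration, and the Queue merely stands in for the shared L3 cache), a "CPU" thread stages chunks of data ahead of a "GPU" thread that only ever reads from the fast shared buffer — this is not the paper's implementation, just the general idea:

```python
import threading
import queue

def prefetch_and_compute(data, chunk_size=4):
    """Toy model of the NC State scheme: a 'CPU' thread stages chunks of
    data from (simulated) off-chip memory into a small shared buffer,
    while a 'GPU' thread drains the buffer and does the actual math."""
    shared_cache = queue.Queue(maxsize=2)  # stands in for the shared L3
    results = []

    def cpu_prefetcher():
        # The CPU walks ahead of the GPU, fetching the next chunk it will need.
        for i in range(0, len(data), chunk_size):
            shared_cache.put(data[i:i + chunk_size])
        shared_cache.put(None)  # signal: no more data

    def gpu_worker():
        # The GPU never stalls on "main memory"; it only reads the shared cache.
        while True:
            chunk = shared_cache.get()
            if chunk is None:
                break
            results.append(sum(x * x for x in chunk))  # stand-in compute kernel

    t_cpu = threading.Thread(target=cpu_prefetcher)
    t_gpu = threading.Thread(target=gpu_worker)
    t_cpu.start(); t_gpu.start()
    t_cpu.join(); t_gpu.join()
    return results
```

Because the prefetcher runs ahead of the consumer, the compute side never waits on the slow fetch path — the same overlap the researchers exploit in hardware.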

The paper will be presented at the 18th International Symposium on High Performance Computer Architecture in New Orleans later this month.

Follow @JaneMcEntegart on Twitter for the latest news.      

    Top Comments
  • alvine
    in other news SSDs make your system faster
  • pg3141
    Will this mean anything for the current generation of hardware?
  • zanny
    warezme said:
    nothing really, and don't game developers already know this and have been doing this for some time.

    This is actually not true. Just FYI, credentials-wise, I am a software engineer who doesn't work in gaming but plays plenty of games; I have used OpenGL, OpenCL, etc.

    PC game developers now have technologies that let them run almost all game logic GPU side - OpenCL and CUDA - where before it had to be done CPU side. That is why a game like World of Addictioncraft used a lot of CPU resources when it came out: it did collision detection CPU side, because the game was written against an OpenGL standard that didn't support general computation on GPUs outside of vector processing.

    Today, with OpenCL (there is no point writing CUDA for Nvidia chips and something else for AMD when you can just write OpenCL and be cross-GPU), you can do a lot of parallelizable work GPU side that was previously outside the fixed vector-processing model OpenGL imposes on the GPU.

    And the general pipeline of a game engine, at its most basic, is: process input (user, network, in-engine message passing) -> update state (each agent reacts to game-world events on each tick) -> collision detection (to prevent overlapping models) -> GPU rendering of the game world. Today, everything but processing input can be offloaded to the GPU and done massively parallel through OpenCL / OpenGL.

    The next generation of games "should", if properly implemented, use so few CPU resources beyond file and texture streaming, key-event processing, and network-packet handling that an extremely high-fidelity game might use 10% of one CPU while pushing the GPU to its limit.

    It makes no sense to do any of those parallel tasks CPU side, either - GPUs are orders of magnitude faster at that kind of work. It is why an i5 2500K at $225 will last you a decade, while you can spend $1500 on three 7970s in triple CrossFire and have them be outdated by 2015. Games are moving toward a completely GPU-driven architecture for everything, and that is a good thing: it hugely increases the performance you can get from a game.
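The four-stage loop described above can be sketched as a toy Python tick function. Everything here (the agent fields, the 1-D "world") is made up for illustration, and the GPU offload the comment describes is not modeled — this only shows the shape of the pipeline:

```python
def run_tick(inputs, agents):
    """One iteration of the basic game-engine pipeline: process input ->
    update state -> collision detection -> render. 'agents' maps a name
    to {'x': position, 'vx': velocity} in a hypothetical 1-D world."""
    # 1. Process input: each input nudges the named agent.
    for name, dx in inputs:
        agents[name]["x"] += dx

    # 2. Update state: every agent drifts by its velocity each tick.
    for agent in agents.values():
        agent["x"] += agent["vx"]

    # 3. Collision detection: flag agents that ended up in the same cell.
    positions = {}
    collisions = []
    for name, agent in agents.items():
        cell = agent["x"]
        if cell in positions:
            collisions.append((positions[cell], name))
        positions[cell] = name

    # 4. "Render": return a draw list (in a real engine, the GPU's job).
    draw_list = sorted((a["x"], n) for n, a in agents.items())
    return collisions, draw_list
```

In a real engine, stages 2–4 are the massively parallel parts a developer would push to the GPU via OpenCL; only stage 1 must stay CPU side.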
  • Other Comments
  • pg3141
    Will this mean anything for the current generation of hardware?
  • warezme
    pg3141 said:
    Will this mean anything for the current generation of hardware?

    nothing really, and don't game developers already know this and have been doing this for some time.
  • outlw6669
    Anonymous said:
    Will this mean anything for the current generation of hardware?

    It could, if programmers get behind it.