OpenCL In Action: Post-Processing Apps, Accelerated

Benchmark Results: vReveal On The FX-8150 And Radeon HD 7970

Using the same platform, we swap out the old Radeon HD 5870 in favor of the newer Radeon HD 7970 to gauge whether the GCN architecture has any bearing on our benchmark results.

When working with only one render effect, an admittedly lightweight metric, there isn't much difference. Even at 1080p, the Radeon HD 7970 leaves our FX processor at 12% utilization, whereas the Radeon HD 5870 dropped the FX's workload to 10%. That's right. Our data shows a slightly higher CPU load with the newer GPU.

Because this result reverses as we apply more effects, it's conceivable that the 7970's compute resources aren't being utilized effectively under the light load, while the heavier burden lets its 2,048 shaders stretch a bit. More importantly, we're still seeing low single-digit CPU utilization with accelerated 480p video and a 4x performance gain at 1080p.

Understandably, there is no visible difference in render speed when switching up to the Radeon HD 7970 here. The test only goes up to 100%, and trading GPUs should have no impact on software-only processing.

Again, in terms of CPU utilization, we’re seeing almost no benefit from the Radeon HD 7970 under our heaviest vReveal load compared to the older Radeon HD 5870, despite the 7970's architectural advantages. This is still good information, though. It tells us that we can't always expect scaling that corresponds to the GPU's potency. Of course, this is going to vary by application and, in some tests, a faster graphics processor absolutely will mean better performance.

Apart from an insignificant nudge of the needle in the 480p software test, these results show the same 100% rendering seen with the Radeon HD 5870.

  • DjEaZy
    ... OpenCL FTW!!!
    Reply
  • amuffin
    Will there be an OpenCL vs. CUDA article coming out anytime soon?
    Reply
  • Hmmm... how do I win a 7970 for OpenCL tasks?
    Reply
  • deanjo
    DjEaZy: ... OpenCL FTW!!!
    You're welcome.

    --Apple
    Reply
  • bit_user
    amuffin: Will there be an OpenCL vs. CUDA article coming out anytime soon?
    At the core, they are very similar. I'm sure that Nvidia's toolchains for CUDA and OpenCL share a common backend, at least. Any differences between versions of an app coded for CUDA vs. OpenCL will have much more to do with the amount of effort its developers spent optimizing each one.
    Reply
  • bit_user
    Fun fact: the president of Khronos (the industry consortium behind OpenCL, OpenGL, etc.) and the chair of its OpenCL working group is an Nvidia VP.

    Here's a document detailing the similarities between CUDA and OpenCL (it's an OpenCL JumpStart Guide for existing CUDA developers):

    NVIDIA OpenCL JumpStart Guide

    I think they tried to make sure that OpenCL would fit their existing technologies, in order to give them an edge on delivering better support, sooner.
    Reply
  • deanjo
    bit_user: I think they tried to make sure that OpenCL would fit their existing technologies, in order to give them an edge on delivering better support, sooner.
    Well, Nvidia did work very closely with Apple during the development of OpenCL.
    Reply
  • nevertell
    At last, an article to point to for people who love shoving a gtx 580 in the same box with a celeron.
    Reply
  • JPForums
    In regards to testing the APU w/o discrete GPU you wrote:

    However, the performance chart tells the second half of the story. Pushing CPU usage down is great at 480p, where host processing and graphics working together manage real-time rendering of six effects. But at 1080p, the two subsystems are collaboratively stuck at 29% of real-time. That's less than half of what the Radeon HD 5870 was able to do matched up to AMD's APU. For serious compute workloads, the sheer complexity of a discrete GPU is undeniably superior.

    While the discrete GPU is superior, the architecture isn't all that different. I suspect the larger issue regarding performance was stated in the interview earlier:

    TH: Specifically, what aspects of your software wouldn’t be possible without GPU-based acceleration?

    NB: ...you are also solving a bandwidth bottleneck problem. ... It’s a very memory- or bandwidth-intensive problem to even a larger degree than it is a compute-bound problem. ... It’s almost an order of magnitude difference between the memory bandwidth on these two devices.

    APUs may be bottlenecked simply because they have to share the CPU's memory bandwidth.

    While an APU's memory bandwidth will never approach a discrete card's, I am curious to see whether overclocking the memory attached to an APU makes a noticeable difference in performance. Intuition says it will never catch a discrete card and, given the low-end compute performance, it may not make a difference at all. However, it would help characterize the APU's performance balance a little better. That is, does it make sense to put more GPU muscle on an APU, or is the GPU portion constrained by memory bandwidth?

    In any case, this is a great article. I look forward to the rest of the series.
    Reply
  • What about power consumption? It's fine if we can lower the CPU load, but it doesn't help much if total power consumption increases.
    Reply