
GPU rendering vs. CPU creating binary data - for 2D charting?

Hi there

"A graphics card is a computer's data-to-image translator. It takes the binary data from the central processing unit (CPU) and turns it into a screen image.
The CPU works with software to send data about the desired image to the graphics card.
The graphics card is responsible for working out the details of exactly how the pixels on the computer's monitor will create that image. After it does that, it sends the image through a cable to the computer's monitor."

I am building a PC system, and trying to understand whether I need to focus on optimizing the GPU or the CPU, for running charting software.

This image is of the software. A few times a second, each value is updated: lines move, change color, and text changes. The main point is that far fewer pixels change, and much less often, than in any kind of gaming or even video playback.
For most of the items displayed, the PC needs to compute and calculate values, so it is not simply downloading and displaying raw data. Where does this calculating get done? (GPU, CPU, or a combination?)

Since there is no aspect of 3D rendering to the image, how much work would a GPU do to create this moving image?

My current computer suffers lag when processing/displaying 6 of these charts... but I am wondering whether the bottleneck is the GPU or the CPU. (I can confirm that neither RAM nor the HDD is the limiting factor.)

Stats - current system is a Core 2 Duo 2.26GHz with 4MB L2 (Core 2 has no L3), 2GB RAM @ 1066MHz, a dedicated Mobility Radeon HD 3650 (256MB), and a 7200rpm HDD, running XP 32-bit.
When charting:
-Both CPU cores move between 20% and 80% usage. At slow times, the 2nd core drops to around 0-40%.
-The software uses about 100MB of RAM.
-The HDD has a disk queue length of 0 most of the time, occasionally spiking to 100.

I am wondering which would provide the most benefit:
1) An i7 4-core (8-thread), 3.4GHz, 8MB L3, using the integrated HD 4000
2) An i5 4-core, 3.4GHz, 6MB L3, with a GeForce GTS 450 (about 100-200% more powerful than the HD 4000)

It depends on whether to focus on the CPU or the GPU.
  1. Best answer
    For work like that, it's all done by your CPU.

    As a rule of thumb, only software supporting OpenCL or CUDA will use your GPU for compute.
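To illustrate that point: the per-update work a 2D charting app does (say, a moving average over incoming values) is ordinary arithmetic that runs entirely on the CPU unless the software is explicitly written against CUDA/OpenCL. A minimal Python sketch, with hypothetical names, not taken from any actual charting package:

```python
# Hypothetical per-tick computation for one chart series: a simple moving
# average over the last few samples. This is plain CPU work; nothing here
# touches the GPU. Only code written against CUDA/OpenCL would.
from collections import deque

def moving_average(window):
    """Return an updater that keeps a running mean of the last `window` samples."""
    buf = deque(maxlen=window)
    def update(value):
        buf.append(value)
        return sum(buf) / len(buf)
    return update

ma = moving_average(3)
print(ma(10.0))  # 10.0
print(ma(20.0))  # 15.0
print(ma(30.0))  # 20.0
print(ma(40.0))  # 30.0  (the 10.0 fell out of the window)
```

Multiply this by a few updates per second per chart and you get the kind of load the CPU cores are carrying; the GPU only blits the resulting pixels.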
  2. So that would mean the GPU is doing relatively little work. Would an integrated Intel HD 2000 be more than sufficient for these types of charts?
  3. Yes, but you should be looking much more at the CPU side of things; I would shoot for that i7.
  4. I'm going to monitor the actual GPU use tomorrow.

    The i7-3770 is the first choice. The 3770K is 100MHz faster for a bit higher cost.
    The top choice would be the 6-core (12-thread) i7-3970X: 3.5GHz and 15MB of L3... on the wish list for the future.
  5. There are also Xeon alternatives.
  6. Sounds like your budget is limited. If you're willing to overclock, make sure you get a K-series chip. Even if you save a few dollars and get, say, an i5-3570K, you can easily overclock it to 4.2GHz and it will run a bit faster than that i7. Of course, you can do the same with a 3770K chip.
  7. vmem said:
    Sounds like your budget is limited. If you're willing to overclock, make sure you get a K-series chip. Even if you save a few dollars and get, say, an i5-3570K, you can easily overclock it to 4.2GHz and it will run a bit faster than that i7. Of course, you can do the same with a 3770K chip.

    Watching the GPU monitor, it appears that even with 8 charts running, GPU usage stayed below 15%.
    The i7's HD 4000 is about 100% more powerful than my current GPU, so that would mean GPU use of about 8%.
    The i5's HD 2500 (about 10% more powerful) would mean GPU use of about 15%.

    Here is about the maximum CPU use that I've seen:
    (During the fastest times, Core 1 will go as high as 90%)

    During slow times, the 2nd Core is between 0 and 20%

    What exactly does this graph show?
    1) Does it show a queue (data waiting to be processed),
    2) or does it show CPU run-time, i.e. the % of the CPU's maximum processing ability being used?

    In other words, say the CPU graph shows 90%, and another batch of data reaches the CPU to be processed. Would it be processed any more slowly than if the graph were showing 10%?
  8. The graph shows CPU run-time (which is never quite current, since generating the graph itself uses a bit of CPU power ;-)), hence why it's called CPU usage "history".

    When the CPU usage graph shows 90%, it means the current load is taking 90% of the CPU's processing power. It doesn't necessarily mean that, because you have 10% left over, you won't see improvement by moving to a faster CPU. CPU usage is a complicated thing that I don't fully understand, but you can think about it this way:

    those spikes up to 90% usage would be shorter and less frequent with a faster CPU, and you might get better performance.

    However, I notice that you're mainly using a single core... so moving to a fast, modern quad-core CPU may not actually benefit you much
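The "busy fraction" idea above can be put in numbers (all hypothetical): the usage graph reports how much of each sampling interval the CPU spent working, not a queue of waiting data.

```python
# Hypothetical sketch: Task Manager-style CPU usage is busy time divided by
# the length of the sampling interval, expressed as a percentage.
def cpu_usage_percent(busy_ms, interval_ms):
    """Fraction of one sampling interval the CPU spent busy, as a percent."""
    return 100.0 * busy_ms / interval_ms

# 900 ms of work inside a 1000 ms sample shows up as 90% on the graph.
print(cpu_usage_percent(900, 1000))  # 90.0
```

A faster CPU finishes the same 900 ms of work sooner, so the busy fraction drops and new data is less likely to arrive while the core is still occupied.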
  9. I actually started a thread on the CPU forum, and they are saying the same thing: the % is the percentage of time the CPU is busy processing. So if it's busy processing 90% of the time, a faster CPU would let it finish each job sooner and sit idle a greater percentage of the time, and new data gets processed sooner when the CPU is not already busy.
    At least, that's how I interpret the replies :)

    Anyway, it seems clear: a faster CPU will help.

    I am looking at a few options (still waiting to find out whether the software can use multiple threads). All Ivy Bridge, so 1600MHz RAM can be used:

    -An i3 at 3.4GHz would be a big jump from the current 2.26GHz, and still has 2 cores (and a slightly better GPU than the current one).

    -An i5 K would provide 3.4GHz plus overclocking. 4 cores should be more than I can utilize. (The HD 4000 in the K variant would be about twice as powerful as the current GPU.)

    -An i7 at 3.5GHz would also allow overclocking, though for now I don't think the extra threads (8) would be used, so the i5 seems best to start with. I can always upgrade to an i7 later.
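A back-of-the-envelope for why the faster CPU helps even before the graph hits 100% (all numbers hypothetical): if each round of chart updates costs 9 ms of CPU time and updates arrive every 10 ms, one core sits 90% busy; a CPU twice as fast halves the per-update cost, leaving far more idle headroom.

```python
# Hypothetical model: core utilization as (per-update CPU cost / speedup)
# relative to the interval between chart updates.
def utilization(work_ms_per_update, update_period_ms, speedup=1.0):
    """Percent of time a core is busy servicing periodic chart updates."""
    return 100.0 * work_ms_per_update / (speedup * update_period_ms)

print(utilization(9, 10))               # 90.0  (current CPU)
print(utilization(9, 10, speedup=2.0))  # 45.0  (a CPU twice as fast)
```

This is why the 2.26GHz-to-3.4GHz jump matters more here than extra cores the software may never use.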
  10. I agree, an overclocked i5 is the highest you can go at the moment. You can get creative and overclock an i3 via the BCLK, but that's a bit more advanced :)
  11. Highest for the cheapest. I've never OC'ed before, so I think I'd stick with the K-series i5.

    I see you have your 3.3GHz i5 running at 4.6GHz. What does that do to the CPU temps?
    I intend to keep the PC silent, or at least quiet from a few feet away, using fans. No jet-engine noise, as I have had previously :)

    I think I'd start with something closer to 4.0. I don't know, I have to start learning first.
  12. At 4.6GHz, my CPU hits around 72C at 100% load. As long as you have a roomy case with some decent fans, you're fine. Besides, most people can hit 4.0 or 4.2GHz on an i5 without raising the voltage, so it will run a bit cooler and quieter.
  13. I would start by moving the multiplier to 40x, keeping the BCLK at 100MHz. That gives you 4.0GHz, and chances are you will be stable. Run something like Prime95 for 10-15 minutes to test stability. If you don't BSOD, move up a notch to maybe 41x. Continue until you BSOD, then back off to your last stable setting. Then run a stress tool such as Prime95 for a longer period: I'd say a minimum of 1 hour, ideally overnight or 24 hours, to prove true stability. If you don't BSOD there, you're good to go :)

    The CPU section has some great guides on overclocking!
  14. Thanks!
  15. Best answer selected by jonjan.