I Feel Like GPUs Have A Lot More to Calculate than CPUs

Is this true ^^? When it comes to graphics cards, I see the extreme detail that goes into every rendering, across millions of pixels, and I think to myself that graphics cards certainly seem to do much more work than CPUs. I see the highly complex trigonometry used in rendering video game graphics and I wonder what makes graphics cards so much better at rendering than CPUs.

So do graphics cards perform a lot more calculations than CPUs? If so, why not make CPUs more like them? I do understand they have different architectures, but still.
 
Solution
The short answer is that they do different things. GPUs do thousands or millions of repetitive, specialized operations in parallel, while the logic on the CPU tends to be more linear, uses a larger instruction set, and branches a heck of a lot more.

For certain kinds of massively parallel computations, the GPU is actually more appropriate. For example, physics simulations often involve doing the same not-too-complex logic against millions or billions of data points per step. And this kind of processing gets pushed off to clusters of machines with high-power GPUs.
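As a rough illustration of that pattern (my sketch, not part of the original answer), here is what one such simulation step can look like in CUDA: one thread per particle, each applying the same trivial update. The kernel name, particle count, and time step are all invented for the example.

```cuda
#include <cuda_runtime.h>

// One thread per particle: the same tiny piece of arithmetic,
// repeated independently across millions of data points.
__global__ void stepParticles(float* pos, const float* vel, float dt, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        pos[i] += vel[i] * dt;   // trivial logic, massive data
}

int main()
{
    const int n = 1 << 24;        // ~16 million particles (arbitrary)
    const float dt = 0.016f;      // one 60 Hz step (arbitrary)

    float *pos, *vel;
    cudaMalloc((void**)&pos, n * sizeof(float));
    cudaMalloc((void**)&vel, n * sizeof(float));
    cudaMemset(pos, 0, n * sizeof(float));
    cudaMemset(vel, 0, n * sizeof(float));

    // Launch enough 256-thread blocks to cover every particle.
    int blocks = (n + 255) / 256;
    stepParticles<<<blocks, 256>>>(pos, vel, dt, n);
    cudaDeviceSynchronize();

    cudaFree(pos);
    cudaFree(vel);
    return 0;
}
```

Each thread does almost no work on its own; the GPU's advantage is simply that it can keep tens of thousands of these threads in flight at once.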
 
Solution
GPUs have limited instruction sets built for specialized operations; they are specialized in completing certain tasks very quickly, in parallel.
CPUs are more versatile, being able to process many different types of data.

During graphically intensive applications (games), yes... the GPU goes into overtime.
But when you're browsing the web the GPU ain't doin squat compared to the CPU.
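To put the "specialized vs. versatile" point into code (a hedged sketch of mine, not something from the answer above): the data-dependent branch below is the kind of logic a CPU core with branch prediction handles comfortably, while on a GPU the threads of a warp that take different sides of the if/else end up executing both paths one after the other.

```cuda
// Branch-heavy, data-dependent logic: fine on a CPU core, but on a GPU
// the threads of a warp that disagree about the branch are serialized
// (warp divergence), so lanes sit idle while the "other" path runs.
__global__ void divergentKernel(const int* input, int* output, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int x = input[i];
    if (x % 2 == 0)            // neighbouring threads will often disagree here
        output[i] = x * x;     // half the warp runs this...
    else
        output[i] = 3 * x + 1; // ...then the other half runs this
}
```

That divergence cost is one reason the GPU only really goes into overtime on uniform, repetitive work like shading millions of pixels, while the browser's branchy, unpredictable logic stays on the CPU.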


 
It's more of an apples-to-oranges comparison. GPUs are strong at parallel processing, and as you mentioned they are excellent at rendering pixels and geometry.

CPUs, on the other hand, are strong in single-threaded applications and are superior number crunchers. I would say it depends on the application which one does more "calculations". CPUs have been becoming more GPU-like by adding more cores, but they still really aren't that close to being the same animal.

Take my work, for instance. I have 3 monitors but run about 12-15 different programs, with 2 monitors on one card and 1 on the other. Because my work is not graphics intensive, the GPUs are basically idle while the CPU is working very hard.
 
CPUs are more serial in nature and GPUs are built for parallelism, but as I understand it, not all programs are suited to the way GPUs are built. That's why, despite the GPU's prowess at crunching numbers, not all software can take advantage of that processing power. So why not build CPUs like GPUs? The problem, I think, is that a lot of software is already built around how traditional CPUs work, and CPU makers can't simply change that, as it would break a lot of things.

Intel, for their part, really did try to make x86 cores mimic GPU parallelism; hence the birth of the Xeon Phi. Meanwhile, GPU makers like Nvidia have invested a lot in convincing more people to take GPGPU seriously, especially on the software side, to the point that it has become a course taught in universities (CUDA). And they have had some success in that effort (Nvidia has already won another two supercomputer contracts for the 2017-2018 time frame). It will be interesting to see how CPU parallelism fares against the GPU's pure number-crunching power.
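For anyone who hasn't seen GPGPU code, the host-side pattern those CUDA courses teach looks roughly like the sketch below (a generic example of mine, not tied to any particular supercomputer workload; the kernel name and sizes are made up): copy the data to the GPU, launch the same kernel over every element, copy the results back.

```cuda
#include <cuda_runtime.h>
#include <vector>

// The canonical GPGPU pattern: the same operation applied to every element.
__global__ void scale(float* data, float factor, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= factor;
}

int main()
{
    const int n = 1 << 20;                 // 1M elements (arbitrary)
    std::vector<float> host(n, 1.0f);

    float* dev;
    cudaMalloc((void**)&dev, n * sizeof(float));

    // 1. Ship the input across the bus into GPU memory.
    cudaMemcpy(dev, host.data(), n * sizeof(float), cudaMemcpyHostToDevice);

    // 2. Launch one thread per element.
    scale<<<(n + 255) / 256, 256>>>(dev, 2.0f, n);

    // 3. Copy the results back to the CPU side.
    cudaMemcpy(host.data(), dev, n * sizeof(float), cudaMemcpyDeviceToHost);

    cudaFree(dev);
    return 0;
}
```

Those copy steps are part of why not every program benefits: if the data set is small, or the algorithm keeps branching back to the CPU between steps, the transfer and launch overhead can eat the speedup.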