How come you can't use multiple CPUs for the same program?

lw1990

Honorable
Aug 10, 2012
2
0
10,510
I'm new to building custom PCs and understanding PC components in general. I heard that the record for the fastest CPU is somewhere close to 8.5 GHz when overclocked. I'm wondering why someone couldn't set up a computer to use multiple average CPUs, like four 3.4 GHz CPUs, to work on the same program, so the program would have access to 13.6 GHz of number-crunching power (and therefore be faster if it had a lot of work to do)?
 

Majestic One

Distinguished
May 1, 2011
257
0
18,790
8.5 GHz - heh, yeah, that's impractical and unrealistic considering those 'hobbyists' are doing nothing but going for a world record (I mean, they use liquid nitrogen to keep the thing cool).
There are server motherboards that let you use multiple CPUs:
http://www.newegg.com/Store/SubCategory.aspx?SubCategory=302&name=Server-Motherboards
But I'm pretty sure you can't use all 2, 4, or 6 CPUs at once for, say, one game [citation needed]. Mostly these things are used for, well, as you can tell by the mobo category, servers; but they're also widely used by graphics rendering artists, the dudes that MAKE your games LOL. And on top of that, I'm finding out they use workstation video cards for this heavy, graphics-intense rendering and generation, not gaming GPUs.
 
Because software doesn't work like that.

The lowest level that software operates at is the "thread" level. Every process has at least one thread. The OS schedules threads onto the CPU, and the CPU then executes them.

Now things get complicated. Most programs, while they have many threads, will either
A: Have one or two threads that do the majority of the processing, limiting how well the program scales
B: Have many threads, but due to I/O waits, are unable to execute them in parallel

Very few programs are coded in such a way that they will scale across multiple CPUs. It CAN be done, but most programs simply don't work this way, by virtue of their design.
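
To picture what "coded to scale" means, here's a minimal sketch in C++ (the thread count and workload are made up for illustration): the job is split into independent chunks, one thread per chunk, and the OS is free to schedule those threads on different cores or even different physical CPUs.

// Minimal sketch: splitting one job into independent chunks so the OS can
// run one thread per core (or per CPU). Names and numbers are illustrative.
#include <iostream>
#include <numeric>
#include <thread>
#include <vector>

int main() {
    constexpr std::size_t kNumThreads = 4;        // e.g. one thread per core
    constexpr long long kItemsPerThread = 1000000;
    std::vector<long long> partial(kNumThreads, 0);

    std::vector<std::thread> workers;
    for (std::size_t t = 0; t < kNumThreads; ++t) {
        workers.emplace_back([t, &partial] {
            long long sum = 0;                    // each thread works only on its own chunk
            for (long long i = 0; i < kItemsPerThread; ++i)
                sum += static_cast<long long>(t) * kItemsPerThread + i;
            partial[t] = sum;                     // nothing is shared until the join
        });
    }
    for (auto &w : workers) w.join();             // wait for every thread to finish

    std::cout << "total = "
              << std::accumulate(partial.begin(), partial.end(), 0LL) << '\n';
}

Most real programs don't decompose this cleanly, which is exactly why they end up limited by one or two busy threads.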
 

Majestic One

Distinguished
May 1, 2011
257
0
18,790

In other words, lw1990, this CAN be done, but programmers don't really do it because running a single CPU is still standard practice for home computer users; it's the industry standard.
If the world switched to 4-CPU mobos, we would see more programs utilize multiple CPUs and their worker threads.
 

loneninja

Distinguished


Most software won't even use all the cores in a single processor. One thread runs on a single core, not a whole processor, and the reason things are programmed this way is because time = money. Want a more complicated program that scales well across multiple cores? Well, it's gonna take more skilled programmers and more hours of labor.
 

Majestic One

Distinguished
May 1, 2011
257
0
18,790

Yes, I understand; I was just trying to simplify that.
With time will come more sophisticated programming, but I'm going to assume (loosely) that we'll probably have something far different before what this thread is talking about ever happens for basic gaming and simple apps. I'm learning that workstation GPUs actually do this for 3D rendering.
http://www.tomshardware.com/forum/365007-33-hardware-rendering-setup-help
Thanks for more input, I did learn some more myself from you. :hello:
 

PlusOne

Honorable
Jun 20, 2012
44
0
10,540
Keep imparting your wisdom here! I'm a budding computer engineer trying to learn about parallel processing! Haha.

lw1990, in case these answers haven't been sufficient: a processor operates with a number of cores (usually a power of 2). These cores complete an instruction every clock cycle (for the sake of example; in reality an instruction can take longer).

So if you have four cores, they can complete four instructions every clock cycle. Seems like the 13.6 GHz worth of compute power from your example is within reach, right? Not quite...

Code is written in a way where your next operation often relies on the answer to the previous operation. So say the code looked as follows:

add x and y, and save the result to z
now add z and n, and save the result to m.

Well, the computer can't do (x+y=z) at the same time it does (z+n=m), or else the value for z won't have been updated yet.
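
To make that concrete, here's the same idea as a tiny C++ snippet (the variable names and values are just made up for illustration):

#include <iostream>

int main() {
    int x = 1, y = 2, n = 3, p = 4, q = 5;   // example values

    int z = x + y;   // x + y = z
    int m = z + n;   // z + n = m; this can't start until z is ready,
                     // so a second core doesn't help here

    // These two are independent of each other, so the hardware (or a
    // second core) could in principle execute them at the same time:
    int a = x + y;
    int b = p + q;

    std::cout << z << ' ' << m << ' ' << a << ' ' << b << '\n';
}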

Now, to get even more fun, you have shared "caches", or memory if you like. These caches redundantly hold data and instructions from main memory (think RAM or HDD) so that the information can be quickly accessed. There are multiple levels of cache: cores share a cache at certain levels (usually L3), but have their own caches at lower levels for super fast access (usually L1 and L2). If one core changes something in its L1 cache, the other cores' cached copies of that data are invalidated. It takes time for the new information to reach the other L1 caches, so there are even more real-world delays.
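
If you like code, here's a rough sketch of that invalidation effect, the classic "false sharing" pattern (the struct names and the 64-byte line size are my assumptions for illustration):

#include <atomic>
#include <thread>

// Two counters that sit next to each other usually land on the same cache
// line. When one core writes counter_a, the other core's cached copy of
// that whole line (counter_b included) is invalidated, even though the two
// threads never touch each other's data.
struct SharedLine {
    std::atomic<long> counter_a{0};
    std::atomic<long> counter_b{0};
};

// Giving each counter its own cache line (64 bytes is typical) avoids the
// constant back-and-forth invalidation traffic:
struct PaddedLines {
    alignas(64) std::atomic<long> counter_a{0};
    alignas(64) std::atomic<long> counter_b{0};
};

int main() {
    SharedLine s;   // swap in PaddedLines and this loop usually runs faster
    std::thread t1([&] { for (int i = 0; i < 1000000; ++i) s.counter_a++; });
    std::thread t2([&] { for (int i = 0; i < 1000000; ++i) s.counter_b++; });
    t1.join();
    t2.join();
}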

Sorry for the super long answer, but caching really gets me going. The main reason computers are "so slow" these days has to do with how long it takes to access certain information, and caches aim to reduce that time. Perfect caching, while impossible right now and maybe always, would result in much faster start times, response times, and the like.

Hope this helps!