I'm baffled by this, but I ran a series of benchmark tests (using Systester, which times how long it takes to compute digits of pi via Borwein's quadratically convergent algorithm) and concluded that, somehow, my Intel i7 920 computes these operations significantly faster with two threads active (each of the 4 cores presents 2 hardware threads) than with one, and that the speed increases slightly as I enable more threads.
Does anyone out there have any idea why? Could it be the processor architecture? I've looked at superscalar execution, superpipelining, and vectorization, but I'm unsure whether these are implemented per core or per thread. Any input would be greatly appreciated.
The processor can handle many instructions at once (ALU, complex FPU, MMU), but when one instruction depends on the result of another, it must wait, creating a pipeline bubble. With Hyper-Threading enabled, the processor fills those empty slots with instructions from another thread, making both threads a little faster (+/- 10%). Except for Atom, all Hyper-Threading processors can decode and process two threads at the same time!
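To see the kind of scaling the question describes, here is a minimal sketch (not Systester itself, and the worker counts and workload size are illustrative assumptions, not taken from the original post) that times a fixed amount of CPU-bound work split across a varying number of workers. The inner loop is deliberately a chain of dependent floating-point operations, the situation where a single thread leaves pipeline slots idle and a sibling hardware thread can fill them:

```python
import time
from concurrent.futures import ProcessPoolExecutor

def burn(n):
    # Each step depends on the previous result, so the operations cannot
    # overlap within one thread; the pipeline stalls between them.
    x = 1.0
    for _ in range(n):
        x = (x * 1.000001) % 97.0
    return x

def timed_run(total_work, workers):
    # Split the same total amount of work across `workers` processes
    # and report the wall-clock time for all of them to finish.
    chunk = total_work // workers
    start = time.perf_counter()
    with ProcessPoolExecutor(max_workers=workers) as pool:
        results = list(pool.map(burn, [chunk] * workers))
    return time.perf_counter() - start, results

if __name__ == "__main__":
    # On a 4-core/8-thread chip like the i7 920, expect big gains up to
    # 4 workers and a smaller extra gain from 5-8 (the HT siblings).
    for w in (1, 2, 4, 8):
        elapsed, _ = timed_run(4_000_000, w)
        print(f"{w} worker(s): {elapsed:.2f} s")
```

Processes are used instead of Python threads here because CPython's GIL would serialize pure-Python compute work; the OS still schedules the processes onto the hardware threads, which is what the benchmark exercises.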