Closed Solved

What good is hyperthreading?

What good is hyperthreading? I have a quad-core processor, the i7-2600K. This processor, like many others since the Pentium 4 days, supports hyperthreading, which makes the OS treat it as if it had not 4 cores but 8. Does this split each core's performance potential in two directions, essentially cutting per-core power in half? Would I get a per-core performance boost if I disabled hyperthreading? For example, LAME converts my WAV files to MP3 at about 100x playback speed. If I disabled hyperthreading, so that each thread got an entire core instead of half of one, would the conversion rate increase to about 200x?
  1. try it

  2. No, hyperthreading is a way of sharing one core's resources between two threads.

    One thread can either use all of the resources or two threads can share the resources. Sometimes one thread cannot use all of the resources in the core, so the sharing actually increases the speed of the processor.

    Back in the P4 days hyperthreading was implemented horribly. They've fixed the architecture since then.
  3. First, it's one of the few CPUs since the days of the P4 that supports HT. When the C2D CPUs came out, they didn't support HT. Second, it doesn't split the resources or anything like that. As Haserath said, it allows parts of the CPU that would otherwise sit idle, while other parts are working on something, to be put to use.

    I'm not convinced HT was "implemented horribly" on the P4 chips. Remember that software back then wasn't well threaded, so HT just wouldn't have helped much. Go back and look at video conversion tests involving the P4. They should show the P4 doing really well. P4s were faster at converting video (a well-threaded task) than the Athlons, until the Athlon X2 came about.
  4. Best answer
    When Intel was designing Nehalem and later CPUs, they had 2 goals in mind - boost single- or low-threaded performance, and boost multithreaded performance - using the principle that if a feature didn't boost performance by at least twice as much as it cost in power or die area, it didn't make the grade into the design. For the first goal, Intel used Turbo Boost to speed up the clocks on the 1 or 2 cores that were actually doing work, downclocking or turning off the other cores not busy with threads.

    For HT, Intel used something under 5% extra die space to duplicate registers and other core resources so that a core could switch from one thread to another and thus appear to be 2 cores (logical cores). According to Intel this would yield up to a 30% performance increase if the first thread had a certain percentage of free clock cycles (i.e., cycles where that thread wasn't actually doing anything except waiting on input from some other thread). In other words, instead of letting that high-powered core slack off and wait, Intel decided to put it to use and have it switch to another thread.

    Where HT doesn't work or actually decreases performance is when the first thread doesn't have any - or many - free clock cycles to make use of. Hence if you force the core to switch to another thread anyway, the first one is going to slow down. And if the second thread is also a heavy thread (no free clocks) then it too won't see as much benefit as it would if it ran on its own physical core.

    So with Nehalem and later designs, Intel tried to cover both ends of the thread spectrum - single thread clock boost and multiple thread core boost.
  5. Thanks for the info - that clears up a lot of things. I guess I'll be leaving hyperthreading enabled.
  6. Best answer selected by ulillillia.
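The resource-sharing idea in the answers above can be sketched with a toy simulation (entirely hypothetical - a real core is far more complex than this one-issue-slot model): a thread that stalls on memory leaves cycles free, and SMT lets a second thread use them, while two stall-free "heavy" threads just end up taking turns.

```python
# Toy model of SMT ("hyperthreading"): one core with a single issue
# slot per cycle. A thread's workload is a repeating pattern of
# 'W' (work) and 'S' (stall, e.g. waiting on memory). The patterns
# and cycle counts are illustrative assumptions, not measurements.

def run_alone(pattern, instructions):
    """Cycles for one thread alone to retire `instructions` work items."""
    cycles = done = i = 0
    while done < instructions:
        if pattern[i % len(pattern)] == 'W':
            done += 1
        cycles += 1
        i += 1
    return cycles

def run_smt(pat_a, pat_b, instructions):
    """Cycles for two threads sharing one core until each has retired
    `instructions` work items. A stall simply resolves with time, but
    only one ready thread per cycle can use the issue slot."""
    pats, ptrs, done = [pat_a, pat_b], [0, 0], [0, 0]
    cycles = 0
    while min(done) < instructions:
        cycles += 1
        issued = False
        for t in (0, 1):
            if done[t] >= instructions:
                continue                  # this thread is finished
            slot = pats[t][ptrs[t] % len(pats[t])]
            if slot == 'S':
                ptrs[t] += 1              # stall resolves regardless
            elif not issued:
                done[t] += 1              # thread wins the issue slot
                ptrs[t] += 1
                issued = True
            # else: ready, but slot already taken; retry next cycle
    return cycles

if __name__ == "__main__":
    # Stall-heavy threads: SMT fills the gaps, so throughput improves.
    print(run_alone("WWSS", 4))           # 6 cycles for one thread alone
    print(run_smt("WWSS", "WWSS", 4))     # 8 cycles for both (vs 12 back to back)
    # Stall-free threads: they just take turns, no gain over serial.
    print(run_smt("WWWW", "WWWW", 4))     # 8 cycles, same as running serially
```

With stall-heavy threads the shared core finishes both in 8 cycles instead of 12 run back to back (~1.5x throughput), matching the "fill the free clock cycles" argument; with no stalls there is nothing to fill, matching the case where HT gives little or no benefit.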