I don't "think" it has anything to do with patents, but rather with the design of the chips themselves.
The Pentium 4 (RIP) had Hyper-Threading (SMT by another name), but the real-world benefit was small to non-existent in anything other than a very small number of server applications.
The reason there was no real-world gain was that the P4, despite its fabulously long 20-stage (31 in Prescott) pipeline, could only actually retire 2 instructions per clock cycle, so no matter how full the pipeline was stuffed with instructions, that 2-instructions-per-cycle ceiling could never be broken.
The Nehalem design is different: it can retire 4 (and, in some rare cases where micro-op fusion has occurred, 5) instructions per clock cycle. It is quite rare for the Nehalem pipeline to be loaded so fully that a single thread has 4 instructions to retire every cycle, so there is room to run a separate thread down the same core to soak up some of the excess retire capacity.
The Phenom core is roughly in between the P4 and Nehalem: it can generally retire 3 instructions per clock cycle. (Intel and AMD use different micro-ops, so a direct apples-to-apples comparison is a bit tricky.) Because of this it does not have the same headroom to do useful work on a second thread the way Nehalem can, so AMD likely has not implemented the feature because it would do them very little good.
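The "spare retire slots" argument above can be sketched with a toy model. To be clear, this is my own simplification, not a cycle-accurate pipeline simulator: a core retires up to `width` micro-ops per cycle, each thread stalls on some fraction of cycles (cache misses, branch mispredicts), and a second thread fills cycles where the first has nothing ready.

```python
import random

def simulate_smt(width, stall_probs, cycles=100_000, seed=1):
    """Toy SMT model (a big simplification, not cycle-accurate):
    each cycle, walk the threads in order; a thread that isn't
    stalled this cycle fills all remaining retire slots."""
    rng = random.Random(seed)
    retired = 0
    for _ in range(cycles):
        slots = width
        for p in stall_probs:
            if slots and rng.random() >= p:
                retired += slots   # thread has a full window of ready ops
                slots = 0
    return retired / cycles        # average instructions retired per cycle

# One thread that stalls 40% of cycles on a 4-wide core vs. two such threads:
one = simulate_smt(4, [0.4])
two = simulate_smt(4, [0.4, 0.4])
print(f"1 thread: {one:.2f} IPC, 2 threads: {two:.2f} IPC")
```

The 40% stall rate is an arbitrary illustration, but the shape of the result is the point: the lone thread averages well under the 4-wide retire limit, and the second thread claws back most of the cycles the first one wastes.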
^ Good explanation, Vorlon. Hadn't really considered it from that perspective, but it makes a load of sense. IIRC, SMT can gain anywhere from slightly negative to +30% in performance, depending on the code being executed, for around 5% extra transistor budget.
Just as with 'native' vs. MCM, much of AMD's disparagement of SMT has more to do with marketing than the facts. And just as AMD's Magny Cours will be MCM, I suspect they will eventually get around to using hyperthreading as well, when they are able.
IBM invented SMT back in 1968, and Intel coined the name Hyper-Threading. AMD is working on building CPUs with up to 12 cores by sometime in '10 and 16 cores in '11, and then they will work on SMT for '12. I think they should be working on SMT now, since software can already use it, unless they can produce a 12-core CPU that still only uses 125W???
I completely disagree with those who said there was no real-world benefit from hyper-threading. On paper, that's true. But in reality, I can't run Windows 7 smoothly on a Pentium 4 3.0GHz 64-bit with NO Hyper-threading; however, the computers with a 2.8GHz Pentium 4 WITH Hyper-threading run as smooth as silk.
I completely understand your concern because hyper-threading really DOES make a difference. Any of you with those old Dell Dimension 2400s should go to the BIOS and enable Hyper-threading. After running some programs, you'll see that this thing can actually multi-task at a rate that exceeds the average computer user's ability to multi-task.
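For anyone who wants to check from the OS side whether that BIOS switch actually took effect, here's a quick sketch. It's Linux-only (it reads the sysfs topology files, which are a Linux convention); on systems where those files are missing it just falls back to treating every logical CPU as its own core.

```python
import os

def smt_status():
    """Report logical vs. physical CPU counts to see whether
    SMT / Hyper-Threading is active.  Linux-only sketch: parses
    sysfs topology files; falls back to os.cpu_count() elsewhere."""
    logical = os.cpu_count() or 1
    cores = set()
    try:
        for cpu in range(logical):
            base = f"/sys/devices/system/cpu/cpu{cpu}/topology"
            with open(f"{base}/physical_package_id") as pkg, \
                 open(f"{base}/core_id") as core:
                # A (package, core) pair identifies one physical core;
                # SMT siblings share the same pair.
                cores.add((pkg.read().strip(), core.read().strip()))
        physical = len(cores)
    except OSError:
        physical = logical  # can't tell; assume no SMT
    return {"logical": logical, "physical": physical,
            "smt_active": logical > physical}

print(smt_status())
```

If `smt_active` comes back False after you've flipped the BIOS option, the OS isn't seeing the extra logical CPUs.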
EDIT: You must also replace the thermal paste on both the CPU and chipset to notice the difference. Stock thermal paste is always crap!!!! Speaking from experience with 500+ thermal jobs and counting.
Hyperthreading is basically nothing but an extra set of registers. Period.
As a result, there are some workloads that do not benefit from hyperthreading because the resources those workloads need [ALU, FPU, etc.] are being used by the "true" core, but certain workloads, especially those that do not need many of the core's other resources, can be significantly sped up by having an extra set of CPU registers. Basically, how full the pipeline is determines how well an HT core performs. [Again, the Pentium 4 was basically an x86 take on a MIPS chip; a LOT of MIPS principles, such as a focus on keeping the pipeline full and on high clocks, are seen in the Pentium 4 design.]
Hyperthreading basically requires an extra register state, so it uses vital die space to create only part of a core, but it's cheaper to implement than a full core. It's a cost-benefit question whether to support some form of HT.