I am interested in, and looking forward to, being educated on the benefits that multi-CPU users might see with the advent of HyperThreading technology.
The answer is simple. Intel hopes that more software will become multi-threaded, and in so doing, multiple CPU systems might actually become worth the effort to the average user... in theory.
My main question is whether HyperThreaded code will be able to distinguish the virtual CPU from an actual extra CPU.
I believe you mean multi-threaded code, not HyperThreaded. The point of HT is that multi-threaded code can theoretically use the full potential of the CPU. The only real 'HT-specific' coding that needs to be done is in the OS itself. As for whether it can tell the difference between a real CPU and an imaginary HT one, the OS <i>should</i> allocate resources in the order of Real-Real-Imaginary-Imaginary in a dualie HT box, so any multi-threaded code in a dualie HT box will eat up the real processors before it starts to actually use HT. The application's code itself <i>shouldn't</i> require any special knowledge of the exact processors, HT or not. That would defeat the whole purpose of HT being virtually seamless.
For many years I have been confused as to why multi-CPU systems haven't taken off; instead we are prepared to cook semiconductors in an effort to reach 4GHz when a pair of 2GHz chips would in theory be faster (plus cooler and cheaper).
No offense, but your confusion clearly stems from a lack of adequate information. Cooler, hell no. Cheaper, not by a long shot. And the answer why is monumentally simple. Writing and debugging good single-threaded code is incredibly easy. Writing and debugging good multi-threaded code is one of the biggest pains in the arse that a programmer will <i>ever</i> face. As a result, the vast majority of code is single-threaded. Since single-threaded code can only run on one processor, for the vast majority of software a 4GHz single-CPU system will kick the pants off of a dual 2GHz CPU system, because that single-threaded code ends up running on only <i>one</i> of the 2GHz CPUs in the dualie box.
Further, in a direct comparison of a dualie 2GHz to a single 4GHz, the dualie won't come anywhere close to twice the performance even with multi-threaded code. The overhead that multi-threading incurs to keep the threads within a process synchronized in both timing and data carries a noticeable performance penalty. On top of that, the 4GHz CPU will be able to utilize its resources more effectively than two 2GHz CPUs can utilize theirs, putting the dualie at a further disadvantage.
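To make that synchronization cost concrete, here's a minimal sketch (using Python's threading module, my choice for brevity, not anything the posters above were using): every update to shared state has to pass through a lock, and that locking is pure overhead that a single-threaded version never pays.

```python
import threading

COUNT = 100_000
counter = 0
lock = threading.Lock()

def worker():
    """Increment the shared counter COUNT times, taking the lock each time."""
    global counter
    for _ in range(COUNT):
        with lock:  # every shared update pays for acquiring/releasing this lock
            counter += 1

threads = [threading.Thread(target=worker) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print(counter)  # 200000 -- correct only because of the lock
```

Drop the lock and the final count is no longer guaranteed; keep it and you've paid 200,000 lock round-trips that the single-threaded loop simply doesn't need.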
It's therefore blatantly obvious why dualie systems haven't caught on. Running the vast majority of applications, they suck compared to a single processor that is twice as fast.
And as far as us 'cooking' semiconductors goes, I do not believe you properly appreciate the ability of major CPU manufacturers to reduce the required voltage and keep the heat of a single CPU down. For example, the AMD Athlon 2700+ (2.16GHz) using the ThoroughbredB core puts out a maximum of 68.3W of heat, drawing 41.4A of current. The AMD Athlon 2100+ (1.73GHz) using the Palomino core puts out a maximum of 72W of heat, drawing 41.1A. And the AMD Athlon 1.4GHz using the Thunderbird core puts out a maximum of 72W of heat, drawing 41.2A. So in other words, AMD has been able to raise the clock speed from 1.4GHz to 2.16GHz and <b>lower</b> the amount of heat output by the CPU.
After all, the world's fastest computers are not single-CPU entities.
Only because there are maximums on the clock speeds that a CPU core can be pushed to. Which would you rather have for a super computer: a single 3.06GHz P4 (the maximum speed currently available) or a cluster of 2.8GHz Xeons? Come on.
To gain the full benefit of HyperThreading applications will have to be coded specifically for it.
You're very wrong here. The OS, the software that distributes access to the CPUs, does in fact require some special coding to fully utilize HT. Applications, however, do not. Some <i>may</i> benefit from being coded specifically for it. However, since boxes with HT will be an incredible minority for at least another year, if not five, software engineers would have to be nuts to specifically optimize for HT. (Especially when that same time spent optimizing code in other ways would yield much more of a performance improvement and be a universal benefit to all x86 CPUs.) So for now, just writing generic multi-threaded code should be more than adequate to utilize HT. Again though, since multi-threaded code is much more complicated than single-threaded code, the chances of a majority of applications being multi-threaded in the near future are slim to none.
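That's the whole trick of "generic" multi-threaded code: it just asks the OS how many logical processors exist and spawns that many workers. HT's imaginary CPU shows up in that count, so no HT-specific logic is needed. A rough modern sketch (Python here for compactness; the idiom is the same in any language):

```python
import os
import threading

# HT's virtual CPU shows up in the logical processor count the OS reports,
# so generic code picks it up without any HT-specific knowledge at all.
n_workers = os.cpu_count() or 1

results = [0] * n_workers

def work(i):
    """Stand-in for real per-thread work; each worker fills its own slot."""
    results[i] = sum(range(10_000))

threads = [threading.Thread(target=work, args=(i,)) for i in range(n_workers)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

On a non-HT single CPU this spawns one worker; on an HT box it spawns two; on a dualie HT box, four. Same source, no special cases.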
Obviously, to perform a single mathematical operation, such as matrix multiplication, a multi-CPU implementation is harder, but my point is that HyperThreading will have the same problem.
You couldn't be more wrong. An HT-enabled CPU doesn't require code to be multi-threaded to fully utilize the CPU. A dualie system does. So a single-threaded matrix mult will run at full speed on an HT-enabled single-CPU box, whereas a single-threaded matrix mult will only run on one processor in a dualie system. HT-enabled CPUs can run single-threaded software just the same as any non-HT CPU. They merely have the added advantage of pretending to have a second CPU to better utilize resources when running multi-threaded apps.
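To show what the dualie actually demands of the programmer, here's a toy sketch (Python, my illustration, not anyone's production code) of a matrix multiply explicitly split row-by-row across two threads. This explicit split is the extra work a dualie needs before its second CPU does anything; a plain single-threaded loop would run on one processor only.

```python
import threading

def matmul_rows(A, B, C, rows):
    """Compute the given rows of C = A * B."""
    n = len(B[0])
    k = len(B)
    for i in rows:
        for j in range(n):
            C[i][j] = sum(A[i][p] * B[p][j] for p in range(k))

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
C = [[0, 0], [0, 0]]

# Hand each thread its own disjoint set of rows -- no locking needed,
# but the split itself is work the single-threaded version never does.
t1 = threading.Thread(target=matmul_rows, args=(A, B, C, [0]))
t2 = threading.Thread(target=matmul_rows, args=(A, B, C, [1]))
t1.start(); t2.start()
t1.join(); t2.join()

print(C)  # [[19, 22], [43, 50]]
```

Skip the splitting and the whole multiply is one thread: full speed on an HT box, half the hardware idle on a dualie.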
As I see it, the point of HyperThreading is to separately deal with two distinct tasks, be they within the same application or two different ones.
You're partially right, except that you must further clarify that the point of HT is to separately deal with two distinct tasks that do not fully utilize the CPU individually. Any software that can fully utilize the CPU in its own thread gains little to nothing from HT.
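The same principle can be seen by analogy with two threads that mostly sit stalled (a rough sketch in Python, my own illustration; real HT overlaps pipeline stalls like cache misses, which I'm standing in for with a sleep): run them together and the total time is roughly the longest stall, not the sum of both.

```python
import threading
import time

def stalled_task():
    # time.sleep stands in for a task that isn't utilizing the CPU
    # (an I/O wait or, by analogy, a pipeline stall on a cache miss).
    time.sleep(0.2)

start = time.perf_counter()
t1 = threading.Thread(target=stalled_task)
t2 = threading.Thread(target=stalled_task)
t1.start(); t2.start()
t1.join(); t2.join()
elapsed = time.perf_counter() - start

# Two 0.2s stalls overlap: elapsed is near 0.2s, not 0.4s.
print(round(elapsed, 1))
```

A task that keeps the CPU fully busy has no such idle time to donate to a second thread, which is exactly why it gains little to nothing from HT.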
In effect a 4GHz chip is not twice as fast as two 2GHz chips, most notably for fp calcs. Clock speed doesn't correlate linearly with effective speed.
Just as a system with two 2GHz CPUs is <i>not</i> twice as fast as a single 2GHz CPU system.
This quest for speed by simply increasing CPU speed is ludicrous and ultimately doomed as we reach the fundamental transistor size limit. I must point out as well that you already have, to an extent, a dual-CPU PC system in place. The advent of dedicated graphics cards over ten years ago initiated this.
This is ludicrous logic. First of all, whatever fundamental transistor size limit may exist is far from having been reached yet. Second of all, the GPU (dedicated video processor) is a highly specialized processor designed specifically to handle graphics processing only. Just try to use it to sort a doubly-linked list and see how far it gets you. It is in no way comparable to having an actual second CPU. Besides, if you wanted to get that inane we could also count the IDE controllers, the sound card, the LAN controller, etc. ad nauseam. They're <b>all</b> specialized logic chips designed to handle specific tasks only.
Nevertheless, my underlying point is why are we bothering with virtual HT on CPUs near meltdown when we could do this properly?
Simply because of the commonality of single-threaded software compared to multi-threaded software and the highly unlikely event that multi-threaded software will become the majority anytime soon. Further, near meltdown is a laughable point of view considering that modern CPUs are putting out the same heat or less than older CPUs of similar design even though they are at vastly higher clock speeds thanks to improved cores.
I think Intel may live to regret opening this Pandora's box on the x86 platform, because all they have effectively done is endorse multi-CPUs and negate their own self-promoted belief that clock speed is the be-all and end-all of computing power.
Intel has never claimed that clock speed is the be-all and end-all of computing power. That is a myth invented by people disgruntled with Intel's move to a more scalable Pentium 4 processor. Further, of course Intel is endorsing a move to multi-processor systems. Why sell just one processor per box when you can sell two or four? I'd bet Intel is drooling at the idea of making multi-threaded applications the new de facto standard of software engineering. <b>THAT</b> is the point of HT.
If someone had the balls, say Macintosh (as much as I dislike them), they could trounce x86 systems in effective power with multiple lower-clocked CPUs in an architecture and OS designed for them.
Apparently you don't know much about Apple, for they <b>already tried to do exactly that with Macs and failed miserably</b>.
More importantly, it isn't cheaper either. A 3GHz Pentium 4 will cost you around $680 while two 2GHz Pentium 4s will cost you $360 (2x$180). If you test the HT 3GHz vs a dual-CPU system at 2x2GHz doing some encoding or Adobe stuff, then the dual-CPU system will trounce the HT single CPU.
How many things are wrong with this? Let me count the ways...
First of all, looking at CPU price alone is in no way indicative of a price comparison. Let's look at this more properly:
<font color=red>Retail SuperMicro 860 chipset dual Xeon mobo = $343
Retail Xeon 2GHz with 400MHz FSB = $212
Retail Xeon 2GHz with 400MHz FSB = $212
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Antec 550W Power Supply = $102
----------
Total for dualie Xeon components = $1761</font>
<font color=green>Retail DFI 850E chipset single P4 mobo = $107
Retail P4 3.06GHz with 533MHz FSB = $632
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Antec 400W Power Supply = $60
----------
Total for single 3.06GHz components = $1803</font>
Now these systems are both at about the same price. (A whole $42 difference.) Both have 2GB of RAM. The dualie is a theoretical 4GHz box. The single is a 3GHz box. Only in rare (and usually very expensive) software like Adobe's will you ever see the dualie box actually perform better than the single-CPU box, even with its theoretical 1GHz lead. For the vast majority of applications, the single-CPU box will totally stomp the dualie box. And what do you know, they both cost about the same to put together.
Hmm, do I want a computer that will kick arse in <b>all</b> software, or do I want a computer that will only kick arse in very particular software and usually suck at most other software? That really all depends on what software I use the majority of the time. Which is why workstations and servers are rare animals even in big businesses. (And virtually non-existent in SOHO use.)
Read Toms article on HT and what the programmers are saying.
No offense to THG, but those are some pretty bad articles. Why not ask a <b>real</b> programmer instead to find out what programmers are saying? Hmm, well what do you know? I'm one and have been for years. What am I saying? Read the above! What is every programmer that I know saying? (And I know plenty across the whole US thanks to the diverse hometowns of people I met while in the Air Force.) They're saying the same things. (Only not always as politely. The military seems to have a way of teaching people to curse like sailors.)
I am a bit confused as well as to why you view this is just a "high end server technology". The very fact that HT is here is contrary to your own argument.
Are you kidding, or just that ignorant? Look at the price of a 3.06GHz P4, which is the <b>only</b> P4 to officially support HT. Meanwhile, HT has been in Xeons ever since the Xeon was derived from the P4 core, which is a considerable amount of time from a PC-centric point of view. HT <b>is</b> just a "high end server technology" or a high end workstation technology. (Or a toy for the exceedingly wealthy. They always get the coolest stuff.)
Single x86 CPUs are reaching their performance limit.
Yeah, and 64-bit processing is the inherent limit of all CPUs.
No offense rgbrgb2001, but you seem to have an incredibly limited understanding of what you're talking about. Go out and read up or give up. That is, unless you want to continue sounding like a fool.
PC Repair-Vol 1:Getting To Know Your PC.
PC Repair-Vol 2:Troubleshooting Your PC.
PC Repair-Vol 3:Having Trouble Troubleshooting Your PC?
PC Repair-Vol 4:Having Trouble Shooting Your PC?