
HyperThreading & Multiple CPUs

January 8, 2003 1:23:54 AM

HyperThreading & Multiple CPUs - discuss.

I am interested in, and looking forward to, being educated on the benefits that multi-CPU users might obtain with the advent of HyperThreading technology. My main question is: will HyperThreaded code be able to distinguish the virtual CPU from an actual extra CPU? For many years I have been confused about why multi-CPU systems haven't taken off; instead we are prepared to cook semiconductors in an effort to reach 4GHz when a pair of 2GHz chips would in theory be faster (plus cooler and cheaper). HyperThreading seems to be a step in the right direction, but why create a virtual second CPU when another can be physically added?
After all, the world's fastest computers are not single-CPU entities.

Dr. D
January 8, 2003 6:50:17 AM

Stop saying "2 CPUs" - HyperThreading is an SMT (Simultaneous MultiThreading) implementation.


Now what to do??
January 8, 2003 7:19:54 AM

I'm _really_ not an expert, but I've done enough reading to know that your scenario is not correct. Coupling two 2GHz chips would not give performance in excess of a single 4GHz chip, and in fact would likely give poorer performance. Why is this? I think it has to do with the way a computer splits up the workload - it's impossible to split up every task perfectly and efficiently across the two chips so that they each do half of the work. In fact, the number of places where this is feasible is quite limited, so you pretty much have one CPU doing a lot of work while the other sits idle. However, this is not true in all cases. I think Adobe Photoshop is the golden counterexample - it has purposely been written so that some of its filter operations can be split up across multiple processors, and it does indeed benefit from such a scenario.
Remember though, that such cases are relatively rare. I know the following is probably a major oversimplification, but imagine trying to split up an addition/multiplication problem, or maybe a matrix inversion (ubiquitous among graphics processing). Try sitting down three of your friends and ask two of them to work in a team against the third - see who can invert a 5x5 matrix first. I bet the two don't finish twice as fast, and wouldn't be surprised if they came in around the same time as the one...
The problems that supercomputers and some other "distributed" applications solve are specially written such that they can easily be tokenized and each piece sent to an autonomous processor. I think folding@home or whatever that screensaver is (you guys know what I mean? the offshoot of SETI@home...) is a perfect example of this.


Does that make sense? Please feel free to correct me, as I'm a little fuzzy...
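A toy sketch of the "splittable" kind of workload described above, using Python's standard multiprocessing module (the chunking scheme and function names here are purely illustrative): each work unit is independent, so the workers never need to talk to each other.

```python
# Illustrative sketch: an "embarrassingly parallel" workload in the spirit of
# Folding@home / SETI@home. Each chunk is computed independently, so the work
# splits cleanly across processes -- unlike the matrix-inversion race above.
from multiprocessing import Pool

def work_unit(chunk):
    # Stand-in for one independent piece of work.
    return sum(x * x for x in chunk)

def run_parallel(data, n_workers=2):
    # Deal the data out round-robin; any disjoint split would do.
    chunks = [data[i::n_workers] for i in range(n_workers)]
    with Pool(n_workers) as pool:
        return sum(pool.map(work_unit, chunks))

if __name__ == "__main__":
    data = list(range(1000))
    # Same answer as the serial sum(x * x for x in data), just split in two.
    print(run_parallel(data) == sum(x * x for x in data))
```

By contrast, a task with tight data dependencies (like the matrix inversion example) can't be dealt out this way without constant coordination between the workers, and that coordination eats the speedup.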
January 8, 2003 11:28:13 AM

To gain the full benefit of HyperThreading, applications will have to be coded specifically for it. Is there that big a difference between a virtual extra CPU, as in HT, and an actual one? Obviously, to perform a single mathematical operation such as matrix multiplication, a multi-CPU implementation is harder, but my point is that HyperThreading will have the same problem. As I see it, the point of HyperThreading is to separately deal with two distinct tasks, be they within the same application or in two different ones. (An example used in THG was a game where the AI and the engine were dealt with distinctly by the virtual CPU.) Isn't this just a fudged multi-CPU setup which would run better with an extra physical CPU?

Dr. D
January 8, 2003 12:02:02 PM

Quote:
My main question is will HyperThreaded code be able to distinguish the virtual CPU from an actual extra CPU

I think the idea is that HT is totally transparent to the software, so no - code will not be able to tell the difference between 1 HT CPU or 2 non-HT CPUs.


Quote:
a pair of 2Ghz chips would in theory be faster (plus cooler and cheaper)

Cooler and cheaper maybe, but definitely <i>not</i> faster. In any MP system there is always some overhead involved, because the two CPUs have to have <i>some</i> method of sharing out tasks. Because of this overhead, an SMP system will gain less and less from each additional processor added.

Quote:
After all, the worlds fastest computers are not single CPU entities.

No, but they've been designed that way from the beginning, and are nothing like the x86 platform. I <i>think</i> they're much more like multiple <i>computers</i> than simply multiple CPUs.


---
:smile: :tongue: :smile:
January 8, 2003 1:07:00 PM

In effect a 4GHz chip is not twice as fast as two 2GHz chips, most notably for FP calcs. Clock speed doesn't correlate linearly with effective speed. As the chemist pointed out, Adobe Photoshop is a good example. All that is required is that apps be coded to utilise multiple CPUs properly. This quest for speed by simply increasing CPU clock speed is ludicrous and ultimately doomed as we reach the fundamental transistor size limit. I must point out as well that you already have, to an extent, a dual-CPU PC system in place. The advent of dedicated graphics cards over ten years ago initiated this; CPUs now hand off video calcs to your card. Arguments similar to the ones I have heard today were dug up when people proposed using dedicated graphics cards for video calcs. Would you use software mode now?

Now, as you might have noticed, I am playing devil's advocate here to an extent. I was aware of a lot of the points being made here and appreciate the people who without a doubt are better informed and have kindly humoured me without being dismissive. Nevertheless, my underlying point is: why are we bothering with virtual HT on CPUs near meltdown when we could do this properly? I think Intel may live to regret opening this Pandora's box on the x86 platform, because all they have effectively done is endorse multi-CPUs and negate their self-led belief that clock speed is the be-all and end-all of computing power. If someone had the balls, say Macintosh (as much as I dislike them), they could trounce the x86 systems for effective power with multiple CPUs (at lower clock speeds) in an architecture and OS designed for them.

Dr. D
January 8, 2003 2:24:07 PM

Well, I think that the reason why you should still use HT is because you can. It finally puts the P4's performance up to what it should have been from the beginning. I don't think that dual-CPU systems are bad, but they are more expensive. Not only do you need a motherboard that supports two CPUs, but you need a more expensive OS. XP Home supports HT, but not two physical CPUs. You could argue that most computer enthusiasts already have XP Pro, but it is too costly to have two marketing plans... one for enthusiasts and the other for the other 90% of their buyers.

Intel does promote multi-CPU systems, just not with the P4. Buy a Xeon. The P4 was meant for desktop PCs. If you need a high-performance server, you probably wouldn't choose the P4. The majority of computer buyers don't have any need for a dual-CPU system... especially when single CPUs are as fast as they are.
January 9, 2003 12:40:44 AM

“Well, I think that the reason why you should still use HT is because you can. “

I didn't say not to use HT; I said that it is basically a fudged multi-CPU solution.

“It finally puts the P4's performance up to what it should have been from the beginning. I don't think that dual CPU systems are bad, but they are more expensive. Not only do you need a motherboard that supports 2 CPU's, but you need a more expensive OS.”

More importantly, it isn't cheaper either. A 3GHz Pentium 4 will cost you around $680, while two 2GHz Pentium 4s will cost you $360 (2 x $180). If you test the HT 3GHz against a dual-CPU system at 2 x 2GHz doing some encoding or Adobe work, the dual-CPU system will trounce the HT single CPU. If apps like games were coded in a similar way, the same benefits would appear. Read Tom's article on HT and what the programmers are saying.


“XP Home supports HT, but not two physical CPU's. You could argue that most computer enthusiasts already have XP Pro, but it is too costly to have two marketing plans... one for enthusiasts and the other for the other 90% of their buyers.
Intel does promote multi-CPU systems, just not with the P4. Buy a Xeon. The P4 was meant for desktop PC's. If you need a high performance server, you probably wouldn't choose the P4. The majority of computer buyers don't have any need of a dual CPU system... especially when single CPU's are as fast as they are.”

I am a bit confused as well as to why you view this as just a "high end server technology". The very fact that HT is here contradicts your own argument. Single x86 CPUs are reaching their performance limit. Personally, I think this is probably an important first step towards the adoption of multi-CPU technology in mainstream computing. The next battle won't be for GHz but for the number of CPUs x GHz.
January 9, 2003 1:27:08 AM

I happened to configure a dual-Xeon server a couple of days ago. I believe Task Manager under the Win2K Server OS showed 4 processors... two for each CPU. Hope this answers your question. I would expect the same from a dual P4 w/ HT system.
January 9, 2003 11:20:51 AM

What is the theoretical size limit for transistors? They continue to shrink as technology reduces circuit size - next up: extreme ultraviolet lithography, with its shorter wavelength and finer discrimination.

Northwood is now manufactured on a 0.13-micron process and will be going to 0.09, then 0.065, then 0.032 micron - for the 32nm process they will use Extreme UltraViolet lithography. Transistor gates are shrinking also.

So-who knows how small we can go, with short-wavelength lithography, improved dielectrics etc.?

<font color=red>I need to OC....just to read the posts faster!</font color=red>
January 9, 2003 11:44:05 AM

The fundamental limit is determined by quantum mechanics. We haven't got far to go in terms of performance increase until the distance between two tracks reaches the quantum limit (and electrons can tunnel between them).
January 9, 2003 1:48:53 PM

Quote:
I am interested and looking forward to being educated on the benefits that multi-CPU users might obtain on the advent of HyperThreading technology.

The answer is simple. Intel hopes that more software will become multi-threaded, and in so doing, multiple CPU systems might actually become worth the effort to the average user... in theory.

Quote:
My main question is will HyperThreaded code be able to distinguish the virtual CPU from an actual extra CPU.

I believe you mean multi-threaded code, not HyperThreaded. The point of HT is that multi-threaded code can theoretically use the full potential of the CPU. The only real 'HT-specific' coding that needs to be done is in the OS itself. As for whether it can tell the difference between a real CPU and an imaginary HT one, the OS <i>should</i> allocate resources in the order of Real-Real-Imaginary-Imaginary in a dualie HT box, so any multi-threaded code in a dualie HT box will eat up the real processors before it starts to actually use HT. The application's code itself <i>shouldn't</i> require any special knowledge of the exact processors, HT or not. That would defeat the whole purpose of HT being virtually seamless.
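A small sketch of that transparency, using Python's standard os and threading modules: the program just asks how many logical processors exist and spawns that many threads; it neither knows nor cares which of them are HT "imaginary" CPUs (whether cpu_count() reports twice the physical cores depends on the machine).

```python
# Sketch: application code sees only N logical processors; whether some of
# them are HT "imaginary" CPUs is invisible here -- the OS does the mapping.
import os
import threading

def worker(results, i):
    results[i] = i * i  # trivial stand-in for real per-thread work

logical_cpus = os.cpu_count() or 1  # logical (including HT) processors
results = [0] * logical_cpus
threads = [threading.Thread(target=worker, args=(results, i))
           for i in range(logical_cpus)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(logical_cpus, results)
```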

Quote:
For many years I have been confused why multi-CPU systems haven’t taken off- rather we are prepared to cook semiconductors in an effort to obtain 4 Ghz when a pair of 2Ghz chips would in theory be faster (plus cooler and cheaper).

No offense, but your confusion clearly stems from a lack of adequate information. Cooler, hell no. Cheaper, not by a long shot. And the answer why is monumentally simple. Writing and debugging good single-threaded code is incredibly easy. Writing and debugging good multi-threaded code is one of the biggest pains in the arse that a programmer will <i>ever</i> have to face. As a result, the vast majority of code is single-threaded. Since single-threaded code can only run on one processor, that means that for the vast majority of software a 4GHz single-CPU system will kick the pants off of a dual 2GHz-CPU system, because that single-threaded code ends up being run on only <i>one</i> of the 2GHz CPUs in the dualie box.

Further, in a direct comparison of a dualie 2GHz to a single 4GHz, the dualie 2GHz won't come even close to twice the performance, even with multi-threaded code. The overhead incurred by multi-threading to keep threads within a process synchronized in both timing and data exacts a noticeable performance penalty. Further, the 4GHz CPU will be able to utilize its resources more effectively than two 2GHz CPUs could utilize theirs, incurring another performance penalty.
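The synchronization overhead mentioned above can be sketched like this (Python's GIL adds its own serialization on top, so treat this only as an illustration of the coordination cost, not a benchmark):

```python
# Two threads updating shared state must serialize through a lock: every
# update pays a lock round-trip, which is time spent coordinating rather
# than computing. This is (part of) why 2 x 2GHz != 4GHz for threaded code.
import threading

counter = 0
lock = threading.Lock()

def add(n):
    global counter
    for _ in range(n):
        with lock:  # synchronization point shared by both threads
            counter += 1

threads = [threading.Thread(target=add, args=(10_000,)) for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 20000: correct, but only because of the lock
```

Drop the lock and the result is no longer guaranteed; keep it and part of each thread's time goes to waiting on the other.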

It's therefore blatantly obvious why dualie systems haven't caught on. Running the vast majority of applications, they suck compared to a single processor that is twice as fast.

And as far as us 'cooking' semiconductors goes, I do not believe you properly appreciate the ability of major CPU manufacturers to reduce the required voltage and keep the heat of a single CPU down. For example, the AMD Athlon 2700+ (2.16GHz) using the ThoroughbredB core puts out a maximum of 68.3W of heat, using 41.4A of power. The AMD Athlon 2100+ (1.73GHz) using the Palomino core puts out a maximum of 72W of heat, using 41.1A of power. And the AMD Athlon 1.4GHz using the Thunderbird core puts out a maximum of 72W of heat, using 41.2A of power. So in other words, AMD has been able to raise the clock speed from 1.4GHz to 2.16GHz and <b>lower</b> the amount of heat output by the CPU.

Quote:
After all, the world's fastest computers are not single-CPU entities.

Only because there are maximums on the clock speeds that a CPU core can be pushed to. Which would you rather have for a super computer: a single 3.06GHz P4 (the maximum speed currently available) or a cluster of 2.8GHz Xeons? Come on.

Quote:
To gain the full benefit of HyperThreading applications will have to be coded specifically for it.

You're very wrong here. The OS, the software that distributes access to the CPUs, does in fact require some special coding to fully utilize HT. Applications however do not. Some <i>may</i> benefit from being coded specifically for it. However, since boxes with HT will be an incredible minority for at least another year, if not five, software engineers would have to be nuts to try to specifically optimize for HT. (Especially when the same time spent optimizing code in other ways would yield much more of a performance improvement and be a universal benefit to all x86 CPUs.) So for now just writing generic multi-threaded code should be more than adequate to utilize HT. However, again, as multi-threaded code is much more complicated than single-threaded code, the chances of a majority of applications being multi-threaded in the near future are slim to none.

Quote:
Obviously, to perform a single mathematical operation such as matrix multiplication, a multi-CPU implementation is harder, but my point is that HyperThreading will have the same problem.

You couldn't be more wrong. An HT-enabled CPU doesn't require code to be multi-threaded to fully utilize the CPU. A dualie system does. So a single-threaded matrix mult will run at full speed on an HT-enabled single-CPU box, whereas a single-threaded matrix mult will only run on one processor in a dualie system. HT-enabled CPUs can run single-threaded software just the same as any non-HT CPU. They merely have the added advantage of pretending to have a second CPU to better utilize resources when running multi-threaded apps.

Quote:
As I see it, the point of HyperThreading is to separately deal with two distinct tasks, be they within the same application or in two different ones.

You're partially right, except that you must further clarify that the point of HT is to separately deal with two distinct tasks that do not fully utilize the CPU individually. Any software that can fully utilize the CPU in its own thread gains little to nothing from HT.

Quote:
In effect a 4 Ghz chip is not twice as fast as two 2 Ghz chips, most notably for fp calcs. Clock speed doesn`t correlate linearly with effective speed.

Just as a system with two 2GHz CPUs is <i>not</i> twice as fast as a single 2GHz CPU system.

Quote:
This quest for speed by simply increasing CPU clock speed is ludicrous and ultimately doomed as we reach the fundamental transistor size limit. I must point out as well that you already have, to an extent, a dual-CPU PC system in place. The advent of dedicated graphics cards over ten years ago initiated this.

This is ludicrous logic. First of all, whatever fundamental transistor size limit may exist is far from having been reached yet. Second of all, the GPU (dedicated video processor) is a highly specialized processor designed specifically to handle graphics processing only. Just try to use it to sort a doubly-linked list and see how far it gets you. It is in no way comparable to having an actual second CPU. Besides, if you wanted to get so inane, we could also count the IDE controllers, the sound card, the LAN controller, etc. ad nauseam. They're <b>all</b> specialized logic chips designed to handle specific tasks only.

Quote:
Nevertheless, my underlying point is why are we bothering with Virtual HT on CPU`s near meltdown when we could do this properly

Simply because of the commonality of single-threaded software compared to multi-threaded software and the highly unlikely event that multi-threaded software will become the majority anytime soon. Further, near meltdown is a laughable point of view considering that modern CPUs are putting out the same heat or less than older CPUs of similar design even though they are at vastly higher clock speeds thanks to improved cores.

Quote:
I think Intel may live to regret opening this Pandora's box on the x86 platform because all they have effectively done is endorse multi-CPUs and negate their self-led belief that clock speed is the be-all and end-all of computing power.

Intel has never claimed that clock speed is the be-all and end-all of computing power. That is a myth invented by people disgruntled with Intel's move to a more scalable Pentium 4 processor. Further, of course Intel is endorsing a move to multi-processor systems. Why sell just one processor per box when you can sell two or four? I'd bet Intel is drooling at the idea of making multi-threaded applications the new de facto standard of software engineering. <b>THAT</b> is the point of HT.

Quote:
If someone had the balls, say Macintosh (as much as I dislike them), they could trounce the X86 systems for effective power with multi-CPU (lower clock speeds) in an architecture and OS designed for them.

Apparently you don't know much about Apple, for they <b>already tried to do that with Macs and failed miserably</b>.

Quote:
More importantly, it isn't cheaper either. A 3GHz Pentium 4 will cost you around $680, while two 2GHz Pentium 4s will cost you $360 (2 x $180). If you test the HT 3GHz against a dual-CPU system at 2 x 2GHz doing some encoding or Adobe work, the dual-CPU system will trounce the HT single CPU.

How many things are wrong with this? Let me count the ways...

First of all, looking at CPU price alone is in no way indicative of a price comparison. Let's look at this more properly:

<font color=red>Retail SuperMicro 860 chipset dual Xeon mobo = $343
Retail Xeon 2GHz with 400MHz FSB = $212
Retail Xeon 2GHz with 400MHz FSB = $212
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Samsung 512MB PC800 40ns ECC RDRAM RIMM = $223
Antec 550W Power Supply = $102
----------
Total for dualie Xeon components = $1761</font color=red>

<font color=green>Retail DFI 850E chipset single P4 mobo = $107
Retail P4 3.06GHz with 533MHz FSB = $632
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Samsung 512MB PC1066 32ns noECC RDRAM RIMM = $251
Antec 400W Power Supply = $60
----------
Total for single 3.06GHz components = $1803</font color=green>

Now these systems are both at about the same price. (Only about a $40 difference.) Both have 2GB of RAM. The dualie is a theoretical 4GHz box. The single is a 3GHz box. Only with rare (and usually very expensive) software like Adobe's will you ever see the dualie box actually perform better than the single-CPU box, even with its theoretical 1GHz lead. For the vast majority of applications, the single-CPU box will trounce the dualie box. And what do you know, they both cost about the same to put together.

Hmm, do I want a computer that will kick arse in <b>all</b> software, or do I want a computer that will only kick arse in very particular software and usually suck at most other software? That really all depends on what software I use the majority of the time. Which is why workstations and servers are rare animals even in big businesses. (And virtually non-existent in SOHO use.)

Quote:
Read Tom's article on HT and what the programmers are saying.

No offense to THG, but those are some pretty bad articles. Why not ask a <b>real</b> programmer instead to find out what programmers are saying? Hmm, well what do you know? I'm one and have been for years. What am I saying? Read the above! What is every programmer that I know saying? (And I know plenty across the whole US thanks to the diverse hometowns of people I met while in the Air Force.) They're saying the same things. (Only not always as politely. The military seems to have a way of teaching people to curse like sailors.)

Quote:
I am a bit confused as well as to why you view this as just a "high end server technology". The very fact that HT is here contradicts your own argument.

Are you kidding or just that ignorant? Look at the price for a 3.06GHz P4, which is the <b>only</b> P4 to officially support HT. Meanwhile, HT has been in Xeons since Xeons were derived from the P4 core, which is a considerable amount of time from a PC-centric point of view. HT <b>is</b> just a "high end server technology" or a high end workstation technology. (Or a toy for the exceedingly wealthy. They always get the coolest stuff.)

Quote:
Single x86 CPUs are reaching their performance limit.

Yeah, and 64-bit processing is the inherent limit of all CPUs.

No offense rgbrgb2001, but you seem to have an incredibly limited misunderstanding of what you're talking about. Go out and read up or give up. That is, unless you want to continue sounding like a fool.


PC Repair-Vol 1:Getting To Know Your PC.
PC Repair-Vol 2:Troubleshooting Your PC.
PC Repair-Vol 3:Having Trouble Troubleshooting Your PC?
PC Repair-Vol 4:Having Trouble Shooting Your PC?
January 9, 2003 2:18:26 PM

Thanks, it's good to know that I have "an incredibly limited misunderstanding". Unlike yourself, who has an incredibly unlimited misunderstanding, possibly picked up in the military along with your aggression.
January 16, 2003 10:26:11 AM

OK, am I wrong?
A dual-CPU 2GHz box isn't as fast as a single 3GHz...
You say that if a program is single-threaded, a single-CPU box whips a dual box... but if I play Unreal and use GameVoice at the same time, that's two threads... plus Windows uses a couple of threads for itself... my guess is that the single CPU has more performance loss than the dual in this case... (swapping between threads...)
My computer has at the moment 400 threads in use?
January 16, 2003 1:37:41 PM

Quote:
A dual-CPU 2GHz box isn't as fast as a single 3GHz...

For the most part. Particular situations can go the other way, but in general that's true.

Quote:
You say that if a program is single-threaded, a single-CPU box whips a dual box...

Very much so, assuming that the total MHz is approximately the same.

Quote:
but if I play Unreal and use GameVoice at the same time, that's two threads... plus Windows uses a couple of threads for itself... my guess is that the single CPU has more performance loss than the dual in this case...

Unless you have seriously misconfigured your PC or are running a vast number of threads that don't really make sense to run, the dualie system will still get munched. Most Windows threads use minimal resources. GameVoice certainly can't use all that much itself. So in our 2GHz dualie box, at best Processor1 is running Windows, misc. background tasks, and GameVoice, while Processor2 is running the game. So the game runs at most at 2GHz.

In our single-CPU 3GHz box, Windows, misc. background tasks, and GameVoice eat up a whopping 25% of the CPU cycles. (And we're being exceedingly generous here. Chances are it's more like 3% to 10%.) That leaves the other 75% of the cycles to the game, or approximately 2.25GHz of the 3GHz's processing power. (2.7GHz at 10%.) So right there the game is running considerably better on the 3GHz box than on the 2GHz box.
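The arithmetic above, made explicit as a tiny helper (the function name and the percentages are just this post's own back-of-the-envelope example, not a benchmark):

```python
# Effective clock speed left for the game on a single-CPU box, given the
# fraction of cycles eaten by Windows, background tasks, and GameVoice.
def effective_ghz(clock_ghz, background_fraction):
    return clock_ghz * (1.0 - background_fraction)

print(effective_ghz(3.0, 0.25))  # 2.25GHz in the generous 25% case
print(effective_ghz(3.0, 0.10))  # about 2.7GHz in the 10% case
```

Either way the game gets more than the flat 2GHz it would get on one processor of the dualie box.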

But then add HT into it, since the 3GHz box <i>is</i> running with HT enabled. Now you're looking at an even better management of resources, which means that much more performance on the 3GHz box.

The <b>only</b> time that an actual multiple-CPU configuration makes sense is when you either run multi-threaded apps regularly or regularly run several major CPU-suckers simultaneously. The vast majority of home PC users will <i>never</i> see more than a minimal benefit from a second processor. Whereas they'll see a <i>major</i> benefit from their single processor running 1.5 to 2 times as fast.

Some power users may lean the other way. For them dualie boxes make more sense. However, those power users are very few and far between.

Quote:
My computer has at the moment 400 threads in use?

If so, you're running an awful lot of crap. I've got a million and one programs running and a ton of background tasks and I only have 350 threads in use between 45 processes. And even then, it's only taking up about 12% of my CPU on a 750MHz P3. On a 3GHz HT-enabled P4 it'd probably be more like 3%, and again, that's a crap load of things running, which is <i>far more</i> than what most people would have running while playing a game.
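For the curious, a thread count like the ones being compared above can be sanity-checked per-process (Task Manager's figure is system-wide; Python's standard threading module only sees the current process):

```python
# Count the threads of the current process that Python knows about.
# A freshly started script typically shows just the main thread.
import threading

def count_my_threads():
    return len(threading.enumerate())

print(count_my_threads())  # at least 1: the main thread is always running
```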


January 16, 2003 2:11:28 PM

Quote:
Thanks, it's good to know that I have "an incredibly limited misunderstanding". Unlike yourself, who has an incredibly unlimited misunderstanding, possibly picked up in the military along with your aggression.

rgbrgb2001, your ignorance on this subject clearly knows no bounds if your posts are in any way an adequate expression of your knowledge. If you had done <i>any</i> research on the subject, you would have already found the questions and counterpoints to your extremely flawed reasoning, because your thinking is <i>years</i> old and in that time much has been done.

I cannot help your ignorance. I can however point it out so that you <i>can</i> go and research and <i>learn</i> where your misunderstandings stem from and what makes them wrong. If you choose to be insulted by my observations of your inadequacy to converse on this subject, then be my guest. Know this however: I honestly don't give one whit about you personally.

If you read my posts you'll find that I hardly ever say to anyone "You're a complete f-ing moron." or directly insult anyone personally. Why? Because I couldn't be bothered to take the time and effort to care about anyone personally. I will however make observations based on people's posts. I simply can't help it if you made such abundant observations of your ignorance blatantly obvious.


January 16, 2003 2:24:17 PM

Thanks for a good answer!
So the CPU doesn't get "fragmented" in its work while running Windows and apps...?

For example, one box has a hard disk with Windows and a game; you start the game and some programs (many threads). Another box has Windows on one disk and games on a second (2 x single thread); you start a game and some programs... I have the second setup and it's faster... or is that no problem until the CPU is at 100%?
Does the single CPU have to halt all but one program at a time at 100%?

Many games claim 100% CPU even in the menus of the game... then there's nothing left for Windows and GameVoice...

Therefore I thought that 2GHz is enough for the game, and Windows + apps have 2GHz...

Four svchost.exe processes use 100 threads on my comp...?
January 16, 2003 3:18:23 PM

GOOD LORD! Give us enough info there, slvr_phoenix? My question is: why haven't they actually upped the FSB on all our systems? No matter how fast a CPU you have, it's still running at 100MHz or 133MHz on the board (166MHz as well). A 533 FSB P4 is still just 133x4. So where is the bandwidth we really need?
January 16, 2003 3:35:04 PM

Quote:
So the CPU doesn't get "fragmented" in its work while running Windows and apps...?

It depends on the applications being run. Most background applications (and GameVoice, I would assume, since it <i>is</i> designed to be run with games) use very few CPU resources. So they don't take away much when the CPU is running an intensive application like a game. For users who mostly do things like this, a single-CPU box makes more sense.

Some other programs on the other hand can eat up significantly more of the CPU resources than background applications. So if you were running, say, a DVD to DivX rip and conversion <i>while</i> playing a game, then the CPU's resources would get badly chunked. In cases like this, a dualie box would be much better as it would allow the two main threads to have most of a single CPU's resources without interfering with each other.

For people who just listen to MP3 files while they surf the web and copy text into Word documents for their essay, no one will care if it's a single CPU or dualie box because even just one processor will never be fully utilized.

Quote:
For example, one box has a hard disk with Windows and a game; you start the game and some programs (many threads). Another box has Windows on one disk and games on a second (2 x single thread); you start a game and some programs... I have the second setup and it's faster... or is that no problem until the CPU is at 100%?
Does the single CPU have to halt all but one program at a time at 100%?

Even if an application is using '100%' of the CPU, this doesn't halt any programs. Windows merely schedules the programs to use the CPU's resources when they can, making sure that each application of the same priority gets the same chance to use the CPU. It may slow down programs that want to use a lot of resources, but it won't halt anything.

So a game with a number of background tasks and other small programs running might end up running at 90 percent (give or take) of the FPS that it would have had without those background tasks. It all depends on just how much those other programs need from the CPU. That's the flaw of a single-CPU system: however many other programs you have running will detract from your main application's (game's) performance. So when playing games you generally try to close as many running programs as possible, to free up as much of the CPU for the game as possible.

A dualie box running a single-threaded game, on the other hand, can only run that game on one CPU. So in a dual 2GHz box, a single-threaded game can only use 2GHz of the system's total 4GHz. So whether you have only five other threads running or five hundred, unless the total CPU usage of those threads exceeds what the other CPU can provide, that game will always have 100% of its one CPU.

So that system can still run tons of other programs while also running that game before that game will ever get slowed down. However, that game is only running at half of the total speed of the computer in the first place.

So which method is better depends entirely on how much of the CPU your other programs want to use while you are playing your game.

Quote:
many games claim 100% cpu even in the menus of the game.... then theres nothing left for win and gamevoice...

Windows manages what processes get access to the CPU's resources, so that they all get an equal share. So if your game runs at 100%, that's 100% of what Windows says is available, not 100% of the CPU itself. So say you have five games running at the same time that all want to use 100%. Windows splits up access between those five, so that each is actually getting only 20%.

(Side Note: Of course, processes and threads can be assigned priority levels to adjust how much Windows gives them. Technically, a process can be assigned a pure 100% priority, or what is called Realtime Priority. Then Windows won't give any other process any access to the CPU. However this is rarely used as it often causes significant problems. Heh heh. For the most part, processes just always use the default priority.)
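The priority idea sketches out the same way. The weights below are invented for illustration (they are not real Windows priority values), but they show the two behaviors described: proportional sharing at equal or raised priority, and realtime priority starving everything else.

```python
# Toy model of priority-weighted time slicing: each process's slice is
# proportional to its priority weight. A "realtime" process is modeled
# as taking everything. Weights are made up, not Windows priority values.

def weighted_shares(procs, realtime=None):
    """procs: dict of process name -> priority weight."""
    if realtime is not None:
        # realtime priority: that one process gets the CPU, others get none
        return {p: (1.0 if p == realtime else 0.0) for p in procs}
    total = sum(procs.values())
    return {p: w / total for p, w in procs.items()}

normal = weighted_shares({"game": 2, "browser": 1, "mp3": 1})
print(normal["game"])      # -> 0.5: the double-weight task gets half

rt = weighted_shares({"game": 2, "browser": 1, "mp3": 1}, realtime="game")
print(rt["browser"])       # -> 0.0: realtime starves everything else
```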

Quote:
therefore i thought that 2Ghz is enough for the game and windows + apps has 2ghz...

It surely should be enough. :)  Depending on how many apps are running at the same time as the game, a single 3GHz CPU may be faster. However, the dualie box can run more apps alongside the game before applications start getting less of the CPU than they want. And any multi-threaded game (I believe the Quake3 engine and all games based on it are multi-threaded) will run faster on the dualie box than on a single 3GHz box, because a multi-threaded application can distribute its threads onto multiple processors, where a single-threaded application can't.
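The structure of that kind of multi-threaded split looks roughly like this (a hypothetical `render_frame` sketch, not real game-engine code; note that in CPython the GIL serializes CPU-bound threads, so this shows the shape of the technique rather than a real speedup):

```python
# Sketch of work split across worker threads: an OS that sees several
# runnable threads can place each on its own CPU. (CPython's GIL keeps
# CPU-bound threads on one core, so treat this as structure, not a
# benchmark; the function names here are made up for illustration.)
from concurrent.futures import ThreadPoolExecutor

def render_chunk(rows):
    # stand-in for per-frame work: just burn CPU on each "row"
    return sum(r * r for r in rows)

def render_frame(all_rows, workers=2):
    # split the frame into independent chunks, one per worker thread
    chunks = [all_rows[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return sum(pool.map(render_chunk, chunks))

rows = list(range(1000))
# same answer as a plain sequential loop, but expressed as two threads
print(render_frame(rows) == sum(r * r for r in rows))  # -> True
```

A single-threaded program has no such seams, so the scheduler has nothing to hand to the second CPU.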

Quote:
4 svchost.exe uses 100 threads on my comp...?

That sounds like an awful lot to me. I've got two running, using a total of 34 threads. SVCHost is supposed to be a wrapper process to run NT services under. I have what I thought was a rather high number of services that get loaded on startup. (My company writes several which I have loaded.) You must have a crap load of services loading on startup to have it using that many threads. :) 


PC Repair-Vol 1:Getting To Know Your PC.
PC Repair-Vol 2:Troubleshooting Your PC.
PC Repair-Vol 3:Having Trouble Troubleshooting Your PC?
PC Repair-Vol 4:Having Trouble Shooting Your PC?
January 16, 2003 3:46:30 PM

Quote:
GOOD LORD! Give us enough info there slvr_phoenix?

I aim to over-verbalize. :) 

Quote:
My ? is why havent they actually upped the FSB on all our systems.. no matter how fast of a cpu you have its still running at 100mhz or 133 mhz on the board.(166 as well) A 533FSB P4 is still just 133x4. So where is our bandwidth we really need??

They have been upping things. Yes, 533 is just 133x4, not a true 533. However, it's x4 because four times as much data (four times the bandwidth) is transferred in each 133MHz cycle; the bus is "quad-pumped".

So as you see, they have been increasing the bandwidth without actually increasing the physical FSB speed. :) 
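The arithmetic behind that "533MHz" number is simple enough to spell out (the rounding to Intel's quoted 4.2GB/s figure comes from using decimal GB):

```python
# The "533MHz" P4 bus in numbers: a 133MHz physical clock, quad-pumped
# (4 transfers per cycle), over a 64-bit (8-byte) data path.
clock_hz = 133_333_333           # the physical FSB clock
transfers_per_cycle = 4          # "quad-pumped"
bus_bytes = 8                    # 64-bit data bus

effective_rate = clock_hz * transfers_per_cycle   # ~533 million transfers/s
bandwidth = effective_rate * bus_bytes            # bytes per second

print(round(bandwidth / 1e9, 1))   # -> 4.3 (GB/s), i.e. Intel's ~4.2GB/s figure
```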

However, rumor has it that Barton is supposed to go up to a 200MHz FSB. And we all know that the next-gen Prescott and Athlon64 (ClawHammer) will be raising FSB speeds as well. (The exact speeds are foggy though, ever since marketing started calling a 2x100 a 200 and a 4x133 a 533.)

So the bandwidth that you really need <i>is</i> scheduled to be on the way. :) 


January 17, 2003 5:31:01 PM

You give very good answers!

what's the difference between p4 slot 478 and xeon slot int-mpga (fcbga)?
They both have a 533 bus speed and cost about the same....
The xeon has HT support?

what's the best buy?
p4, p4 xeon, p4 with HT?
January 17, 2003 6:20:12 PM

>>>why havent they actually upped the FSB on all our systems

Stay tuned for new chipsets coming soon to a computer near you... <A HREF="http://www.anandtech.com/showdoc.html?i=1752&p=2" target="_new">http://www.anandtech.com/showdoc.html?i=1752&p=2</A>

One more note - I recall reading years ago that the motherboard acts sort of like an antenna, and that the faster the FSB, the more RF energy the board radiates. Think TV and radio interference here. I'm not sure at what FSB speed that RF interference really becomes a problem, but it could be one factor limiting FSB increases.

* Not speaking for Intel Corp *
January 17, 2003 7:47:33 PM

Quote:
what's the difference between p4 slot 478 and xeon slot int-mpga (fcbga)?
They both have a 533 bus speed and cost about the same....
The xeon has HT support?

The Pentium 4 comes pretty standard now in a Socket 478 setup. (The first P4s were in a Socket 423.) It just refers to the packaging of the chip to match it up to the socket that it goes into, which has 478 pins.

I'd imagine that "int-mpga" is just an abbreviation of Intel mPGA (micro Pin Grid Array), which is again just a reference to the Socket 478. (Although technically it refers to any socket with a micro pin grid array.)

And fcbga stands for flip-chip ball grid array. Flip chip is a reference to Intel's flipping the die to face outward on the package so that you can plop a heat sink directly on it. A ball grid array is a lot like a pin grid array, but instead of a bunch of small metal pins, you have even smaller balls. I think it's for a shorter electrical path between the CPU and the motherboard's socket, giving better electrical signals and less heat, or some such concept.

Anywho, the point is that the differences are all just in the packaging of the CPU and the socket on the motherboard that the CPU sits in. Pentium 4s use a Socket 478 flip-chip <b>pin</b> grid array. Xeons based on the P4 use a flip-chip <b>ball</b> grid array.

Or something like that.

As far as the actual differences between a Xeon and a regular P4, I believe the Xeon has more pins so that it can play nicely with multiple processors and has HT by default. The only P4 to officially have HT is the 3.06GHz.

Quote:
what's the best buy?
p4, p4 xeon, p4 with HT?

That depends on if you're getting a single or dual CPU system. P4s for single, P4 Xeons for dualie (or more). In theory the one P4 so far with HT is better than the P4s without HT, but unless you're running Windows XP, HT does you no good.

