
Why was the P4 made?

Tags:
  • CPUs
  • Intel
April 10, 2004 1:13:56 AM

First off, this is NOT a flamebait post... please read the following.

The success of Dothan has made me wonder why Intel thought it necessary to make the P4 in the first place. Why didn't they just add SSE2 and later HT to the P3 core and make some changes? It seems that, clock for clock, the P3 architecture was pretty damn good. Was the P4 created just because of the GHz war? Please fill me in, as I know many of you here know much more than I do about these things :)
I'm just curious...

"I speak as neither an AMD or Intel fanboy, just giving the facts."


April 10, 2004 8:55:00 AM

I don't think it was a bad move by Intel. Remember how the P3 was stuck at 1000 MHz on the 0.18 micron process? Well, the Pentium 4 achieved 2 GHz on the same process. Even SSE2 and HT wouldn't have made the P3 touch the P4, or the Athlon XP, which reached 1.73 GHz on that process. The Pentium 3 is very cool, but at the same time it's speed limited. Heat isn't the only thing limiting clock speed.

Even the much improved Banias runs at only 1.6 GHz, and while we don't know how fast it can ultimately run, I think it inherits the same limited clock speed of the Pentium 3, while the Pentium 4 easily runs at 3.4 GHz on the same process technology.
We also know from the regular 0.13 micron Pentium 3s (which were made for the server market) that they didn't overclock to the point where they would come close to the P4 on the same process (although they beat the 0.18 micron P4, which was the only P4 available at the time).


This post is best viewed with common sense enabled
April 10, 2004 11:24:53 AM

Pretty much agree with you. It will be interesting to see how future Dothan-based desktop chips clock and perform, but IMHO it will be difficult for them to reach current high-end CPUs' performance levels and clock speeds beyond 2 GHz (which, IMHO, will be required to catch the K8 and P4).

OTOH, Banias/Dothan has one very big advantage over K8 and P4 you seem to overlook: an incredibly small die. The core is really, really tiny. If I'm not mistaken (going by memory here), even with the 1 MB cache, Banias is some 85 mm² on 0.13 micron. A Northwood with the same amount of L2 would be roughly twice as big.

Combine this with the very low power requirements, and you have pretty much the ideal core for a multicore implementation. Looking into my crystal ball, I don't see single-core Dothan-based chips overtaking NetBurst or K8 chips in raw performance, but a multicore chip could be very competitive within a given die size and power envelope. Hyperthreading is helping to pave the way for software to take advantage of it, too.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 10, 2004 3:19:14 PM

I agree. Hyperthreading is a really good way to introduce multithreaded programming, and that is exactly the kind of software that will benefit most from the move to multicore CPUs.

So Hyperthreading actually stands to the multicore transition the way AMD64 stands to the 64-bit transition: it's a way to smooth things out. Going multicore would bring substantial costs with little short-term benefit if it weren't for multithreaded software... which is now being developed, even if it's far from effortless, because of the introduction of HT. And going 64-bit without a solid 32-bit implementation isn't desirable either (***cough*** Itanium ***cough***)!

What I find a little annoying is that programmers sometimes complain that HT requires reprogramming while AMD64 requires just recompiling. Well... at some point, it will become necessary to use properly threaded software just to exploit the hardware you've got! So programmers should get used to the idea...

Multi-core is at least as promising as 64-bit, anyway. It's still a little further down the road, though.

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
April 10, 2004 3:39:24 PM

Could the P3 core, on its single-data-rate 133 MHz bus, even have handled HT? Perhaps they didn't even have the paperwork for HT back then.


Intel probably saw the P4 as a chip they could take up to insane clock speeds... and they knew that big numbers are what uninformed consumers will buy, no matter what the actual performance is.


When I say consumers, I mean your average 45-year-old mom who wants to chat on MSN with her daughter overseas, and who also uses her computer for Windows multimedia applications like movies and Flash animations... not people like us, who mostly play FPU-intensive 3D games.


And really, the P4 has achieved that: selling more chips.

-------
<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please don't click here! </A>
April 10, 2004 5:31:26 PM

>What I think is a little annoying is that sometimes
>programmers complain that HT requires reprogramming, and
>AMD64 just recompiling.

That is fairly logical. Recompiling really is just that (assuming you made your code 64-bit safe; if not, blame yourself, but it still shouldn't be much work). Creating multithreaded apps is in a completely different league. You have to rethink and redo your software down to the algorithm level. That means: throw everything away and start over, for most things. A lot of computationally heavy problems are also simply hard, if not impossible, to parallelize, because the problem is sequential in its very nature. Maybe someday compilers will be smarter than developers and do this automatically, but we're not quite there yet.
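To illustrate the distinction (a toy Python sketch, not anyone's actual code): the first loop below could be split across cores with no coordination at all, while the second carries its result from one step to the next and stays serial no matter how many cores you throw at it.

```python
def independent(data):
    # Each element is computed on its own -- chunks of this loop
    # could run on separate cores with no coordination.
    return [x * x for x in data]

def recurrence(data):
    # Each step needs the previous result, so this loop is
    # inherently serial regardless of the core count.
    acc = 0
    out = []
    for x in data:
        acc = acc * 2 + x
        out.append(acc)
    return out

print(independent([1, 2, 3]))  # [1, 4, 9]
print(recurrence([1, 2, 3]))   # [1, 4, 11]
```

The second shape is what makes some workloads so resistant to multicore: the dependency chain itself is the algorithm.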

>Multi-core is at least as promising as 64-bit, anyway. It's
>still a little further down the road, though.

You can't compare them. 64-bit is just something we'll have to adopt at some point to overcome limitations that become increasingly restrictive. It's a relatively straightforward process as well: just change the compilation flags and you're done. Either way, it's inevitable at some point. Multicore, OTOH, is just another way to increase the performance of a CPU. If other ways were found to extract the same performance for a given transistor count, die size, and power envelope, no one would miss it; it's not as inevitable.

Either way, I look forward to both.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 10, 2004 5:43:27 PM

Quote:
A lot of computationally heavy problems are also simply hard, if not impossible, to parallelize, because the problem is sequential in its very nature.

Yes, indeed, but I also think there are computationally heavy problems that are more easily parallelizable, and for those, parallelism is the logical course of action. Take graphics: no single serial processor could be built to handle that workload, so current video cards use parallel pipelines instead.

At some point, a smarter approach than a single monolithic processing unit will have to be created. I don't know if this will indeed be multicore, but multicore is a step forward. Maybe they'll enhance the inter-processor communication to the point where it all gets much smoother; then we'd have dual- or quad-core processors so well blended together that they'd almost act as a single unit, and compilers so perfectly tuned for them that parallelization would require less effort. Hardware enhancements might bring about a truly great platform, if they manage this.

If you consider multitasking and some multithreaded applications, the performance increase you can actually get right now from current cores in multicore configurations is much higher than what you'd get from perfecting the individual cores or even designing new ones. Unless, of course, you create some sort of supercore... which wouldn't be easy to design, mind you...

<i><font color=red>You never change the existing reality by fighting it. Instead, create a new model that makes the old one obsolete</font color=red> - Buckminster Fuller </i>
April 10, 2004 6:06:50 PM

That's a good point! It's nice to see Intel and AMD improve the whole processing platform and not just keep ramping up the GHz. I'm curious too how Dothan will perform in the future. One little bit to add: isn't it strange that with the new Intel numbering, the P4EE and the Dothan are both in the highest category? A strange place for a low-GHz chip that's mostly based on the P3 design... PLUS, I find it interesting that the Celeron based on Dothan is much superior to the desktop-variety Celeron... hmm...

"I speak as neither an AMD or Intel fanboy, just giving the facts."
April 10, 2004 6:29:13 PM

>Yes, indeed, but I also think that there are
>computationally heavy problems that are more easily
>parallelizable and that that is the logical course of
>action

But there is a problem here. Some apps would benefit hugely from dual or quad cores, others not at all. Would you be willing to accept a performance drop below even today's CPUs for those problems/apps that are (nearly) impossible to parallelize? I'm sure that depends on the apps, but even today precious few people would prefer a dual 1.2 GHz machine over a single 2 GHz one, even if cost weren't an issue. Few apps would benefit either, so if you have to compromise single-threaded performance to enable multicore, it's going to be a tough sell.
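The dual 1.2 GHz vs. single 2 GHz trade-off is basically Amdahl's law. A quick back-of-the-envelope sketch in Python (the parallel fractions are made-up numbers, purely for illustration, and clock speed is treated as a stand-in for per-core performance):

```python
def effective_speed(clock_ghz, cores, parallel_fraction):
    """Amdahl's law: the serial part runs on one core, while the
    parallel part scales (ideally) with the core count."""
    serial = 1.0 - parallel_fraction
    return clock_ghz / (serial + parallel_fraction / cores)

# An app that is only 50% parallelizable:
single = effective_speed(2.0, 1, 0.5)  # 2.0 -- extra cores don't matter
dual = effective_speed(1.2, 2, 0.5)    # 1.2 / 0.75 = 1.6

print(single, dual)  # the single fast core wins here

# A 95% parallelizable app flips the result:
print(effective_speed(2.0, 1, 0.95), effective_speed(1.2, 2, 0.95))
```

With only half the work parallelizable, the dual 1.2 GHz setup behaves like a 1.6 GHz chip and loses to the single 2 GHz core, which is exactly why the poorly-threaded apps make multicore a tough sell.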

Case in point: I expect (but could obviously be wrong) that multicore Dothan derivatives will be slower in single-threaded apps than single-core P4s/K8s, while faster (probably MUCH faster) in multithreaded apps. I think A64/Opteron holds better cards there: assuming it's the same core, and they don't have to reduce the clock frequency (for power reasons), there is no reason to assume a multicore K8 (K9) would be slower in single-threaded apps.

>Maybe they'll enhance the interprocessor communication to a
>point where it all gets much smoother

I think Opteron, with its HTT and ODMC, is already a huge leap forward in this regard. It hardly matters whether you are designing a single-CPU board or an 8-way board; it's about as easy. Sun is also working hard on this, both with their upcoming "throughput computing" chips and their "wireless" chip interconnect. Imagine Opteron ever getting that: no need for complex motherboards, not even traces for HTT, just put the chips next to each other and let them communicate at chip speed. That would be as good as it gets, really.

>and compilers so perfectly tuned for them that
>parallelization would require less effort

I'm sceptical of this, very sceptical. If you see how hard it already was to get compilers to vectorize simple loops for SSE2... multithreading is in a completely different league of difficulty; it's really not something a compiler could ever do, IMHO. You'd have a better chance of making this easier by using development tools, or even languages, better suited to that sort of development, but the bulk of the difficulty will still fall on the developer, and their skills are harder to upgrade than a CPU :)
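To make that concrete, here is a minimal Python sketch of what the developer, not the compiler, ends up writing even for a trivially parallel sum of squares (the worker count and chunking scheme are arbitrary choices of mine, which is exactly the point: someone has to make them):

```python
import threading

def sum_squares_chunk(data, parts, idx):
    # Each worker handles its own slice and writes to its own slot,
    # so no locking is needed for the partial results.
    parts[idx] = sum(x * x for x in data)

def parallel_sum_squares(data, workers=2):
    # The split points, the thread management, the merge step --
    # all of this is the programmer's problem, not the compiler's.
    chunk = (len(data) + workers - 1) // workers
    parts = [0] * workers
    threads = []
    for i in range(workers):
        t = threading.Thread(
            target=sum_squares_chunk,
            args=(data[i * chunk:(i + 1) * chunk], parts, i))
        t.start()
        threads.append(t)
    for t in threads:
        t.join()
    return sum(parts)

print(parallel_sum_squares(list(range(10))))  # 285, same as the serial sum
```

And this is the easy case: no shared state, no ordering constraints. Real applications rarely decompose this cleanly, which is why the restructuring burden lands on the developer.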

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 10, 2004 6:51:12 PM

>One little bit to add, is'nt it strange that with the new
>Intel numbering, the P4EE and the Dothan both are in the
>highest category

Not that strange, since both will be sold as top of the line in their intended markets. It doesn't mean Dothan will outperform the P4EE, but (in Intel-speak) the P4EE will be the ultimate desktop chip and Dothan the ultimate notebook chip.

>PLUS, I find it interesting that the Celeron based off the
>Dothan is much superior to the desktop-variety celeron...
>hmm...

Not sure what you're saying here, but I am fairly confident a Dothan-based Celeron would be a FAR more desirable chip than any P4-based Celeron. I really hope Intel drops the latter, which is an incredible POS of a chip, and replaces it with a Dothan/Banias design. It's smaller, and therefore cheaper to produce, as well as cooler, and it most likely offers far better performance. It should also be pin compatible. One downside to this strategy is that Intel would no longer be able to recover faulty P4s (not such a big issue, I think), but the bigger danger is that notebook designers would be smart enough to take desktop Dothan-based Celerons and use them instead of the much more expensive mobile Dothans. Disabling SpeedStep and upping the vCore would help reduce this threat, but certainly not eliminate it.

= The views stated herein are my personal views, and not necessarily the views of my wife. =
April 10, 2004 6:54:04 PM

What I meant about the Dothan numbering was that it seems to be positioned as a "superior" product over the P4, which is numbered in a lower class. That's what I was pointing out; I wasn't comparing it to the EE. Sorry for the misunderstanding.

"I speak as neither an AMD or Intel fanboy, just giving the facts."