SidVicious

Distinguished
Jan 15, 2002
1,271
0
19,280
First off, let's talk about the very few similarities between the two CPU architectures:

They're both x86 CPUs, and that's about as similar as they get.

And now the differences:

They're not pin-compatible, meaning that each requires a different socket, chipset, motherboard and cooling system.

Intel's Pentium 4 and Celeron CPUs currently use a 478-pin PGA socket that replaced the short-lived 423-pin socket; P4s will eventually migrate to the LGA 775 socket that is due this summer.

AMD's Athlon, Athlon XP and Duron CPUs use a 462-pin PGA socket; the first Athlons required a slot instead of a socket to accommodate their slow off-die L2 cache. The Athlon 64 and Athlon FX use 754- and 940-pin PGA sockets respectively. AMD just introduced its 939-pin PGA platform, which will replace Socket 754; Socket 940 will therefore be reserved for the Opteron CPUs, which are aimed at the workstation and server market.

Intel's Pentium 4 was originally designed around Rambus RDRAM and a quad-pumped bus; nowadays, P4 motherboards feature single- or dual-channel DDR-SDRAM chipsets.

AMD's Athlon, Athlon XP and Duron CPUs are based on the dual-pumped EV6 bus; single-channel DDR-SDRAM is enough to fulfill their bandwidth requirements, but dual-channel chipsets and motherboards are the norm.
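To put rough numbers on those buses, here's a quick back-of-the-envelope sketch. The 200 MHz base clock and 64-bit bus width are illustrative figures for an 800 MHz FSB P4 and a 400 MHz FSB Athlon XP, not the specs of every model:

```python
# Rough peak bandwidth = base clock * pumping factor * bus width in bytes.
# Illustrative figures only; both vendors ship CPUs with several bus speeds.

def bus_bandwidth(base_mhz, pump, width_bytes):
    """Peak front-side bus bandwidth in MB/s."""
    return base_mhz * pump * width_bytes

# Pentium 4: 200 MHz base, quad-pumped, 64-bit (8-byte) bus -> "800 MHz FSB"
p4 = bus_bandwidth(200, 4, 8)    # 6400 MB/s, matches dual-channel DDR-400

# Athlon XP: 200 MHz base, dual-pumped EV6, 64-bit bus -> "400 MHz FSB"
axp = bus_bandwidth(200, 2, 8)   # 3200 MB/s, matches single-channel DDR-400

print(f"P4 quad-pumped bus : {p4} MB/s")
print(f"Athlon XP EV6 bus  : {axp} MB/s")
```

That's why a single channel of DDR-400 can already saturate the Athlon XP's bus, while a P4 needs dual-channel memory to keep its quad-pumped bus fed.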

Socket 754 Athlon 64s feature an on-die single-channel memory controller and are picky about memory modules; Socket 939/940 Athlon 64 and FX chips use an improved dual-channel on-die memory controller. Socket 940 Athlon FX and Opteron processors require expensive and rare registered DDR-SDRAM, while CPUs based on Sockets 754 and 939 both use cheaper and easily available DDR-SDRAM.

Intel and AMD both layer similar yet different instruction-set extensions on top of their common x86 code, and their CPUs don't handle logic operations and cached data in the same way.
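If you're curious which extensions your own chip exposes, here's a minimal sketch that just reads the kernel's report (Linux-only, assumes a readable /proc/cpuinfo; the flag names below are common examples, not an exhaustive list):

```python
# Minimal sketch: print the vendor string and a few instruction-set extension
# flags the Linux kernel reports for the first CPU in /proc/cpuinfo.

def cpu_vendor_and_flags(path="/proc/cpuinfo"):
    vendor, flags = None, []
    with open(path) as f:
        for line in f:
            if line.startswith("vendor_id"):
                vendor = line.split(":", 1)[1].strip()
            elif line.startswith("flags"):
                flags = line.split(":", 1)[1].split()
                break                      # first CPU is enough
    return vendor, flags

vendor, flags = cpu_vendor_and_flags()
print("Vendor:", vendor)                   # e.g. GenuineIntel or AuthenticAMD
for ext in ("mmx", "sse", "sse2", "3dnow"):
    print(f"  {ext:6s} {'yes' if ext in flags else 'no'}")
```

SSE2 started out as an Intel extension and 3DNow! as an AMD one, which is exactly the "similar yet different" situation above.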

Intel chose to market its CPUs based on the frequency at which they operate; AMD uses a model-based rating system. LGA 775 P4s will most likely be marketed by model number as well.
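The reason a model rating isn't pure marketing is that delivered performance is roughly instructions-per-clock times clock speed. A tiny sketch with made-up IPC numbers (purely illustrative, not measured figures):

```python
# Very rough model: relative performance ~= IPC * clock frequency.
# The IPC values below are invented for illustration, not measurements.

def relative_performance(ipc, clock_ghz):
    return ipc * clock_ghz

p4_3000  = relative_performance(ipc=1.0, clock_ghz=3.0)    # high clock, lower IPC
axp_2800 = relative_performance(ipc=1.4, clock_ghz=2.083)  # lower clock, higher IPC

print(f"Hypothetical P4 3.0 GHz      : {p4_3000:.2f}")
print(f"Hypothetical Athlon XP 2800+ : {axp_2800:.2f}")
# Similar totals despite a ~1 GHz clock gap, hence the "2800+" style rating.
```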

Each architecture tends to outshine the other depending on the task being performed; even a combination of unbiased benchmarks can't tell which is best overall, and nobody agrees on a benchmarking procedure anyway. It's all about PR, IMHO.

I probably forgot to mention a lot of stuff, but this post is getting waaaay too long for my taste (I already edited it thrice for typos and omissions). Please refer to <A HREF="http://users.erols.com/chare/" target="_new">This Page</A> for more information about Intel's and AMD's CPUs.


I hope I answered your question properly, just reply if I didn't!




Fok Speling Misstake
 

scottchen

Splendid
Jun 3, 2003
5,791
0
25,780
Ahh, I admire you, you actually had the patience to explain. Here's my explanation:

GO TO INTEL AND AMD's WEBSITES!!

<A HREF="http://forums.extremeoverclocking.com/myrig.php?do=view&id=17301" target="_new">My PC</A>
 

SidVicious

Distinguished
Jan 15, 2002
1,271
0
19,280
Bah, I just thought it was worth the time. BTW, AMD's and Intel's websites are as biased as you can get; <A HREF="http://www.google.com" target="_new">Google</A> is your friend when it comes to finding more tempered sources =)




Fok Speling Misstake
 

Snorkius

Splendid
Sep 16, 2003
3,659
0
22,780
Why is it that someone will post 'bla bla, look at my config at http//www.this.incredibly.long.nonsensical.name.of.some.shite.site.with.a.bunch.of.random.numbers.at.the.end.kind.of.like.539144&page=0&view=what=&sb=5&part=1&vc=1-dgh.or.maybe.cli=7423!3.etc.etc.etc.'

but then feel the need to post a link to <A HREF="http://www.google.com" target="_new"><i>Google</i>?</A>


<A HREF="http://www.google.com" target="_new"> <i>Google</i>? </A> Seriosly, name one person that does'nt know the site.
Even my dog(I don't have a dog) has it bookmarked, and it prolly does'nt even know that the world isnt flat.

Maybe it does. After all, google knows all!

<font color=blue>If I found the hidden fountain. Drank the wisdom from it's deep.
Would I have the time to save me. Would I have them both to keep...</font color=blue>
 

SidVicious

Distinguished
Jan 15, 2002
1,271
0
19,280
I'd rather err on the safe side and post a link to <A HREF="http://www.google.com" target="_new">Google</A> than let a poor newcomer try to find relevant info with AOL search...

Knowing that my sig doesn't lead to my rig config and benchies, I may as well consider changing it to a <A HREF="http://www.google.com" target="_new">Google</A> clickie =)

Really, thanks for the input, bro!




Fok Speling Misstake
 

rower30

Distinguished
Dec 16, 2002
264
0
18,790
June 18, 2004


I find it amusing that Intel and AMD CPU fans take sides in what is really an engineering argument. If we all really were engineers, what argument would there be? Facts are facts, and simple application studies can determine the best processor for your tasks. What “choice” is there but the right one, after all? So I say there is no AMD versus Intel war. The question is: what choice should you make?

Intel and AMD depart from each other most significantly in the depth of their prediction pipelines, and in the memory needed to keep those pipelines full. Intel CPUs, P4s for sure, like lots of closely coupled cache memory to supply a speedy but deep prediction pipeline. Why is it speedy, and why does it need so much memory? Imagine that the CPU knows I want a glass of milk every time I eat a sandwich. Not only will the CPU/cache fill a glass FULL of milk (lots of memory to supply ALL of the predicted milk) out of fast local memory (the fridge), it will also do it as fast as it can, at full CPU clock rate. But what happens when I don’t want that much milk (I can’t drink 32 oz!), or I switch to coffee right in the middle of my sandwich? The CPU has to start a reverse-course flushing process: get rid of the milk, run back to the fridge (local memory), find coffee in the local fridge or possibly go farther to the grocery store (RAM), and then run all the way back to the table and start pouring coffee. Not only does it waste time if I change my drinking taste, the process can’t just stop pouring milk immediately. It has to finish what it started over time, much as the last remaining stream of milk must fall through the air into the glass. Since the CPU’s branch prediction is so sure you want milk, the stream is fast and the pitcher is held pretty far from the glass. It has no knowledge of your picky taste. Once it gets moving forward, it wants to stay moving smoothly and quickly forward in a big way; no sense sloshing all that milk around! This is great if you do repetitive tasks, like drinking milk with a sandwich ALL the time. CPUs like repetition. Maybe that’s why artificial intelligence is artificial: the machine can simply fix mistakes faster than humans can notice them. The machine doesn’t really “know” what you want at all, it just predicts it.
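Since we're talking about prediction, here's a toy sketch of how a simple 2-bit branch predictor "learns" repetition. It's the textbook simplification, not either vendor's actual predictor design, but it shows why CPUs love repetitive behaviour:

```python
import random

# Toy 2-bit saturating-counter branch predictor, a classroom simplification,
# not Intel's or AMD's real hardware.

def predict_run(outcomes):
    """Return the fraction of branch outcomes the toy predictor guesses right."""
    state, hits = 2, 0                 # states 0-1 predict "not taken", 2-3 "taken"
    for taken in outcomes:
        hits += (state >= 2) == taken  # did the guess match reality?
        state = min(state + 1, 3) if taken else max(state - 1, 0)
    return hits / len(outcomes)

# "Milk with every sandwich": a loop-like branch, taken 9 times, then not taken.
repetitive = ([True] * 9 + [False]) * 1000

# Coin-flip behaviour: there is nothing for the predictor to learn.
random.seed(1)
erratic = [random.random() < 0.5 for _ in range(10_000)]

print(f"Repetitive branch: {predict_run(repetitive):.0%} guessed right")
print(f"Erratic branch   : {predict_run(erratic):.0%} guessed right")
```

The repetitive pattern is guessed right about 90% of the time, the coin-flip one only about half the time, and every wrong guess means pouring the milk back.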

An AMD processor is a little different in that its pipeline is shorter and not so deep. The CPU runs at roughly two-thirds the clock rate of an Intel CPU but does more work per clock cycle. And the CPU doesn’t pretend to be all that smart: it meters out a little milk at a time into a smaller 8 oz glass, so it doesn’t need a huge fridge (a huge local memory) to supply a large amount of milk. It can also walk back and forth from the fridge to the dinner table much faster than the Intel part, so if you CHANGE your mind, it’s very efficient at swapping milk for coffee. It has less milk to rid itself of, it gets to the fridge faster, the fridge is smaller so coffee is easier to find, and the trip back to you is quicker. Because the CPU does more work per clock cycle than an Intel part, it makes changes easily with much less need for a large, expensive local memory; a smaller fridge and a fast host are fine. But this is a disadvantage IF you want a LOT of milk, the smaller fridge runs dry, and the CPU has to go out to RAM to get more.
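To turn the analogy into rough numbers, here's a sketch of how pipeline depth feeds into misprediction cost. The stage counts and miss rate are ballpark illustrative figures (roughly 20 stages for a Northwood-class P4, around 10 for a K7 Athlon), not official specs:

```python
# Toy model: cycles lost to branch mispredictions over a stream of branches.
# Penalty per miss ~= pipeline depth, since the whole "glass of milk" gets
# flushed and refilled. All numbers are rough illustrative figures.

def flush_cost(branches, miss_rate, pipeline_depth):
    """Approximate cycles wasted refilling the pipeline after bad guesses."""
    return int(branches * miss_rate * pipeline_depth)

branches  = 1_000_000
miss_rate = 0.05                 # say 5% of branches are guessed wrong

deep    = flush_cost(branches, miss_rate, pipeline_depth=20)   # P4-style pipe
shallow = flush_cost(branches, miss_rate, pipeline_depth=10)   # Athlon-style pipe

print(f"Deep pipeline    : ~{deep:,} wasted cycles")
print(f"Shallow pipeline : ~{shallow:,} wasted cycles")
# The deep pipeline pays about twice as much per bad guess; it tries to win
# that back with a higher clock rate, which is exactly the trade-off above.
```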

The issue at hand is: do you run predictable tasks or not? Games are highly unpredictable as game speeds ramp up. Just what will you be doing, and when? Only the Shadow knows. When predictable tasks that need LOTS of the same data streams come into play, Intel rules. But those tasks seem fewer in number than the unpredictable ones we use most often. Or the differences on predictable tasks are measured in a few seconds and don’t really matter. Do I care if a video clip is encoded in 20 or 25 seconds? I do care if my computer stutters in games or in interactive tasks involving any eye-hand-monitor input. Watching a CPU crunch a set of numbers is not really a big deal if it’s within 5-10% between the two. For me, anyway.

Cost is not an issue for true performance users, because we pick the right CPU for the job and cost is secondary. But if you aren’t a performance user, I suggest that the short-pipeline AMD architecture will benefit you more often, on most applications, and for less money than an Intel part. Now that the 939 boards are out, the cost is even lower than before, with even faster memory speeds. AMD likes faster RAM because it has to go out to it more often than an Intel part does, but based on application tests that is still faster than flushing bad branch predictions out of a large local on-die cache.

Pride and assumed superiority can get in the way of good engineering. Why do we take sides so firmly when there is no side to take? One does this, the other that. But we take things “personally,” which seems to mean that I should be hurt to hear the truth. Why? If I take things as simply true or false, I learn something. If I defend a bad aspect of a CPU or any other product, I’m stuck defending a thinning circle of knowledge. I have a Mercedes. I didn’t make this car; it just owes me what it is, no more, no less. I bought what it is, not what it isn’t. An expensive car, for sure, but do I tattoo an MB logo all over my body and defend it to the death? No, I don’t. I sent a five-page letter to Mercedes and pointed out PAGES of things wrong with their car. You know what? They sent a 20-page letter back outlining 2,000 changes that will make the next car better! This is the way it is supposed to be: a good consumer who is realistic about the product, and a manufacturer who is, too. I’d have to say AMD is much better at this than Intel, or its customers. I’d say most people need anti-virus-enabled (no-execute) 64-bit computing with lower CPU costs much more than expensive Intel Extreme Edition P4 processors. I’d never buy a subzero 100-cubic-foot fridge, but the P4EE seems to be just that. Like lots of cold milk?

For BOTH manufacturers, I’d ask: who needs hot and leaky 90nm-die CPUs versus a larger die with less leakage and maybe two parallel CPU cores that really work? The me-too trap may consume AMD if it follows Intel’s lead before real low-leakage dielectrics can stem the wasted power in 90nm-die products. Not to mention that my Intel P4 locks up at random intervals with its secondary “virtual” core (Hyper-Threading) enabled, which speaks of great marketing and a flaky CPU. It just isn’t a benefit when it doesn’t work ubiquitously.

Those who hang around good people get better themselves. AMD users seem more knowledgeable about their processors, and they also feel comfortable admitting their weaknesses. Intel seems more like a Harley: once you paste it all over your body, most people feel kinda stuck defending whatever it is, or will become. Not only does the customer defend the lack of change, the manufacturer sees no point in providing any! With enough money you can fence change out until you HAVE to change, though; Intel and Microsoft can both use this tactic. Big companies make money, small ones make products.

There is legitimate Intel users who also understand their CPU. But most don’t really know why they really bought their CPU. I have a P4 3.0Ghz 533MHz bus 845 chipset system that is barely better than an AMD 2800+ Barton system at 1/3 MORE money. I built the Intel system when AMD did have thermal issue to work out. But AMD has worked them out and their current crop of CPU’s are well suited for short branch prediction users. With the 939 socket boards taking either 32 or 64 bit processors with more economical and faster dual memory, it seems AMD has done well for itself.