
AXP 2700+ and 2800+ tested on THG

Anonymous
October 1, 2002 12:01:51 PM

But only in German so far... go here:
<A HREF="http://www.de.tomshardware.com/cpu/02q4/021001/index.ht..." target="_new"> Toms hardware Guide DE </A>
No, this is not a spoof this time...
Pleasantly surprised by its performance, I must say... let's hope nForce2 becomes available real fast!

= The views stated herein are my personal views, and not necessarily the views of my wife. =
October 1, 2002 3:20:04 PM

Now it's also available in English. [H]ardOCP has also reported on it, as have so many other sites.

I'm worried about the claim that the KT266A won't support the 166 FSB because its "power regulator is not enough". What does that mean? I hope the ONLY issue is that you start at a 133 FSB ... and can then just overclock as near as you can to a 166 FSB, as far as our mobos allow.

I already posted this in another thread, but does anyone know what the "voltage regulator issue" means? Is it true?


DIY: read, buy, test, learn, reward yourself!
October 1, 2002 3:43:15 PM

I just caught THG's English version, and while I'm impressed, I have to say that it looks to me like <i>most</i> of the performance gain comes from the nForce2. I'd like to see a review on another motherboard to make sure, though.

I mean, sure, the new Athlon 2800+ performs well. The 166MHz FSB is <b>way</b> overdue though. And the heat that this little bugger generates sets an all new record for AMD. Combine that with their all new record for the smallest die size, and you're going to need one hell of a cooler.

But then complicate that with the following:
Quote:
<font color=red>The sample processor does reveal some special aspects, however: for all of the CPUs with the Thoroughbred B core (from Athlon XP 2400+ up to XP 2800+), the thermal diode doesn't work. This means that when there's a cooler defect, the motherboard will not be able to protect the CPU. A correct measurement of the die temperature is not possible.</font color=red>


Does anyone know if any other site has verified or disproven this problem that THG mentions ever so casually? If so, that's a <b>huge</b> concern in my opinion. With the amazing heat output and incredibly small surface area of the AXP 2800+, having no more thermal protection is <b>not</b> a good thing.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
Anonymous
October 1, 2002 4:46:14 PM

>and while I'm impressed, I have to say that it looks to
>me like most of the performance gain looks to be from the
>nForce2. I'd like to see a review on another motherboard
>to make sure though.

Go read Aces'... they compare to a KT333, for those tests where they could make their nForce2 work at all... it kinda confirms your suspicion.

>And the heat that this little bugger generates sets an
>all new record for AMD. Combine that with their all new
>record for the smallest die size, and you're going to
>need one hell of a cooler.

Yep, though I'm not too worried... my Swiftech handles a 1400 TBird with ease using a *very* slow Papst fan. A few watts more should not be too much of a problem. How does this compare to a 2.8 GHz P4 anyway? Or a 3+ GHz within a few months? I guess 80+W is just something we'll have to get used to in the near future, unless SOI or .09 micron do some miracles.

>Does anyone know if any other site has verified or
>disproven this problem that THG mentions ever so casually?

No, not that I am aware of. I suspect it's a motherboard/BIOS issue. Considering Aces' had quite a bit of trouble even getting their early nForce2 board to work at all, I doubt much has changed on the CPU side of things. Besides, the quoted comment isn't 100% accurate. Even if the on-die sensor did not work, a motherboard could still protect against a "cooler defect", that is, a fan failure. Most motherboards use their own thermal sensor anyway, and should be perfectly capable of protecting the CPU from a fan failure. Booting the system without a HS is another thing, but that's old news; not gonna restart that discussion.



= The views stated herein are my personal views, and not necessarily the views of my wife. =
October 1, 2002 4:58:24 PM

Obviously, I was very impressed with the 2800+ performance. It was clearly the fastest of them all, though that can mostly be attributed to the vastly superior performance of the nForce2 chipset. I was curious, though, how it would run on the nForce2 SPP chip instead of the IGP they used, since for some of us GF4 MX technology for graphics is rather a joke.

Is the new nForce chipset like the previous generation? That is, does it only work in dual-memory mode when using the IGP, or will the SPP still benefit? I really wish these guys had tested both models to show any difference in performance; though it was a CPU bench, the mobo played a key role in this particular test.
October 1, 2002 5:09:37 PM

Maybe this link can help show where the increase comes from:

http://www.anandtech.com/chipsets/showdoc.html?i=1719&p...

(sorry but I can't make it clickable)

Has anyone seen a cross comparison of mobos and CPUs to see the benefit from each element? Like:

a) KT333 mobo and XP2400+ overclocked to XP2800+ (using 166FSB)
b) nforce2 and XP2400+ overclocked to XP2800+ (using 166FSB)
c) KT333 mobo and XP2800+
d) nforce2 and XP2800+


DIY: read, buy, test, learn, reward yourself!
October 1, 2002 5:14:29 PM

When I saw your name and the thread title, "AXP review", I thought it was another Tom's Hardware Bribe review.

<b><font color=red> Long live piracy! </font color=red></b>
Anonymous
October 1, 2002 5:38:12 PM

For some weird reason Anand tested the nForce2 in its 64-bit incarnation. This explains his much worse results compared to Tom's and Aces' findings. I'm very disappointed in Anand. Now he is the one showing 5 or so pages of SysMark2002. Even if you're not an AMD fanboy, it's hard to believe a series of "general purpose" benchmarks where a 2 GHz P4 outperforms the 2800+... combine that with a "castrated" 64-bit platform and one starts to wonder. Maybe I should make a satire of Anand one of these days...

= The views stated herein are my personal views, and not necessarily the views of my wife. =
Anonymous
October 1, 2002 5:53:54 PM

>Does anyone know if any other site has verified or
>disproven this problem that THG mentions ever so
>casually? If so, that's a huge concern in my opinion.

Relax... here is what I read over at hardocp:

"Just as a note, we explained that we were having some BIOS issues with the 2800+ on some of our test boards. The 2800+ sample that we have in-house for testing is just that, an engineering sample and does not have an operational internal diode for registering the core temperature. Many new boards will not boot with this feature not working. AMD assures us that production 2800+ CPUs will have the internal diode operational."


= The views stated herein are my personal views, and not necessarily the views of my wife. =
October 1, 2002 6:14:24 PM

What bothers me in this article is the fact that we're comparing a FUTURE-release motherboard and a FUTURE-release CPU to a current/current Intel configuration.

I'm not taking any sides here, but also consider whether the future Athlons (2700 and 2800) will have real room for overclocking with their significant operating temps and 166FSB.

I just feel that, in fairness, the article should have also simulated an upcoming Intel power combo.




<font color=purple><i>Smokey McPot - Your Baby's Daddy</i></font color=purple>
October 1, 2002 6:31:03 PM

Quote:
Relax.. here is what I read over at hardocp:

Cool. I was just about to check there. Now I don't have to. :)  And I hope they're right.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 1, 2002 6:32:39 PM

Just click: <A HREF="http://www.tomshardware.com/cpu/02q3/020909/index.html" target="_new">http://www.tomshardware.com/cpu/02q3/020909/index.html</A> .
There is the article about the future of the P4.

BTW, I agree this is quite an unfair comparison of a CPU to be released in three months to a CPU available for some days/weeks now. In the P4-article shown above, they do use overclocked AMD's to compare ...

Anand might be screwing up with the 64-bit mode, but Tom's is not that nice either ...

Greetz,
Bikeman

<i>Then again, that's just my opinion</i>
October 1, 2002 7:02:28 PM

I'm just hoping that THG is wrong about the 2800 not showing up until 2003. I hope I misunderstood that. A paper launch is one thing, but for three months? That is ridiculous.

<font color=blue>Unofficial Forum Cop</font color=blue>
Anonymous
a b à CPUs
October 1, 2002 7:35:48 PM

I can only try to imagine the size of the cow the AMDheads here would have if THG tried publishing simulated, or interpolated results.

Not saying it wouldn't be entertaining though.
October 1, 2002 7:47:55 PM

As THG said, I hope these new processors put some new competition in the CPU market. The competition is what drives prices down and performance up.

In a world without <font color=red>walls </font color=red>or <font color=green>fences </font color=green>, what use have we for <font color=red>Windows </font color=red>or <font color=green>Gates.</font color=green>
October 1, 2002 7:51:12 PM

I have to say that, finally, AMD has caught up to the performance of the P4. But the 2.8 P4 is readily available, while the new XPs really won't be out till the Christmas season. By then, we'll have the 3.06 HT-enabled P4, which will no doubt be performance king until the Clawhammers arrive.

The article, though, is ridiculous. The writer(s) clearly state(s) that the Athlon XP2800+ is the new performance king. That's rubbish; if you look at ALL the benchmarks, the XP2800+ appears to be about equal in performance with the 2.8 P4. Yet again, this shows how some of the THG writers are biased towards AMD. If you look at Anand's article, he says that the XP 2800+ has caught up in terms of performance, but <b>has not</b> regained the performance crown. As always, Anand has done another nice, unbiased article.

- - -
<font color=green>All good things must come to an end … so they can be replaced by better things! :wink: </font color=green>
October 1, 2002 7:53:19 PM

Quote:
Go read Aces'.. they compare to a KT333 -for those tests where they could make their nforce2 work at all.. it kinda confirms your suspicion.

It was a great read, even if their benchmarking of the nForce2 was rather sporadic. Thanks for the heads up.

And it did show that without the nForce2, the AXP 2800+ would have looked pretty bad in benchmarks. I swear, that nForce2 gives the equivalent of an extra 200+ to 300+ in performance. Sometimes even more! I hope that the retail versions of nForce2 mobos perform just as well. (And get a LOT more stable.)

It definitely shows that VIA is hindering AMD's performance badly. Of course, I'm not sure if AMD's chipsets do any better... AMD <i>really</i> needs to put more into their chipsets. It's a shame to see the CPU's potential being practically wasted. Well, hopefully nVidia will help that out. :)

Quote:
Yep, though Im not too worried.. my Swiftech handles a 1400 Tbird with ease using a *very* slow Pabst fan. A few watts more should not be too much of a problem. How does this compare to a 2.8 Ghz P4 anyway ? Or a 3+ Ghz within a few months ? I guess 80+W is just something will have to get used to in the near future, unless SOI or .9 do some miracles.

It isn't just the 80+W though to worry about. Surface area is a very important factor as well. In this respect, Intel has a considerable advantage on heat sink requirements.

The AXP2800+ puts out 74.3W of heat and the die is 84mm2. The P4b 2.8GHz puts out 68.4W of heat and the die is 146mm2. That means (if I did my math right) that the P4b 2.8GHz puts out almost <i>10% less</i> heat, but also has almost <i>75% more</i> surface area.

So basically, the AXP 2800+ has to get rid of more heat with a <i>lot</i> less surface area to distribute it across. This means that the AXP needs a <i>much</i> better transfer of heat to the heat sink. At this rate we'll see lapped silver baseplates. Heh heh.
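For what it's worth, that back-of-the-envelope comparison checks out. Here's a quick sketch using the TDP and die-size figures quoted above:

```python
# Heat output vs. die area for the two chips, figures as quoted in the post.
axp_watts, axp_area = 74.3, 84.0    # Athlon XP 2800+: TDP (W), die size (mm^2)
p4_watts, p4_area = 68.4, 146.0     # P4b 2.8GHz: TDP (W), die size (mm^2)

heat_saving = (axp_watts - p4_watts) / axp_watts   # how much less heat the P4b puts out
extra_area = (p4_area - axp_area) / axp_area       # how much more die area the P4b has

print(f"P4b puts out {heat_saving:.1%} less heat")   # 7.9% ("almost 10%")
print(f"P4b has {extra_area:.1%} more die area")     # 73.8% ("almost 75%")

# The figure that matters for the heatsink is power density (W/mm^2):
print(f"AXP: {axp_watts / axp_area:.2f} W/mm^2")     # ~0.88
print(f"P4b: {p4_watts / p4_area:.2f} W/mm^2")       # ~0.47
```

So the AXP die has to shed nearly twice the heat per square millimetre, which is the real point here.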

What AMD really needs to do is significantly increase their die size. Yeah, it reduces how many dies you can get per wafer which will raise the cost of their CPUs and decrease production. However, if they don't, then we'll be needing cooling solutions that cost as much as (or more than) the CPU itself just to use their CPUs. Either way, the cost will be there. People laugh at how large the P4 is in order to compete with an Athlon, but as the CPUs put out more and more heat, I'm starting to think that Intel had the right idea. A larger die <i>is</i> better.

Quote:
Even if the ondie sensor would not work , a motherboard could still protect from a "cooler defect", that is a fan failure. Most motherboards use/have their own thermal sensor anyway, and should be perfectly capable of protecting the cpu from a fan failure. Booting the system without a HS is another thing, but thats old news, not gonna restart that discussion.

I'd like to fully agree with you here. In the past it'd have been a pretty logical point and I <i>would</i> agree with you.

However, with the die as small as it has gotten while producing ever more heat, I think it will soon reach a point (if it isn't already here) where even the motherboard's thermal sensor just won't be fast enough to save the die from burnout should the fan fail or the heat sink be mounted even slightly funny. Obviously some heat sinks will give more time to respond than others, but I think the AMD retail heatsink should be the balancing point used for this kind of measurement. (Since that's what any user who wants a CPU warranty will be using.)

I'm just hoping that the die's thermal diode will be fixed in the production batches. They have plenty of time to do it.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 1, 2002 7:57:22 PM

That's what I've been saying all this time. AMD needs to stop their crazy obsession with die sizes, and start making their CPUs more heat-efficient. Also, don't forget, the P4s generally run at a clock speed that's several hundred MHz faster than the Athlon XPs, yet output less heat.

EDIT: That's why AMD required SOI for their Hammer CPUs, so they wouldn't have problems with heat, power consumption, and ramping. According to AMD, the Clawhammer will put out 70W of heat <b>with SOI</b>. That means, if CH didn't have SOI, it would be as high as 100W. Imagine if the 2.8 P4 had SOI; then it would output only like 50W of heat.
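The scaling implied by those estimates (the 70W/100W Clawhammer numbers are the poster's guess, not AMD-confirmed) works out as follows:

```python
# Implied SOI power-reduction factor behind the post's estimates.
ch_with_soi = 70.0       # Clawhammer TDP with SOI, per the post
ch_without_soi = 100.0   # the post's guess for a non-SOI Clawhammer

soi_factor = ch_with_soi / ch_without_soi   # 0.7 -> SOI assumed to cut power ~30%

# Applying the same factor to the P4b 2.8GHz's 68.4W:
p4_with_soi = 68.4 * soi_factor
print(f"hypothetical SOI P4b: {p4_with_soi:.1f} W")  # ~47.9W, i.e. "like 50W"
```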

- - -
<font color=green>All good things must come to an end … so they can be replaced by better things! :wink: </font color=green>
<P ID="edit"><FONT SIZE=-1><EM>Edited by Dark_Archonis on 10/01/02 04:01 PM.</EM></FONT></P>
October 1, 2002 8:12:08 PM

Quote:
I have to say, that finally, AMD has caught up to performance of the P4. But, the 2.8 P4 is readily available, while the new XP's really won't be out till the christmas season. By then, we'll have the 3.06 HT enabled P4, which will no doubt be performance king until the clawhammers arrive.

Let's not forget, though, the 'possibility' that AMD may just paper-launch the AXP 2800+ just before Xmas, and no one will be able to buy one until mid-to-late January. So Intel may even be a speed step or two beyond the 3.06GHz HT-enabled P4b by then.

Quote:
The article, though, is ridiculous. The writer(s) clearly state(s) that the Athlon XP2800+ is the new performance king. That's rubbish, if you look at ALL the benchmarks, the XP2800+ appears to be about equal in terms of performance with the 2.8 P4. Yet again, this shows how some of the THG writers are biased towards AMD. If you look at Anand's article, he says that the XP 2800+ has caught up in terms of performance, but <b>has not</b> regained the performance crown. As always, Anand has done another nice unbiased article.

I don't know. On one hand I think that THG made the right call in naming the AXP2800+ faster than the P4b 2.8GHz. It's a marginal win at best, but I saw the AXP2800+ win enough that I personally consider it better than the P4b 2.8GHz.

<i>However</i>, you're right in that it certainly does <i>not</i> mean the AXP 2800+ has retaken the performance crown. I mean, the P4b 2.8GHz <i>is</i> available today, but the AXP 2800+ won't be available for months.

On top of that, I'm thoroughly convinced that without the nForce2 and really high-quality DDR, there's no way in hell the AXP 2800+ would have beaten the P4b 2.8GHz. In terms of the CPU alone, I really see it as AMD losing that round. Their processor rating system just doesn't compare well to a P4 anymore.

Yet, thanks to nVidia, AMD as a complete system has improved considerably in a short period of time.

So really, I guess my conclusion after reading a few articles on this 'release' is that nVidia is pulling AMD's balls out of the fire. Should Intel ever upgrade their FSB, RAM, and/or motherboard to reduce latency even more (maybe with DDRII @ 533MHz?), then AMD is going to lose their competitive edge again.

And, anyone with a KT333 or KT400 motherboard using the new AXP 2800+ won't come close to competing with a P4b 2.8GHz on an 850E with PC1066. (Or anything better, should something better come along for Xmas.)

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 1, 2002 8:27:47 PM

Quote:
That's what I've been saying all this time. AMD needs to stop their crazy obsession with die sizes, and to start making their CPU's more heat efficient. Also, don't forget, the P4's generally run at a clock speed that's several hundred MHz faster than the Athlon XP's, yet output less heat.

Yeah. I've read AMD folks talking about how Intel is going to reach speed limits because the clock signals will reach a point where the next speed ramp isn't physically possible. Yet AMD is going down the path of making their next speed ramp thermally impossible. Heh heh. Hopefully at some point one company or the other will learn to just increase the die size again to put in more components (thus increasing IPC so that clock speed can drop again) and improve the thermals so that we won't need water cooling just to run our PCs.

Quote:
EDIT: That's why AMD required SOI for their Hammer CPU's, so they would have any problems with heat, power consumption, and ramping. According to AMD, the Clawhammer will put out 70W of heat with SOI. That means, if CH didn't have SOI, it would be as high as 100W. Imagine if the 2.8 P4 had SOI, then it would output only like 50W of heat.

I still think that Intel should work on furthering the performance and availability of their Ultra Low Voltage Celeron. At 650MHz it only takes 1.1V and outputs 7W of heat. Now, I know that's not much performance these days, but if Intel were to make a die four times as large, I'm sure that they could make one hell of a powerful yet not power-hungry CPU. Huge and expensive, but easy to power and easier to cool. I think it'd go over well enough in rack servers, and who knows what kind of desktop innovations we might see trickle down from that.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 1, 2002 8:30:17 PM

I agree that calling 2800+ the performance king is a load of 'BS' ;) 

If you look at all the benchmarks and average it out, P4 2.8 and AXP 2800+ are each ahead slightly about 50% of the time. Almost dead even. <b>And</b>, this is a <b>future</b> AMD CPU with a <b>future</b> chipset, compared to a <b>current</b> Intel chip!

The P4 3.06GHz w/HT is supposed to be out in November. That's a good solid two months before the AXP 2800+... meaning by the time the 2800+ comes out, it won't be the fastest; it'll be behind.


This is the 2200+ all over again. When the 2200+ was previewed, I forget what Intel had out at the time, but it was slower. AMD fanboys were all excited that the 2200+ was going to be a killer. By the time the 2200+ got out the door, Intel was well past 2.2 and had the performance lead.



If you want a fair comparison for the AXP2800+, a <b>future</b> product, use a <b>future Intel product</b>! I'd like to see a P4 3.6 w/ i850E compared to an AXP2800+, because as far as I know they will come out around the same time. Then which CPU would do better?

I believe the article "<A HREF="http://www.tomshardware.com/cpu/02q3/020909/index.html" target="_new">Hot Contraband: P4 at 3.6GHz</A>" answers this question. P4 3.6 utterly destroys even an AXP3400+ (2666MHz @ 166MHz FSB).



Oh, all that, plus personally I'm still quite dissatisfied with the heat output of the AXP. It's even worse now than before, and thanks to low heat output my P4 1.8A runs as a P4 2.56A! I'd like to see how overclockable an AXP2800+ is.

-Col.Kiwi
October 1, 2002 8:42:31 PM

"Yet, thanks to nVidia, AMD as a complete system has improved considerably in a short period of time."

Totally agree with your response, it was right on the money. nVidia, THANK YOU!

Meantime like you said, PC1066/P4 won't be taken down anytime soon.

I don't know if it's affecting others the same as me, but lately I find myself almost disinterested in actually purchasing upgrades (but of course still like to read about the progress made). I have an XP2000, an Asus A7V333, and an AIW 7500 and while I have some money to blow, I really haven't seen anything worth ordering (with the exception of the ATI 9700, but even my AIW handles UT2003 Retail like a champ). I think this nVidia board will change that. Finally, a possible upgrade that could be worthwhile!



<font color=purple><i>Smokey McPot - Your Baby's Daddy</i></font color=purple>
October 1, 2002 10:01:27 PM

The Athlon 2800 itself deserves the performance crown. The benchmarks that the P4 2.8 wins are mostly, if not completely, due to the 533 FSB. Look at the less memory-intensive benches and AMD has won it, no contest.

If you are an AMD fanboy, be happy. AMD is giving Intel a good run. Intel fanboys, be happy. The CPUs get cheaper and cheaper! =)
October 1, 2002 11:59:03 PM

Not to sound too much like a chip trooper, but the P4 only beat the Athlon considerably in the memory bandwidth tests and synthetic benches. It barely won in a few other tests. I was very surprised to see the score the 2800 got with LAME encoding; the winner in that bench is usually the clock speed king. I don't know much about the nForce2, but benches of the first nForce didn't show too much of an increase compared to other chipsets. The bench of the nForce 1 on Tom's wasn't too clear about a couple of things, like the dual memory channels: you only see that benefit if you use two sticks of RAM, and they didn't mention any of that. I don't think the nForce2 helped all that much; the first one was beaten by the other giants last time around.

how do you shoot the devil in the back? what happens if you miss? -verbal
October 1, 2002 11:59:04 PM

While the debate may rage over who is the performance leader, some things are worth mentioning.

1) Slower-clocked T-Breds on the nForce2... will we see a 166 FSB for them as well? Will it be easy to overclock them to a 166 FSB with new nForce2 boards? This would significantly change things for those looking to purchase the best system for their money on a budget, as most people do not buy the absolute top-of-the-line CPU.

2) Vindication for those who have contended that it was VIA holding back AMD. It's an absolute shame that VIA's new chipset is actually slower than the previous one... do they (VIA) take us for fools??????

3) Intel's biggest mistake may very well be not granting nVidia a P4 license. Rumor has it that nVidia is absolutely thrilled with Hammer. Would nVidia have as many resources available to develop AMD chipsets if they were working on Intel chipsets as well?

4) I am inclined to believe that an nVidia/AMD merger is more than just a bit likely... the question is when?

This being said, ALL paper launches suck. However, the look at the nForce2 does show promise, and I believe these boards will be available before the 2700+/2800+. And remember, kiddies: AMD's license for SSE2, I believe, begins Jan 1, 2003. Is it possible that maybe we will see a 166 FSB Barton with SSE2 and 512K L2? Could this in any part be a reason for the delay? Doubtful, but still possible.

It's not what they tell you, its what they don't tell you!
October 2, 2002 12:07:26 AM

"3) Intels biggest mistake may very well be not granting Nvidia a p4 license."

Intel has no need of a dual-DDR chipset from nVidia; they have their own coming out soon, and I bet it will make much better use of dual DDR than nVidia's. It could even be better than PC1066 Rambus. We'll see soon, I hope. Rocky beach had better be good now that I've said that.
October 2, 2002 12:12:24 AM

You seem to have missed the point. Whether or not they could produce a chipset faster than Intel's is moot, and is another debate entirely.

It's not what they tell you, its what they don't tell you!
October 2, 2002 1:46:34 AM

So how much of the performance is due to the jump to a 166 FSB, and how much is due to nForce2-specific optimisations and dual-channel DDR?

Damn paper launches, though. AMD deserves an ass-whipping for that. Here I am with my Epox 8K3A+, which is just begging for a 166 FSB processor (a Barton by preference).

I think it's good that Tom's have finally trimmed down their processor review charts, though. They were just getting too damn big and complex, especially with those strange exotic overclocked beasties in them.

<b>I'm Toms Hardware Guide Official Forum Strumpet! :cool: </b>
October 2, 2002 3:25:39 PM

Quote:
So how much of the performance is due to the jump to 166fsb, and how much is due to nforce2 specific optimisations and dual channel DDR?

Well, the KT333 and KT400 support a 166MHz FSB. So you can get their performance running the AXP 2800+ instead of an nForce2. Then compare that to the AXP 2800+ running on the nForce2. Ace's Hardware did a good <A HREF="http://www.aceshardware.com/read.jsp?id=50000304" target="_new">review</A> that tried to cover this very topic as well as give other insightful tidbits of information.

When you see the major performance gap between the KT333 and the nForce2 at Ace's Hardware, you begin to see that <b>most</b> of the performance gain is not from the 166MHz FSB. In fact, I'm not even sure that I'd attribute most of that performance gain to dual-channel DDR. It is in fact from the optimizations (or more accurately, correct usage of the technology) in the nForce2's memory controller.

After that, DC-DDR and a 166MHz were just the icing on the cake. :) 

However, Ace's Hardware also indicates that their nForce2 motherboard was <i>very</i> unstable. This instability could very likely be from such aggressive memory usage. If the nForce2 ends up having to handle the memory more gently (much like the KT333 and KT400 do), then the retail nForce2 boards may show no significant performance gain over a KT333 board. If that did indeed happen, then the AXP 2800+ would have no chance of being considered an equal in performance to the 2.8GHz P4b with PC1066, and Intel would again most definitely be the performance champion.

So hopefully, however nVidia fixes the nForce2 to be more stable for OEM and retail sale, it doesn't involve softening up the memory timings, because with softer memory timings, the DC-DDR and 166MHz FSB just won't be enough to let AMD keep its THG-granted crown.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 2, 2002 3:43:13 PM

Quote:
I don't know if it's affecting others the same as me, but lately I find myself almost disinterested in actually purchasing upgrades (but of course still like to read about the progress made).

I'm the opposite. I'm running on a Celeron 500 with 256MB of PC66 and a Savage4 PCI video card. I'm a software engineer looking to develop DX9 apps. I <i>need</i> an upgrade, bad.

Unfortunately, with the recent purchase of a new house and subsequent repairs on top of emergency dental work and car repairs, my 'new computer' budget is sorely drained. I think it currently sits at about $150 now. :( 

So I'm looking to Q1 for a new PC to replace poor old Cel, and I'm very intent on uncovering all the dirt I can on PC-type stuffus lately, because the usefulness of my next rig will depend heavily upon it. It should be one hell of an upgrade. ;)

But I have my nagging suspicions that the retail nForce2 won't be nearly as good as the engineering samples being reviewed currently because they might need to pad the memory timings to make it more stable, and that could utterly kill the whole nForce2 advantage over the KT333.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 2, 2002 4:58:25 PM

Quote:
1) Slower clocked t-breds on the nforce 2...will we see a 166 FSB for them as well? Will it be easy to overclock them to 166 FSB with new nforce2 boards? This would significantly change things for those looking to purchase the best system for their money when on a budget as most people do not buy the absolute top of the line CPU.

I doubt that AMD will sell 166MHz FSB versions of their older chips. They're having enough problems with production as it is. Heh heh.

As for it being easy to OC them to a 166MHz FSB, it's no easier with an nForce2 board than with a KT333 board. The KT333 supports a 166MHz FSB. So does the KT400. Unofficially, so does a KT266A if you use DDR333 RAM. I believe that for the most part it's a matter of unlocking the AXPs: because they don't have enough headroom, you have to lower their multiplier to OC their FSB. A motherboard won't make that any easier or harder. And we all know how complicated it is to unlock an AXP...

Though the slow T-BredBs might hit a 166MHz FSB without an unlock...
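To make the multiplier point concrete, here's a rough sketch (the XP 2400+'s commonly quoted stock setting of 15 × 133MHz is used as the example; treat the exact figures as illustrative):

```python
# Raising the FSB without overclocking the core means dropping the multiplier
# so that multiplier * FSB stays near the stock core clock.
stock_mult, stock_fsb = 15, 133.3   # Athlon XP 2400+ at ~2.0 GHz
target_fsb = 166.7

stock_clock = stock_mult * stock_fsb             # ~2000 MHz
new_mult = round(stock_clock / target_fsb)       # 12

print(f"stock: {stock_mult} x {stock_fsb} = {stock_clock:.0f} MHz")
print(f"at {target_fsb} FSB you need a {new_mult}x multiplier "
      f"-> {new_mult * target_fsb:.0f} MHz")
# Without an unlocked multiplier you can't make that 15x -> 12x change,
# which is why the motherboard alone doesn't help.
```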

Quote:
2) Vindication for those that have contended that it was VIA holding back AMD. Absolute shame that VIA's new chipset is actually slower than the one previous....do they (VIA) take us for fools??????

Has an ALi, SiS, or even AMD mobo done any better than a KT333? I don't think it's VIA that's holding AMD back. I think it's DDR SDRAM that has serious problems when used at high speeds. That aside, VIA is still crap in my book. ;)

Quote:
3) Intels biggest mistake may very well be not granting Nvidia a p4 license. Rumor has it that Nvidia is absolutly thrilled with Hammer. Would Nvidia have as many resources availble to them to develope AMD chipsets if they were working on Intel chipsets as well?

nVidia <i>did</i> work on an Intel chipset: the XBox's. If it weren't for the XBox, nVidia may <i>never</i> have gotten into chipset development, and AMD would now be screwed. Heh heh.

That aside, nVidia's motherboard prices are just plain too high. They're worth it, but because most people (and OEMs) are looking for cheap, cheap, cheap, nVidia gets passed over for motherboards. Besides, their dual-channel has been quirky to use, in most people's opinion. (According to what I've read and heard.)

Now, nVidia might have had a larger market for Intel DDR boards than for AMD DDR boards, had Intel ever allowed them, because people are just willing to pay more for an Intel system. So this did hurt nVidia, in my opinion.

Yet Intel certainly had the 850 and 850E for people who wanted superior RAM performance, and they still offered the misc. 845s for those who wanted DDR. So I don't think Intel has suffered much (if any) from their treatment of nVidia.

And as for the future, well, Intel has been silently bragging (if that makes any sense) about their dual-channel DDR plans for a while now. Sooner or later we're bound to finally see these products for sale. So by the time nForce2 catches on, Intel may very well have their own DC-DDR solution for the P4 that works just as well. If I were Intel, I'd be holding off on DC-DDR anyway and waiting for DDRII, since DDR seems to be rather unstable at high speeds and DC-DDR will probably only make that worse, or at the least suffer because of it.

So I think Intel was more or less unaffected by their treatment of nVidia, whereas nVidia was slightly more hurt by it. Given that, it would be no surprise at all for nVidia to favor AMD just because of hard feelings.

Quote:
4) I am inclined to beleive that an Nvidia/AMD merger is more than just a bit likely...the question is when?

I think when hell freezes over would be when. nVidia does quite well for themselves as they concentrate on their niche. I don't think they want to drag themselves down by merging with AMD. Besides, at some point it might be a conflict of interest for nVidia to do so if they ever have hopes for motherboards for Intel CPUs again.

AMD, on the other hand, could badly use the merger. They're in desperate need of someone who can show them how to market, as well as someone who can loan them enough cash and give them enough time to put up a new FAB and ramp up production to meet the new demand that better marketing would create.

Quote:
Is it possible that maybe we will see a 166 FSB barton with SSE2 and 512 L2? Could this in anypart be part of the reason for the delay? Doubtful but still possible.

Doubtful on the SSE2 part. That'd require a core change that I don't think AMD really wants to put the resources into doing because they're so worried about Hammer at the moment.

I think Barton's timing (as well as Hammer's) is due primarily to SOI problems and T-Bred core problems. (Which is why SOI for Barton has wavered back and forth so much.) The T-Bred core problems are fixed ... well ... helped anyway. SOI should help fix the heat problem. So it all comes down to when AMD can finally get SOI working right. There's close to no way that Barton could even work without SOI now, not with that kind of heat in such a small core.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 2, 2002 5:00:09 PM

You're right. I said that the AXP 2800+ won in about 66% of the tests, and it's actually more like 75% as I check again.

However, my point about AXP2800 release not until january and P4 2.8 being current is still, IMO, valid.

...as well as the factor of P4 3.06HT

^^

-Col.Kiwi
October 2, 2002 7:18:29 PM

<quote>The Athlon 2800 itself deserves the performance crown. The benchmarks that the P4 2.8 wins are mostly, if not completely, due to the 533 FSB. Look at the less intensive memory benches and AMD has won it, no contest.
</quote>

Dude, what's your point? If you look at the results closely, the benchmarks the Athlon 2800 mostly wins are the ones with the nForce2 (which is not available yet).

KG

"Artificial intelligence is no match for natural stupidity." - Sarah Chambers
October 2, 2002 7:41:22 PM

Quote:
Dude, what's your point? If you look at the results closely, the benchmarks the Athlon 2800 mostly wins are the ones with the nForce2 (which is not available yet).

Not that the AXP 2800+ is available yet either. ;) 

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 2, 2002 8:12:26 PM

[sarcasm]How can it be the performance king when Intel's 3.6 is coming out in 2003! No wait, the P5 4GHz is coming out some day soon too! That's the real performance king![/sarcasm]

Actually, Intel has another processor release BEFORE the 2800+'s scheduled release. You're gonna see 3GHz P4s in systems the first week of November. Plus these 3GHz chips will be HT enabled. Now with AMD's current track record, I'd say OEMs and vendors get these 2800+'s out the door mid to late February at the earliest.
AMD seems to be fighting this Marketing war w/ paper, and Intel is fighting w/ chips.

[-peep-].
October 2, 2002 10:17:13 PM

You couldn't possibly have said it better. The 3.06HT is going to own the 2800+... not to mention the months-earlier release.

"...AMD seems to be fighting this Marketing war w/ paper, and Intel is fighting w/ chips." Is something I plan on quoting.

-Col.Kiwi
October 3, 2002 1:34:43 AM

Quote:
There's close to no way that Barton could even work without SOI now, not with that kind of heat in such a small core.

You're forgetting the added cache will push Barton to Palomino sizes. Now if I read right, increasing die size reduces W usage? And wouldn't a size of, say, 160mm² run nearly 50% cooler than a current T-Bred at 75W?
If that's so, Barton SHOULD be able to run without SOI. Again, an IHS would severely help spread the cooling everywhere, though Hammer will use one anyway. Yes, AMD has to stop the small-die-size obsession. It was good before, but now it's over the hill.

--
What made you choose your THG Community username/nickname? <A HREF="http://forumz.tomshardware.com/community/modules.php?na..." target="_new">Tell here!</A>
October 3, 2002 6:02:51 AM

Quote:
ACE's also claims the prefetcher needs improvement, I totally agree.


I agree, too. Barton needs a better prefetcher. I wonder why AMD is not thinking about a 400MHz FSB. The Athlon's EV6 bus can do it. nForce2 and KT400 will provide adequate memory bandwidth for a 400MHz FSB.
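For what it's worth, a quick back-of-the-envelope check of the bandwidth claim. The helper function and figures are my own illustration, using the EV6 bus's 64-bit (8-byte) width:

```python
# Back-of-the-envelope bus bandwidth check (illustrative figures).
# The EV6 FSB is 64 bits (8 bytes) wide and double-pumped, so a
# "400MHz FSB" means 400 million transfers per second.
def bandwidth_gb_s(mega_transfers_per_s, bytes_per_transfer=8):
    return mega_transfers_per_s * bytes_per_transfer / 1000

fsb_400 = bandwidth_gb_s(400)        # 3.2 GB/s demanded by the CPU bus
ddr333_single = bandwidth_gb_s(333)  # ~2.7 GB/s from one DDR333 channel
ddr333_dual = 2 * ddr333_single      # ~5.3 GB/s from nForce2's two channels

print(f"FSB needs {fsb_400:.1f} GB/s; single DDR333 gives {ddr333_single:.1f}, "
      f"dual gives {ddr333_dual:.1f}")
```

So a single DDR333 channel falls just short of feeding a 400MHz EV6 bus, while nForce2's dual channels have headroom to spare.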

<b><font color=red> Long live piracy! </font color=red></b>
a b à CPUs
October 3, 2002 11:24:59 AM

Yes, the added cache will increase die size. But it will also increase the amount of heat generated.

My P3 1.26/512 runs about 7 degrees Celsius hotter than my P3 1.20/256 on the same motherboard. Same heatsink, etc.

So the added cache will increase heat production by a significant amount.

I aint signing nothing!!!
October 3, 2002 11:45:31 AM

Yes, but the former (P3) is 0.18µm and the latter (Tualatin) is 0.13µm.
Intel may not have arranged the Tually core well, or perhaps even shrunk it further than the 0.18µm version.

--
What made you choose your THG Community username/nickname? <A HREF="http://forumz.tomshardware.com/community/modules.php?na..." target="_new">Tell here!</A><P ID="edit"><FONT SIZE=-1><EM>Edited by Eden on 10/03/02 07:45 AM.</EM></FONT></P>
October 3, 2002 12:54:27 PM

Quote:
Yes, but the former (P3) is 0.18µm and the latter (Tualatin) is 0.13µm.

Both the 1.26/512 and the 1.20/256 are Tualatin cores.

Ritesh
October 3, 2002 6:18:26 PM

strayin' from the subject. Same ol' IT world. Everybody's gotta tell everybody how inaccurate they are.


I eat Strumpets for breakfast!
October 3, 2002 8:37:11 PM

Quote:
You're forgetting the added cache will push Barton to Palomino sizes.

Err ... no, I'm not. But besides that, it won't be <i>that</i> much of a size change, so it's still not a significant improvement in surface area.

Quote:
Now if I read right, increasing die size reduces W usage?

I don't think you've read right. ;)  Increasing die size by putting in yet more frequently-used transistors just increases W output. The size increase allows the die to transfer more heat to the heat sink, but Barton will also generate more heat. (That is, generate more heat unless something like SOI is used.)

If AMD is lucky, the two will cancel each other out. In reality though, with how frequently cache is accessed, it'll be a lot more heat generated than extra heat transferred to the heat sink.

Quote:
If that's so, the Barton SHOULD be able to run without SOI.

Unfortunately, it's not so. Barton will be generating even more heat, and only be slightly bigger. So the problem will just get worse unless AMD either scales down Barton's speed, or AMD reduces the power needed for Barton (through SOI or <i>anything</i> else).

Quote:
Again an IHS would severly help get the cooling everywhere, though Hammer will use it anyway.

That's still debatable. An IHS first and foremost just protects the core. The size of the core doesn't change, so the surface area the core has to distribute heat across doesn't change. However, now the IHS is an extra layer between the core and the heat sink. That can actually <i>reduce</i> thermal conductivity.

I think the only time an IHS makes a cooling difference is when the IHS is copper (or some other good heat conductor) and the heat sink base itself isn't (such as aluminum). Otherwise, the IHS is just a protective layer that at best doesn't affect the heat transfer significantly.

Basically, an IHS is just a well-bonded miniature shim. It protects the core, it helps with cheap-arsed heat sinks, but it doesn't do squat when you have a good heat sink.
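A toy model of the shim argument, with made-up resistance values (none of these numbers are measured, and it ignores the IHS spreading heat laterally), just to show why an extra layer can't lower die temperature on its own:

```python
# Toy thermal-resistance model of the IHS argument (made-up values).
# The heat path is a series of thermal resistances (K/W); each layer's
# temperature rise adds up: dT = watts * sum(resistances).
watts = 70.0

# Bare core: die -> thermal paste -> heat sink
bare = {"paste": 0.10, "heatsink": 0.35}

# With an IHS: die -> internal TIM -> IHS -> paste -> heat sink
with_ihs = {"internal_tim": 0.05, "ihs": 0.03, "paste": 0.10, "heatsink": 0.35}

rise_bare = watts * sum(bare.values())    # rise over ambient, bare core
rise_ihs = watts * sum(with_ihs.values()) # rise over ambient, with IHS

# The extra layers add series resistance, so with a good heat sink the
# IHS slightly *raises* die temperature -- its job is protection.
print(f"bare: +{rise_bare:.1f}C, with IHS: +{rise_ihs:.1f}C")
```

With a cheap heat sink (big `heatsink` resistance) the IHS layers matter proportionally less, which is the "helps with cheap-arsed heat sinks" point.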

Quote:
Yes AMD has to stop the small die size obsession. It was good before, but now it's over the hill.

I don't know if it was ever really good before. AMD has ended up with really hot CPUs a couple of times now. Only the die improvements have allowed them to back off from the absolute barrier of what temperature the die could take.

But as the die keeps shrinking, even at the same W of heat output as before, you've got less surface area to conduct it over. So ever since the end of the T-Bird, we've just needed better and better heat sinks to accommodate AMD's repeated die size reductions.

Sure, it saved AMD money to produce the chip and improved their production capacities. However, it also made running a rock-stable AMD machine harder and harder as we kept having to get better heat sinks. (Especially to OC in any way, shape, or form.)
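The shrinking-die point is easy to put in numbers. The die areas and wattages below are ballpark guesses for illustration, not official specs:

```python
# Rough power-density comparison: similar heat, shrinking die.
# Figures are ballpark illustrations, not official AMD specs.
dies = {
    "T-Bird (0.18um)":   {"area_mm2": 120, "watts": 72},
    "Palomino (0.18um)": {"area_mm2": 128, "watts": 68},
    "T-Bred B (0.13um)": {"area_mm2": 84,  "watts": 68},
}

for name, d in dies.items():
    density = d["watts"] / d["area_mm2"]  # W per mm^2 of die contact area
    print(f"{name}: {density:.2f} W/mm^2")

# Even at roughly constant wattage, the smaller die concentrates the
# same heat into less contact area, so the heat sink must work harder.
```

So even if total watts barely move, power density climbs with every shrink, which is exactly why each generation has needed a better cooler.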

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font color=red></A></pre><p>
October 6, 2002 7:46:59 PM

I agree; the reason the 2800+ performed the way it did was because of the aggressive memory timings on the nForce2, as well as DDR333. The FSB only increased the performance by like 5%.

Quote:
<i>Written by slvr_pheonix</i>
I still think that Intel should work on furthering the performance and availability of their Ultra Low Voltage Celeron. At 650MHz it only takes 1.1V and outputs 7W of heat. Now, I know that's not much performance these days, but if Intel were to make a die four times as large, I'm sure that they could make one hell of a powerful yet not power-hungry CPU. Huge and expensive, but easy to power and easier to cool. I think it'd go over well enough in rack servers, and who knows what kind of desktop innovations we might see trickle down from that.

Don't worry, Banias will replace the Celeron. Remember, at 1.6GHz it only outputs 7W of heat at full power, yet performs clock-for-clock with an Athlon XP.

Eden, AMD isn't delaying the XPs only because of marketing, but also BECAUSE THEY CAN'T. Dresden right now is reserved for Hammers, so UMC must manufacture the XPs. And AMD IS beating a dead horse. Why? Because it's no longer useful to improve the K7 core: as you can tell from the XP 2800+, improving the core hardly yields any performance any more. The core is simply obsolete now. It can no longer compete with the P4.

Quote:
<i>Written by Eden</i>
Secondly, although I never put my opinion on this yet, I would add that I am indeed impressed. The fact there is STILL much life left in this core is simply amazing. Some people keep claiming AMD is beating a dead horse. But I say, if it can give more juice, SQUEEZE IT. Right now, the sole and most effective core component that is simply keeping AMD from raping the competition is SSE2. On almost all SSE2 tests, including 3ds Max 5, the AXP is losing or not trouncing by much, while it could've RAPED if it had SSE2. Barton with such would reclaim the crown for some time, to allow Hammer to be ready.
ACE's also claims the prefetcher needs improvement, I totally agree. The P4 has roughly TWICE the performance in 64-byte strides, from their charts, in prefetching. If AMD improved it, it might make the FSB increase more worthwhile. In fact, perhaps one of the reasons for the decreased IPC in scaling could be the prefetching latencies becoming too big relative to the clock.

Did anybody notice how the small 33MHz jump from the XP 2600 to the XP 2700 yielded more than 10-15%? In my book this is POWER.
Sadly after that, the 83MHz jumps seem weak compared to the previous 133MHz jumps the XP 2400-2600 had. Why'd they change that, I dunno, but it was better before, much more competitive.

Sorry to say, Eden, but you still carry a bias towards AMD, always hoping that they will rape Intel. Personally, I do not want AMD to collapse, because it provides competition for Intel, which keeps prices low. As much as people love AMD, though, there is still no excuse to be biased and stretch the truth.

I'm sick and tired of hearing people whine about how the only reason the Athlon XP doesn't have the performance crown is SSE2. Well, you don't see me whining about the fact that the FPU in the P4 is not good enough. You don't see me whining that the P4's low IPC is because of its FPU. The P4 has SSE2. The Athlon has a better FPU. You can't change that. It just irritates me when people say "what if the Athlon had this..." or "what if the Athlon had that... then it would rape the P4". Making statements like this simply shows fanboy qualities. Everyone who whines about SSE2 says it's "not fair" that the P4 has it and the Athlon doesn't. Ohh ya? Well, is it <b>fair</b> that the Athlon has a <b>superior FPU</b> compared to the P4? Is it <b>fair</b> that the Athlon has <b>128KB L1 cache</b>, compared to the <b>20KB L1 in the P4</b>? Even if these things are not "fair", deal with it. Both cores reach high performance in different ways. Both cores are dramatically different. They can't be <b>directly</b> compared, like P3 vs Athlon.

That "33mhz" jump from XP2600 to XP2700 <b>did not</b> increase performance by 10-15%. What increased the performance was the nForce 2, DDR 333, the increased FSB, <b>and</b> that 33mhz jump. All together, that's what increased the performance.

Quote:
<i>Written by Eden</i>
Yes AMD has to stop the small die size obsession. It was good before, but now it's over the hill.

<b>Exactly how</b> was it good before? AMD's CPUs <b>always</b> ran hot. AMD's CPUs have a reputation for running hot. Sure, it might have saved them money, as slvr said, but it's causing them a lot of problems. You lose more money than you gain by using a very small die size. Intel was smart enough to know that. And AMD no longer has any advantage, because right now it costs Intel <b>less</b> to make a CPU than AMD. That's because Intel is using 300mm wafers.




- - -
<font color=green>All good things must come to an end … so they can be replaced by better things! :wink: </font color=green>
October 6, 2002 11:34:59 PM

Quote:
improving the core hardly yields any performance any more.


If the P4 at 7GHz had some ramping problems, yet Intel kept shrinking the core or perhaps adding IPC, while AMD had a new K10 core with powerful ramping, would you tell me Intel is beating a dead horse? No, simply that they need to arrange things for the future. NO core scales just like that. Intel is adding 0.09µm technologies and is ensuring SOI, to continue further on. Who said it was designed for 10GHz directly?
AMD is NOT beating a dead horse if they know what to expect. Yes, they are shooting themselves for not properly handling the current K7s, but put in a skilled Intel fab technician and he'll show 'em how to squeeze out more.
The only point where a core has to change completely or be gone is when the physics of its pipeline and electron flow are used up. So a 10-stage pipeline will at some point take no more improvements. It's like a car plant with only 5 stages that forces the machines or workers to work at 20-second intervals per item: that'd simply NEVER be possible. Is AMD there? Hell no, well, at least under normal logical physics, no. Shrinks still work; it all depends on their way of handling it. At 4-5GHz, I'd say it's as good as dead.

Quote:
Sorry to say, Eden, but you still carry a bias towards AMD always hoping that they will rape Intel.

I was stating truth, as you'd say. AMD made an impressive jump in performance, and that jump deserved some pointers. The NW's added cache was not all that impressive, so if people pass over that but hype this performance increase, are they AMD biased?!

Quote:
I'm sick and tired of hearing people whine about how the only reason the Athlon XP doesn't have the performance crown is because SSE2. Well, you don't see me whining about the fact that The FPU in the P4 is not good enough. You don't see me whining that The P4's low IPC is because of it's FPU. The P4 has SSE2. The Athlon has a better FPU. You can't change that. It just irritates me when people say "what if Athlon had this..." or "what if the Athlon had that...then it would rape the P4". Making statments like this simply shows fanboy qualities. Everyone who whines about SSE2 says the "it's not fair" that the P4 has it, and the Athlon doesn't.

Dude, I bet you never saw the discussions from early this year and last year where people kept hoping the P4 FPU would be better. We'd often say "if the P4 had a better FPU".
Please find me some way for the Athlon to win SSE2-based apps then.
I whined for months about how the P4 did not have a better FPU or IPC; a lot of people did.
Again, I fail to see how stating the truth about what could be improved is fanboyism.

Quote:
Ohh ya? Well, is it fair that the Athlon has a superior FPU compared to the P4?

It is, if Intel finds a way to counter it, which they did, though it didn't always work.
Quote:
Is it fair that the Athlon has 128KB L1 cache, compared to the 20KB L1 in the P4?

Because Intel's L1 cache is based on two types, the most important being another way to use caching: the Trace Cache. If it helps P4s, then 128K of L1 is not needed. And it's not the Trace Cache that is going to be optimized for mostly, if SSE2 is there for that task.

Quote:
Even if these things are not "fair", deal with it.

The most telling aspect of that statement is the speed vs. IPC war: how the P4's IPC is weak but scales so much, how the Athlon's IPC is high but barely scales. I've dealt with it, because there are ways to bypass these limitations.
Quote:
Both cores reach high performance using different ways.

And who exactly stops one company from trying to use the other's ways to improve their own, which is already competing? SSE2 is an OPEN TO ALL standard, contrary to the Trace Cache, which probably would need harder licensing, so stating that Athlons need SSE2 in SSE2-BASED APPS makes sense to Einstein and to us. If you're sick of what is rightfully wished for, that's your own problem. If programmers suddenly let go of SSE2 and programmed for three raw FPUs, people would whine that the P4 needs better/more FPUs to rape Athlons, and then your irritation would be quite moot.
Quote:
They can't be directly compared, like P3 vs Athlon.

Once again, SSE2 is an OPEN STANDARD for CPU companies to adopt, to make sure their CPUs can use what most multimedia programs support and take advantage of! So this is not about comparing cores, man; you're on another page here.

Quote:
That "33mhz" jump from XP2600 to XP2700 did not increase performance by 10-15%. What increased the performance was the nForce 2, DDR 333, the increased FSB, and that 33mhz jump. All together, that's what increased the performance.

I'm not blind, I knew that. I simply indicated that the small 33MHz jump plus all this was well worth it, and it sounded almost better than the added cache on NWs. You don't see such increases just any time, so this was rather impressive for Athlon systems, while P4s have undergone the new FSB and cache. Quite a big improvement there.

Quote:
Exactly how was it good before?

It did help them a lot in yields, and since cores were not 80mm² before, it is safe to say Athlons could still be cooled well, compared to now. Plus, on a similar core size to the P3, Athlons always generated more heat, and forgive my seemingly low intelligence, Dark: if P3s had less IPC overall, gee, that musta done nothing to the temps...


In the end you still believe my words are pro-AMD while I was commenting on a SITUATION. There's a difference between commenting on a situation and using those words in regular everyday conversations. If I jumped at some user and kept touting the new XP 2800+/nForce2 combo as the all-mighty, and then told him he won't get the best performance in SSE2 apps because of the lack of it, THEN feel free to slap me back to reality and call me a fanboy... and sue me!

--
What made you choose your THG Community username/nickname? <A HREF="http://forumz.tomshardware.com/community/modules.php?na..." target="_new">Tell here!</A>
October 7, 2002 6:23:21 PM

Quote:
I'm sick and tired of hearing people whine about how the only reason the Athlon XP doesn't have the performance crown is SSE2. Well, you don't see me whining about the fact that the FPU in the P4 is not good enough. You don't see me whining that the P4's low IPC is because of its FPU. The P4 has SSE2. The Athlon has a better FPU. You can't change that. It just irritates me when people say "what if the Athlon had this..." or "what if the Athlon had that... then it would rape the P4". Making statements like this simply shows fanboy qualities. Everyone who whines about SSE2 says it's "not fair" that the P4 has it and the Athlon doesn't.


It is FAIR, because:

1. The Athlon can have SSE2.
2. The P4 can't have more IPC or a more powerful FPU without losing scalability. Intel surely doesn't want that.

Quote:
They can't be directly compared, like P3 vs Athlon


Were the P3 and the Athlon directly comparable? The K7 was not a P6 clone. The K7 core was newer than the P6 core and a lot more scalable. If you want to compare the P3 and the Athlon directly, then you must compare the Athlon and the P4 directly.

<b><font color=red> Long live piracy! </font color=red></b>