
"Core 2 Duo -- The Embarrassing Secrets"?

April 25, 2007 8:36:08 PM

Well, Scientia might be around here lurking, but the article he posted on his blog is kind of interesting, to say the least. I think it's a good read and a good way to exercise our powers of deduction and reasoning. With that said, please reply thoughtfully to the thread and leave the bashing out. :) 

Quote:
Although Core 2 Duo has been impressive since its introduction last year, a veil of secrecy has remained in place which has prevented a true understanding of the chip's capabilities. This has been reminiscent of The Wizard Of Oz with analysts and enthusiasts insisting we ignore what's behind the curtain. However, we can now see that some of C2D's prowess is just as imaginary as the giant flaming wizard.

The two things that Intel would rather you not know about Core 2 Duo are that it has been tweaked for benchmarks rather than for real code, and that at 2.93 Ghz it is exceeding its thermal limits on the 65nm process. I'm sure both of these things will come as a surprise to many but the evidence is at Tom's Hardware Guide, Xbitlabs, and Anandtech. But, although the information is very clear, no one has previously called any attention to it. Core 2 Duo roughly doubles the SSE performance of K8, Core Duo, and P4D. This is no minor accomplishment and Intel deserves every bit of credit for this. For SSE intensive applications, C2D is a grand slam home run. However, the great majority of consumer applications are more dependent on integer performance than floating point performance and this is where the smoke and mirrors have been in full force. There is no doubt that Core 2 Duo is faster than K8 at the same clock. The problem has been in finding out how much faster. Estimates have ranged from 5% to 40% faster. Unfortunately, most of the hardware review sites have shown no desire to narrow this range.

read the rest:
http://scientiasblog.blogspot.com/
© Scientia from AMDZone, April 15 2007

Comments?

Edited to save my ass from a copyright violation charge.
April 25, 2007 8:40:50 PM

My eyes hurt. Har har fanboy man. Core 2 is of no significance. Intel is evil. Everyone is against AMD. AMD is not hurting. :lol: 
April 25, 2007 8:47:29 PM

Definitely sounds like there is an attempted slight against Intel. I think he needs to provide references and links to suitable data sources to back up his claim.

Even testing here on THG (which is not always reliable) with real-world applications, the C2D almost always comes out on top. While in some cases it's not a catastrophic increase, it is still an increase.

Real world benchmarks are the best test, and they still show that the C2D is a performer.
April 25, 2007 8:56:02 PM

I have read a preliminary benchmark for Conroe-L.
From that result, I could claim that it should outperform similarly clocked K8. :wink:
April 25, 2007 8:56:07 PM

That dude is all over the place. First he talks about benchmarks, then rattles on about temperatures, and then finally about 'booms'. Is he actually trying to make a point? :roll:
April 25, 2007 8:59:28 PM

You want a comment on the vile piece of trash?

And I'm saying this from a centrist viewpoint. First of all, how can a processor be made to give benchmarked applications a boost? If the processor speeds an application up when it's run under a test, it will undoubtedly speed that application up when performing its normal, routine duties.

Synthetic benchmarks aside, Core 2 offers a compelling performance boost. It's noticeable even when just running Windows (especially VISTA) over AMD's Athlon64 X2 processor.

The entirety of this article, with its assumption that Intel's Core 2 is made to just look good under benchmarks, seems like a conspiracy theory on a grand scale. If an apple looks like an apple, tastes like an apple and its DNA is that of an apple, then by all accounts it is an apple. This nut is trying to tell us it's an orange.

A few facts that most readers understand:

1. Intel's process technology is superior to its competitors'.
2. Intel's caching technology is also superior and plays a large role in the performance of C2D, particularly when running in dual-core mode with the cache shared.
3. Intel's SSE performance is second to none.
4. Intel's integer performance per clock is faster than that of its current competitor, AMD.

Now, how to explain? Simple. Intel's architectures have one flaw: they're based on an older communications bus known as the front side bus. In particular, one can see the deficiency of this older technology in the performance of Intel's own Celeron-style Core 2 processor, the Conroe-L. Both the reduced cache and the lower front side bus speed keep this processor from performing.

Let me explain for the Sharikou-style folks.

No doubt having only 512KB of L2 cache does impact performance. But another great limiter is the 800MHz FSB. To explain this takes great patience, but I will try. A Core 2 Duo has a 1066MHz front side bus shared between both cores. One would assume this means each core gets half of it. This is false. The Core 2 Duo's cores make use of the shared cache (a rather large pool at 4MB) to send and receive data between both cores. Because the cache acts as the communications bus between the cores, the entire C2D has roughly the full 1066MHz (8.5GB/s) to play with, since both cores do not need access to the FSB at the same time. You see, the Core 2 Duo is a TRUE dual-core processor: whenever the C2D needs to communicate with the rest of the system, it has a full 1066MHz bus to do so, as the core-to-core traffic is shared via the cache.

With Conroe-L you have a problem. For one, the FSB is decreased to 800MHz (6.4GB/s), and secondly its cache is stripped down to 512KB (1/8th that of Core 2 Duo). Taking into account that the cache is 1/8th and the performance drop is around 40%, one can extrapolate from the Athlon64 (K8) architecture, which suffers a 10% decline when its cache is halved, that a chip would suffer roughly a 30% performance hit if its cache were cut to 1/8th.

The remaining performance hit? Simple: K8 has HyperTransport, Conroe-L has an 800MHz front side bus. HyperTransport is slower than the shared-cache mechanism that C2D gets to use, but faster than the simple FSB clocked at 800MHz that Conroe-L is stuck with.

I would assume a 30% reduction in Conroe-L's performance is due to the cache size, with the remaining 5-10% due to the slower FSB.
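To make the arithmetic in the post above concrete, here is a minimal sketch; it assumes the 64-bit (8-byte-wide) front side bus of that era, which is where the 8.5GB/s and 6.4GB/s figures come from, and it treats the "10% per cache halving" figure purely as the poster's assumption, not a measured constant:

```python
# Minimal sketch of the FSB bandwidth and cache-scaling arithmetic above.
# The 10%-per-halving figure is the poster's assumption, not measured data.

def fsb_bandwidth_gb_s(bus_mhz: float, bus_width_bytes: int = 8) -> float:
    """Peak bandwidth of a 64-bit (8-byte-wide) front side bus, in GB/s."""
    return bus_mhz * bus_width_bytes / 1000.0

def cache_scaling_hit(halvings: int, hit_per_halving: float = 0.10) -> float:
    """Rough cumulative performance loss if every cache halving costs ~10%."""
    return 1.0 - (1.0 - hit_per_halving) ** halvings

print(fsb_bandwidth_gb_s(1066))  # ~8.5 GB/s for the Core 2 Duo's 1066MHz bus
print(fsb_bandwidth_gb_s(800))   # 6.4 GB/s for Conroe-L's 800MHz bus
# 4MB down to 512KB is three halvings; with the assumed 10% hit per halving:
print(cache_scaling_hit(3))      # ~0.27, i.e. roughly the 30% hit argued above
```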

The problem is that people are viewing C2D as two cores on one package, when in fact it's a single processor with two cores: a dual-core processor. Because the cache is shared, you can fully compare it to a single-core processor on a 1:1 basis and not 2:1.

The Athlon64 X2 cannot. It is not a dual-core processor in that sense; it's a dual-processor system on a single package in which the two cores communicate with one another using a HyperTransport link. No cache is shared between them, so they're in fact two separate processors with an HT link between them.

April 25, 2007 9:00:44 PM

This is one of the dumbest articles I have ever wasted my time reading. Here are the earth-shattering revelations:

1. If C2D didn't have a big cache it would be slower.

mmm, k.

2. If CPUs were benched in a way more favorable to AMD, AMD would do better.

mmm, k.

3. In "real world" conditions AMD is better.

hmmm, encoding time and fps while playing games are not real world??? What does this idiot do on his home computer? I would say these are the only two tests that mean anything to me in terms of home computing, and I don't know how you could run them more "real world" than just running them.

4. Intel's quad at 2.93 exceeds what he feels is acceptable for thermal limits.

ok, nice cherry on top of your "I'm a f'ing moron AMD fanboy idiot" sundae.
April 25, 2007 9:04:53 PM

Quote:
The problem has been in finding out how much faster. Estimates have ranged from 5% to 40% faster. Unfortunately, most of the hardware review sites have shown no desire to narrow this range.

I haven't seen this; it's pretty clear from the huge number of reviews that a 2x1MB K8 needs roughly a 25% clock advantage to match a 4MB C2D over a wide range of applications. Hence the 6000+ barely matching the E6600.
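For what it's worth, that 25% figure lines up with the stock clocks of the two parts mentioned (3.0GHz for the X2 6000+ and 2.4GHz for the E6600); a quick sketch:

```python
# Quick check of the ~25% clock-advantage claim above.
x2_6000_ghz = 3.0   # Athlon 64 X2 6000+ stock clock
e6600_ghz = 2.4     # Core 2 Duo E6600 stock clock
print((x2_6000_ghz / e6600_ghz - 1) * 100)  # 25.0 -> a 25% clock advantage
```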

Quote:
However, the comparison between the 2.0Ghz Celeron 440 and the 1.8Ghz E4300 is not so good. With a 10% greater clock speed, the lower cache C2D is actually 36% slower.

It's an invalid comparison considering the E4300 is a dual-core processor and many of the Xbitlabs tested applications benefit from the second core. A more reasonable review here shows the 1MB Pentium E2160 barely slower than the 2MB E4300 and considerably faster than a 3600+, which is clocked faster and has the same amount of total L2 cache.

http://xtreview.com/addcomment-id-2106-view-Pentium-e21...

Quote:
According to the Guide, "Thermal Case Temperatures of 60c is hot, 55c is warm, and 50c is safe. Tcase Load should not exceed ~ 55c with TAT @ 100% Load." So, 55c is the max and since we are allowing 7c because of less than 100% thermal loading, the maximum allowable temperature would be 48c. The second chart, Loaded CPU Temperature lists the resulting temperatures. We note that the temperature of the X6800 at 2.93Ghz with the stock HSF (heatsink and fan) is shockingly 56c or 8c over maximum. We can see that even a Thermalright MST-6775 is inadequate. From these temperatures we can say that X6800 is not truly an X/EE/FX class chip. This is really a Special Edition chip since it requires something better than stock cooling just to run at its rated clock speed. This finally explains why Intel has not released anything faster. If the thermal limits can be exceeded with stock HSF at stock speeds then anything faster would be even riskier. Clearly, Intel is not willing to take that risk and repeat the 1.13Ghz PIII fiasco. This explains why Intel is waiting until it has a suitable 45nm Penryn to increase clocks again. Presumably with reduced power draw, Penryn could stay inside the factory thermal limits.

He's clearly mixing up temperatures here, comparing Tjunction readings against the Tcase limits. 55C measured means there is upwards of 25C of headroom before throttling. This is supported by Intel being able to release quad-core models without problems, the low measured power usage of the C2Ds, and the ease with which even overclocked C2Ds can be run passively with good heatsinks.
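A minimal sketch of that headroom argument, assuming the review's 55C reading really is a core (Tjunction) temperature and taking roughly 80C as the junction temperature where throttling starts; the 80C figure is an assumption for illustration, not a number from the post:

```python
# Sketch of the headroom argument above; the 80C throttle point is assumed.
measured_tjunction_c = 55.0   # reading reported in the reviews being discussed
assumed_throttle_c = 80.0     # assumed junction temperature where throttling begins
print(assumed_throttle_c - measured_tjunction_c)  # 25.0C of headroom, as claimed
```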
April 25, 2007 9:08:38 PM

It was interesting; nice find.

One response comment I found interesting is by "abinstein":

"It is true that we see faster media encoding by C2D than K8X2. It is true that compressions run faster on C2D than on K8X2. It is true that games and 3D graphics apps run faster on C2D than K8X2.

"At the most, we can only say that C2D is tweaked for certain types of apps such as media processing/compressions and AI (path finding). OTOH, C2D runs slower than K8X2 for cryptography and many mathematical/scientific codes, and about the same for business applications. The point is, all these above are real codes, not benchmarks."
April 25, 2007 9:14:58 PM

Quote:

"It is true that we see faster media encoding by C2D than K8X2. It is true that compressions run faster on C2D than on K8X2. It is true that games and 3D graphics apps run faster on C2D than K8X2.

"At the most, we can only say that C2D is tweaked for certain types of apps such as media processing/compressions and AI (path finding). OTOH, C2D runs slower than K8X2 for cryptography and many mathematical/scientific codes, and about the same for business applications. The point is, all these above are real codes, not benchmarks."


Please, we can say that the C2D is faster in the overwhelming majority of applications, with the K8 only being stronger in a few niche or old applications.
April 25, 2007 9:15:01 PM

Quote:
It's an invalid comparison considering the E4300 is a dual-core processor and many of the Xbitlabs tested applications benefit from the second core.


+1

Quote:
"It is true that we see faster media encoding by C2D than K8X2. It is true that compressions run faster on C2D than on K8X2. It is true that games and 3D graphics apps run faster on C2D than K8X2.


So it is true that for the desktop, Core 2 Duo is better. :?:

Quote:
"OTOH, C2D runs slower than K8X2 for cryptography and many mathematical/scientific codes, and about the same for business applications. The point is, all these above are real codes, not benchmarks."


Oh yes, because we all run cryptography??, Sciencemark??, and "business" apps.

Conclusion, abinstein thinks real apps are "benchmarks" and "business" apps are "real code".
April 25, 2007 9:16:50 PM

I tried to find out exactly what temperature is being measured here, as that is an important consideration for how close to the thermal limit the CPU actually is: T-case or T-junction. The original article that Scientia refers to states that they are using the NVIDIA Monitor temperature measurement utility to obtain the readings - I don't know which of the two temps this is reading. All the temp monitors I use seem to tally with TAT, leading me to believe that they're measuring T-junction. From the article...

"We note that the temperature of the X6800 at 2.93Ghz with the stock HSF (heatsink and fan) is shockingly 56c or 8c over maximum."

If that is a measure of T-core (junction), then it's well within spec, and T-case is actually around 15C less...

...i think :?:

Oh, and another thing... isn't saying that a C2D is only faster because of cache (even if it's true) the same as saying your car is only faster cuz it has a turbo... so what, it's still faster, no?
April 25, 2007 9:24:06 PM

ok, guys, first of all i like how you bash the article even though it clearly states that core2duo is a better performer than x2. I think what the article is getting at is how everybody blows c2d wayyyy out of proportion when they look at certain benchmarks. Yes, c2d is a good performer and performs significantly better than x2, but all the article does is suggest that c2d might be blown out of proportion BECAUSE c2d is altered to do well in certain benchmarks.

the whole temperature thing was stupid though
April 25, 2007 9:31:23 PM

Quote:
ok, guys, first of all i like how you bash the article even though it clearly states that core2duo is a better performer than x2.

People are bashing the article because of its mistakes:

1) A misleading comparison of dual-core vs single-core processors to support their false belief that the cache is an artificial performance boost

2) Claiming the C2D is overheating and that this is preventing Intel from releasing faster versions.
April 25, 2007 9:31:54 PM

Certain benchmarks? Such as?... What exactly are the ones that AMD does well in? Synthetic Cinebench?
April 25, 2007 9:33:06 PM

Quote:
ok, guys, first of all i like how you bash the article even though it clearly states that core2duo is a better performer than x2. I think what the article is getting at is how everybody blows c2d wayyyy out of proportion when they look at certain benchmarks. Yes, c2d is a good performer and performs significantly better than x2, but all the article does is suggest that c2d might be blown out of proportion BECAUSE c2d is altered to do well in certain benchmarks.

the whole temperature thing was stupid though


C2D is not altered to perform well in benchmarks. If anything, the Pentium 4 was altered that way. Benchmarks (most notably SiSoft Sandra) would place the Pentium 4 HT processor well ahead of the Athlon64 in its tests. As we know, this was false, as actual application benchmarks showed the Athlon64 to be the full-on winner.

Would you then claim that the Athlon64 was built to just shine under benchmarks?

It's a stupid claim. It doesn't make sense, and I've pretty much disproved it, as the benchmarks C2D excels in are real applications. It is the Athlon64 (K8) that excels in synthetic tests like Sciencemark. The only place it comes close to C2D is in synthetics.
April 25, 2007 9:41:11 PM

interesting conspiracy, nice catchy subject.
But based on the article's content, I think the subject line needs to be changed to something else; I'm sure there is nothing embarrassing about C2D's performance.
April 25, 2007 9:42:23 PM

He's from AMDzone... it pretty much is self-explanatory.. LMAO!
April 25, 2007 9:43:01 PM

Hence the quotes and the question mark. Meaning I'm in disagreement with the article while mentioning the name.
April 25, 2007 9:44:43 PM

Indeed. Embarrassing is that article.
April 25, 2007 9:47:16 PM

Quote:
My eyes hurt. Har har fanboy man. Core 2 is of no significance. Intel is evil. Everyone is against AMD. AMD is not hurting. :lol: 

hey, at least his points are semi-valid.
unlike some fanboys who are just interested in posting "appreciation threads".... :roll:
April 25, 2007 9:49:25 PM

hey nice rebuttal. it'll really be interesting to see what scientia has to say.. :D 
April 25, 2007 9:52:54 PM

Everyone knows C2duo is better, in most ways, except price. It's a little surprising anyone needs to line up to say so. While it's old news that the X2 is better at scientific stuff and good for general use, it seems not everyone knew it! It's odd folks feel they need to defend C2duos, as if C2duos were in need of defense.
April 25, 2007 9:54:00 PM

Silly appreciation threads. Oh, and "semi-valid" points? Such as how AMD is going to crush Intel with DTX? lol
April 25, 2007 9:57:22 PM

hal, I agree that AMD is "good enough" for most. Can you explain this "scientific" BS that AMD fans tout? What's that, 0.1% of the market?
April 25, 2007 9:57:59 PM

This is so pointless.
1. How can INTEL optimize the CPU for better benchmark results??????
2. Why are the E6300 and up better than ANY AMD CPU in price/performance?
3. Intel's architecture is much better than AMD's (look at the Penryn vs. Barcelona benchmarks)
4. Like others have said, this guy is a fanboy
April 25, 2007 10:00:42 PM

What is the point here?
Now, if I were making chips and someone had a test they ran through to determine the price of my chip, I would damn well tune that baby to run the test as fast as possible. Seems that is exactly what Intel is doing with Core2Duo, and exactly what we as the PC community have asked for: "benchies."
April 25, 2007 10:01:37 PM

Quote:
Oh, and "semi-valid" points? Such as how AMD is going to crush Intel with DTX? lol

nah.. what i meant by "semi-valid" points is Scientia's question about Core 2's reliance on cache.

i'm sure there are some people who don't have this figured out (well, at least i didn't when i read his post)

plus i wouldn't really classify him as a "fanboy", as he at least raised some interesting points.

fanboys are those who only post "appreciation threads", or claim that Intel will BK by 2008 :twisted:
April 25, 2007 10:02:34 PM

Quote:
hal, I agree that AMD is "good enough" for most. Can you explain this "scientific" BS that AMD fans tout? What's that, 0.1% of the market?


That is also a lie. Clock for clock the Core 2 Duo beats out the Athlon64 X2 in Sciencemark, but it's a VERY close fight.

It's also not that people feel they have to defend the Core 2 Duo. It's that people feel they have to defend the facts and stick by them. In this world of media and political manipulation, it's good to have one area where facts reign supreme (or at least should).

If the tables were turned my post would have been about the Athlon64 X2.
April 25, 2007 10:04:20 PM

Quote:
What is the point here?
Now, if I were making chips and someone had a test they ran through to determine the price of my chip, I would damn well tune that baby to run the test as fast as possible. Seems that is exactly what Intel is doing with Core2Duo, and exactly what we as the PC community have asked for: "benchies."

yes, if i were you i would do the same thing.
but by doing this you may spoof only one or two benchmarks, not all of them (Pentium 4 is a very good example).
April 25, 2007 10:05:26 PM

Read his blog. If it smells like a turd, it's a turd.
April 25, 2007 10:11:43 PM

Quote:
Read his blog. If it smells like a turd, it's a turd.

i would say at least he backed it up with real benchmarks to prove his point, whether or not it's a correct one. at least he tried.

EDIT: i'm not saying his points are correct. i'm merely saying that he followed the standard procedure of debating. i believe he at least deserves some respect.

what i mean by "fanboy" is a person who utilizes ignorance to prove his point, and uses name-calling and profanity when his point is rebutted.
April 25, 2007 10:28:58 PM

Somewhere someone is reading that right now and believing it; now that is scary.

I am of the opinion, though, that the X6800 and quads should be shipped with a better HSF or none at all.
April 25, 2007 10:29:39 PM

Instead of "stupid" or "fanboy", he says "silly" and "you make no sense" and "I give up you person that I can't make agree shut up" with and "I'm smart I'm a programmer" and "I'm old and wise".

Core 2 Duo -- The Embarrassing Secrets
Intel -- The Monopoly Under Siege
Intel's Chipsets -- The Roots Of Monopoly
2007: Where Are The Clock Speeds?

etc etc

It looks like he puts forth effort in those novels he publishes. That doesn't make him any less of a shill.
April 25, 2007 10:38:47 PM

Booting depends on RAM. Games depend on the GPU. Of course the CPU isn't going to have much effect on either.
April 25, 2007 11:05:53 PM

Quote:
I dont care what his blog says but its very believable to me.
My C2D at 3ghz performs no better in games (games We play that utilize only 1 core) than my Sempron3600+ single core did. It performs no better in everyday applications either. It doesnt even boot faster. However I dont doubt that when running applications that utilize two cores, it is faster. Its no 7th wonder of the world though.


Can you post a picture of your rig, please?

Because that's bullsh!t!

You do notice Windows booting faster. Hell, even at stock I noticed my old X6800 booting faster than my overclocked X2 4800+ @ 3.0GHz.

Even levels in games load in no time. There are no AMD users who enter games before me (BF2 or BF2142).

And your graphics card would not allow you to see a difference in gaming performance. Especially if you're running anything above 1280x1024.
April 25, 2007 11:12:57 PM

AMD fanboys will do anything these days against C2D
April 25, 2007 11:15:20 PM

Quote:
I dont care what his blog says but its very believable to me.
My C2D at 3ghz performs no better in games (games We play that utilize only 1 core) than my Sempron3600+ single core did. It performs no better in everyday applications either. It doesnt even boot faster. However I dont doubt that when running applications that utilize two cores, it is faster. Its no 7th wonder of the world though.


Your 7600GT has 12 pixel pipelines and a measly 128bit memory bus. You are GPU bound no matter how much you overclock. You might also want to play something like Oblivion or Supreme Commander. Then you will find out how much better your processor is.
April 25, 2007 11:56:51 PM

Real world... I have an E6600 that OCs to 3.2GHz stable, but I run it 24/7 at 3.0GHz (9x334), which is nothing special, but equivalent perf to the E6800.
What is cool, though, is that I am doing it on air with SpeedStep, C1E and Vanderpool tech all enabled; plus the CPU is running at 1.38v, measuring idle temps at 27C and max stress temps at 40C!
DDR 2.15v.
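As a quick check of the overclock arithmetic in that post (core clock = multiplier x bus clock):

```python
# Quick check of the overclock arithmetic above.
multiplier = 9
bus_mhz = 334
print(multiplier * bus_mhz)  # 3006 MHz, i.e. roughly the 3.0GHz quoted
```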
April 26, 2007 12:06:45 AM

One thing I know after reading this blog: Scientia has problems condensing his/her points. For freaking sake, it doesn't take a 3000-word essay to get across a few points, most of which are bogus anyway.

I guess Scientia is trying to make up for lack of quality with quantity. It's like if rabid fanboy assertions are repeated enough, they must be true. :roll: :lol: 
April 26, 2007 12:07:56 AM

If Core 2 is exceeding its thermal limits, then what the hell do you call a 3.0GHz Athlon X2?

Core 2 overclocks WAY BETTER than Athlon X2 at either 90 OR 65nm. So I really think AMD has more of an issue.
April 26, 2007 12:13:29 AM

Apparently, according to Scientia, SOI has better thermal tolerances than bulk silicon. I don't know if this is true, or to what extent, so I'll let someone knowledgeable like Jack comment on this.

Either way, the X2 6000+ consumes more power than a QX6800, I think that is more embarrassing than anything else.
April 26, 2007 12:21:26 AM

Quote:
Apparently, according to Scientia, SOI has better thermal tolerances than bulk silicon. I don't know if this is true, or to what extent, so I'll let someone knowledgeable like Jack comment on this.

Either way, the X2 6000+ consumes more power than a QX6800, I think that is more embarrassing than anything else.


I just don't get what he's getting at. He points out negatives about it, but then why does the thing overclock like a mother on stock air?

Also, if it's just a benchmark hack then why is it doing better in REAL WORLD benchmarks? Those are the ones I care about. Is it hacking all real world benchmarks?
April 26, 2007 12:30:51 AM

duh, we all knew Intel was just using smoke and mirrors to get suckers to fall for crappy C2Ds... The holographic sticker hypnotizes users so when using the PC, time slows and it only appears that the PC is faster... Real AMD fans know, and are LOL secure in the fact that AMD ALWAYS TRUMPS INTEL!!!
April 26, 2007 12:33:23 AM

Quote:
Well, Scientia might be around here lurking, but the article he posted on his blog is kind of interesting to say the least. I think its a good read and a good way to exercise our powers of deduction and reasoning. with that said, please reply thoughtfully to the thread and leave the bashing out. :) 


Ninja

I had to read that article several times since you requested no bashing. I kept coming to the same conclusion, so, sorry, but I find it difficult not to bash. While I respect Scientia as the least narrow-minded, most logical and practical of the AMD fanboys/horde, the article is, in a word, bizarre. Another word would be skewed. Yet another would be delusional.

First, I will refer to Scientia as "it", since there has been an unverified rumor floating around that it is a female, and I wouldn't want to insult it.

It starts the article claiming Intel has been misleading, that C2D is optimized to run benchmarks, not code. Aside from the fact that nowhere in the article did it present actual evidence to support that claim, I can't help but wonder if Scientia realizes that benchmarks are in fact themselves code, albeit worthless code since they don't accomplish work. Rather, what it does is 'lightly' slam benchmarks for not testing in the manner it sees fit, specifically overloading cache. Then it gives a comparison of C2D cache to K8 cache, and explains why the K8's method of cache implementation is better.

Frankly, it just looks like another version of the pre-C2D-release 'cache thrashing' argument that the likes of MrsBytch (then known as MadModMike), 9-inch and so many other horde acolytes ran around crying about. Cache thrashing, as you know, was disproven only moments after C2D's release, if not before.

To further support its claims, it actually uses factual data (benchmarks of the C400) as if to prove that the cache is where C2D gets its performance. Well, duh! That Intel vastly improved the prefetchers, RAM and cache handling (memory handling) is not now, nor has it been for some time, a secret, except perhaps to those who were only interested in reading AMD articles. So obviously a reduction in cache would incur a performance hit.

However, what Scientia does not note, with regard to cache and cache thrashing, is the peak. We've all seen the tests showing that the jump from 2MB to 4MB of cache helps C2D, but not nearly so much as one would expect. The extra cache exceeds the point of diminishing returns, which is interesting, since it essentially limits the possibility of the cache-thrashing argument to a point below the low-end C2D's 2MB of cache. The 2MB reservoir is obviously enough, since significantly more brings minimal gains, with no unusual difficulties noted. Now, if Scientia wanted to argue that the C400 lacks sufficient cache, or suffers from performance problems due to a lack of cache, that would have been a legitimate argument, but to reverse that argument to imply that the shortage of that commodity, which is so vital to C2D's performance, implies a problem with the uarch itself is simply devious or ignorant.
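To illustrate the diminishing-returns point, here is a toy sketch; the scores below are hypothetical placeholders chosen only to mimic the behaviour being described, not benchmark results:

```python
# Hypothetical illustration of diminishing returns from extra cache.
# The scores are made up for the sake of the example, not measured data.
hypothetical_scores = {
    "512KB": 100,
    "1MB": 112,
    "2MB": 120,
    "4MB": 123,   # the 2MB -> 4MB jump adds little, per the argument above
}

sizes = list(hypothetical_scores)
for prev, cur in zip(sizes, sizes[1:]):
    gain = hypothetical_scores[cur] / hypothetical_scores[prev] - 1
    print(f"{prev} -> {cur}: +{gain:.1%}")
```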

Essentially, his entire argument in that paragraph is akin to someone saying, 'don't buy a GM V8, because their economy 4-cylinders have been shown to lack power' :roll:


IRT the 2.93GHz limit at 65nm, it again provides no proof, or tainted proof. Scientia starts by referring to the Thermal Guide in our own forum, which it fails to correctly identify. It then mixes this with data from Anandtech. This is interesting because, while the guide defines 55C as the Tcase max, Intel itself defines Tcase max (the thermal specification at maximum Thermal Design Power (TDP)) as 60.4C.
EE6800 specs

He bases his conclusion on the data for the stock cooler, which, while not even remotely a poor way to judge the results, clearly puts the burden of the test on the cooler itself, not the process. To say 2.93GHz is the max clock speed the process can safely attain, because the stock HSF can only maintain Tcase max under 100% load at that clock speed, is purely and simply asinine. It is a test of the HSF, not the uarch. Technically, based on his presentation, the only real way to test that would be to run the CPU without any additional form of cooling, including the heat spreader. In which case, not only would the C2D fail to meet its Tcase max at load, but any other recent processor (post-2000) would as well, whether Intel- or AMD-produced!

Further, Scientia claims that anything faster than 2.93GHz is dangerous because of the heat, yet it fails to note that (in the case of the 6800) the Tcase max of 60.4C is the point where the thermal monitor activates, protecting the CPU from thermal overload.
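A minimal sketch of that last point, using the 60.4C Tcase figure quoted above; the throttling behaviour shown is a simplification of what the thermal monitor actually does:

```python
# Simplified sketch: the thermal monitor steps in at Tcase max rather than
# letting the chip cook, which is the point being made above.
TCASE_MAX_C = 60.4  # spec figure quoted in the post for the Extreme Edition part

def thermal_monitor(measured_tcase_c: float) -> str:
    if measured_tcase_c >= TCASE_MAX_C:
        return "thermal monitor active: duty cycle reduced to protect the CPU"
    return "within spec: no throttling"

print(thermal_monitor(56.0))  # the 56C reading cited earlier -> within spec
print(thermal_monitor(61.0))  # only above 60.4C would protection kick in
```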

I am very disappointed in Scientia... I respected it as a logical person, but this review is blatantly tainted with 'Intel lies, AMD rulz'.
April 26, 2007 12:42:57 AM

I have never really posted here before because my knowledge on subjects discussed here is far outclassed by most of the other regular posters, and I have been content to just absorb what I can from those more knowledgeable than me. I have to make some comments here though.

If Intel is smart enough to be able to create a chip that mysteriously performs better on benchmarks than real world apps, wouldn’t they also be smart enough to make a chip that is faster in those applications as well? I mean if they can pay an engineer to map out the transistor paths that will make benchmark X go faster can’t they do the exact same thing with everything else? Despite what some people think the engineers in BOTH AMD and Intel are not mentally deficient.

The other thing I hate is how people continuously attribute one chip or another’s superiority to a single item. Like AMD’s IMC or C2D’s larger cache/FSB architecture. This reminds me of the stupid arguments I used to get into in HS with buddies who drove Japanese Import cars while I drove my old Chevy. If I had a dime for every time they pointed out I didn’t have overhead cams or fuel injection I would be rich. I don’t care how you do it, if it works it works, and it doesn’t invalidate the performance. If anything if you are being outperformed by an “inferior” out of date technology what does that say about you? I will take my in the block cam Small Block Chevy FTW any day of the week!

However, the worst thing about this blog entry we are commenting on is that Shraikouboob (who is mentally deficient BTW) is using it for fodder to fuel his inane ramblings on his blog. If nothing else this sin is inexcusable!
April 26, 2007 1:34:20 AM

The stupidest thing about his rant is the Celeron being 40% slower with the decreased cache. What a farce. The new 2.0GHz Celeron performs roughly equivalent to AMD's Athlon64 3500+, a 2.2GHz CPU. That means the new Celerons are performing about 10% above a same-clocked AMD K8. If Scientia claims the Celeron to be 40% slower than the Core2Duo, he then must also accept that Core2Duo is by default 50% faster than K8. Not that he'll ever admit that, though (and he well shouldn't, because Core2Duo is usually only 20-25% faster). This proves that his mathematical deduction is wrong, which in and of itself completely discredits his whole argument of cache being the reason Core2 performs, which is a very old and worn-out argument that has been discredited many, many times already.
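Roughly, that chain of reasoning works out as follows (treating Scientia's "40% slower" loosely as "C2D is about 40% faster than the Celeron", which is how the post reads it; the inputs are the post's own estimates, not measured data):

```python
# Rough reconstruction of the deduction above, using the post's own estimates.
celeron_vs_k8 = 1.10    # Celeron ~10% faster than a same-clocked K8
c2d_vs_celeron = 1.40   # Scientia's claim, read as C2D ~40% faster than the Celeron
print(celeron_vs_k8 * c2d_vs_celeron)  # ~1.54 -> C2D would have to be ~50% faster
                                       # than K8, versus the observed 20-25%
```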

Scientia should stick to this forum instead of AMDzone. He might actually learn something here.
April 26, 2007 1:43:14 AM

My pseudoscience detector went off the chart!