nichomach

Distinguished
Mar 22, 2001
5
0
18,510
While I'd note that the P4 2.53GHz appears to have a lead over the Athlon XP 2600+ - varying from marginal to commanding - it seems as though at least one set of commonly used benchmarks has certain flaws which may have been deliberately introduced. <A HREF="http://www.vanshardware.com/reviews/2002/08/020822_AthlonXP2600/020822_AthlonXP2600.htm" target="_new">Van's Hardware</A>

I know this is from Van Smith, and thus will be regarded with a certain *ahem* scepticism :D, but the AMD presentation they're hosting does raise some questions that IMO need answering with regard to BAPCo SysMark 2002. Any comments?
 

slvr_phoenix

Splendid
Dec 31, 2007
6,223
1
25,780
Comments? Well other than Van Smith is a ...

Oh, sorry. ;)

Seriously though, some of Van's arguments are kind of (or totally) stupid. When was the last time anyone remembers even working with a file that was less than a meg in size (or a meg in memory for compressed file architectures)? Today's apps are sucking up the available storage media and <i>most</i> people are using those apps. So weighting the tests towards more bandwidth testing than before makes perfect sense to me, as <b>real-world use</b> of PCs uses more bandwidth than it used to.

The Excel sorting point of his is totally nuts in my opinion. I use Excel at <i>least</i> 50% for sorting. Not usually large sorts, but sorts nonetheless. But then, maybe I'm rare? Who knows. Maybe I'm not.

And the concept of basing the weight of a test on how long it takes to complete makes a lot of sense to me. Who cares how long a background task takes? Who cares if a task takes 300 milliseconds or 500 milliseconds? It's the tasks that take noticeable amounts of time that matter. So a perfect way of designing a benchmark to reflect this is simply to weight the result of each test according to how long it takes to run.

It's the same thing that I do when I try to decide what areas of code to optimize, because it puts precedence on the areas that a user will most notice when running my software.

More importantly, if a processor totally sucks at something, then it'll take longer to run that benchmark. Then that suckiness is brought more to the surface, and benchmarks that the processor ran really well will make less of an impact on the end score. So it seems like a perfectly logical way to score a benchmark to me.
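Just to put some numbers behind that, here's a rough sketch of what time-weighted scoring looks like. This is my own illustration with made-up task names and timings, not BAPCo's actual formula:

<pre>
# Hypothetical time-weighted benchmark score -- illustrative only, not SysMark's real math.
# Each task gets a per-task score (reference time / measured time), and its weight in the
# aggregate is its share of total measured runtime, so the tasks you actually sit and wait
# on count for the most.

def weighted_score(tasks):
    """tasks: list of (name, reference_seconds, measured_seconds) tuples."""
    total_measured = sum(measured for _, _, measured in tasks)
    score = 0.0
    for name, reference, measured in tasks:
        per_task = reference / measured        # > 1.0 means faster than the reference box
        weight = measured / total_measured     # slower tasks carry more weight
        score += weight * per_task
    return 100.0 * score                       # the reference machine scores 100

# A chip that's quick on two short tasks but chokes on one long one gets dragged down:
print(weighted_score([("office", 30, 25), ("media encode", 120, 180), ("web", 10, 9)]))  # ~74.8
</pre>

Weighting by elapsed time like that works out to comparing total runtime, which is exactly why the slow spots dominate the final number.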

Obviously, other points hold much more merit than this, like why certain filters were removed and replaced with repetitions of the same filters. That one clearly needs fixing if we are to ever again believe that it's a non-partisan benchmark.

The rest of it though, really, just sounds like AMD and/or Van whining over the fact that Athlons <b>do</b> in fact have some deficiencies and benchmarks <b>are</b> actually showing them. Big whoop. If AMD doesn't like it, then why don't they just increase their bandwidth, fix their implementation of SSE, and add in SSE2? <b>That</b> would be a real kick in the teeth to Intel!

Whining about how a benchmark points out the weaknesses in their CPU architecture though is just plain childish.

We've had weaknesses in P4s such as the Willamette (and now Wilty) and even Northwood As for a while now, and they've been reamed in benchmarks in the past. Intel is finally fixing them, and now we get whining from AMD that their chips aren't being benched in a way that only shows their chips' advantages? Boo hoo. "Welcome to the club." is what I'd expect Intel to say, if they were so smug.

<pre><A HREF="http://www.nuklearpower.com/comic/186.htm" target="_new"><font color=red>It's all relative...</font></A></pre>
 

bront

Distinguished
Oct 16, 2001
2,122
0
19,780
<i>The rest of it though, really, just sounds like AMD and/or Van whining over the fact that Athlons do in fact have some deficiencies and benchmarks are actually showing them.</i>
What? Someone whining over benchmarks? Nah, never happens :wink:

<i>If AMD doesn't like it, then why don't they just increase their bandwidth, fix their implementation of SSE, and add in SSE2?</i>
Some of this is easy, too. Simply increase the FSB. It doesn't even need a redesign of the core, just adjusting the multiplier and a few other designations. Further down the road, they can start using QDR on things like Hammer and give it the ability to interface with faster memory.

I predicted all the way back in January that 200 MHz DDR (DDR400) would be what DDR systems needed by January 2003, and wondered why AMD wasn't aiming to hit that market with Hammer. It looks like now they've just gotten lazy and decided that they'd rather not pump up the FSB, especially when it would make their PR rating jumps different and the MHz labeling a bit more awkward (133/66 is easier to deal with than 166/83 for most people).

Now, with T-breds that can hit 166 and a relabeling of the PR rating, they had the perfect opportunity to do it, and they failed. It's obvious, if you look at the OCed Athlon CPUs, that there is a lot to gain from an FSB boost with the current Athlon architecture, as there were some jumps in performance that were much greater than a simple MHz increase would have delivered.
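To put rough numbers on it (the multipliers below are made up for illustration, not actual AMD SKUs): the core clock is just FSB times multiplier, while the bus bandwidth scales with the FSB alone, so two chips at roughly the same core clock can have very different bandwidth on tap:

<pre>
# Back-of-the-envelope FSB math (hypothetical multipliers, not real AMD part specs).
# Core clock = FSB clock x multiplier; with DDR, the effective bus rate is 2x the FSB clock.

def athlon_clocks(fsb_mhz, multiplier):
    core_mhz = fsb_mhz * multiplier
    ddr_effective_mhz = fsb_mhz * 2   # DDR transfers data twice per bus clock
    return core_mhz, ddr_effective_mhz

# Roughly the same core clock, noticeably more bus bandwidth at 166 MHz FSB:
print(athlon_clocks(133, 13.0))    # (1729.0, 266) -> a "DDR266" front-side bus
print(athlon_clocks(166, 10.5))    # (1743.0, 332) -> roughly a "DDR333" front-side bus
</pre>

That's why the OC results look the way they do: the FSB bump raises bandwidth across the board, not just the core clock.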



The Boogie Knights: Saving beautiful monsters from ravening princesses since 1983.