cefoskey

Distinguished
Jan 20, 2003
I keep getting very confused about how journalists suggest that benchmark programs like 3DMark are poor judges of a card's performance. There has to be some kind of consistent, identical test to accurately compare competing cards. It seems like Lars is suggesting that everyone run their own benchmarks on their own games to get real information... but how can you compare results from individual benchmarks running what could be drastically different in-game scenes? The results would be so skewed they'd be useless.

His other solution, having Futuremark optimize their code for both Nvidia and ATI cards, is simply ridiculous. Futuremark doesn't have the time or resources to write individual code paths for every iteration of every new card or technology. And what about competition??? Do we all really want another Intel vs. AMD war going? What about SiS cards, or any other wet-behind-the-ears manufacturers?

At Best Buy I sell cards all the time to people who, just like Lars said, figure "hey, 256MB has GOT to be better than 128MB." They fall for unlabeled bar-graph comparisons on card boxes with claims like "Over 50% faster!"... faster than what!? The fault lies with the consumer for being fooled and not taking the time to listen to a technical explanation. So I say keep 3DMark, and run the benchmarks using only Microsoft hardware labs (WHQL) certified drivers. I don't think people are going to switch drivers for each game they're playing. Maybe that would clear some things up.

"Who is General Failure, and why is he reading my drive?"
 
WHQL certified drivers contained cheats, so that's not an answer. Futuremark 'could' optimize for ATI and Nvidia; that's supposed to be what the members contribute (info and skills). BUT I don't like that idea either, because it still doesn't tell you how a card compares to non-Nvidia, non-ATI cards.
Also, the fault DOESN'T lie with the consumer for being fooled; even the 'edjumakated' reviewers and even this community were fooled, so that doesn't wash.

The problem is optimizations, not just synthetic benchmarks or just Futuremark. An optimized Quake 3, Splinter Cell, or UT2K3 is no more a predictor of performance in Doom ]|[ than 3DMark is.
The suggestion that everyone run their own benchmarks is not new; in fact, I suggested it a while ago and even wrote to Kyle Bennett about it after his article at [H]. The only true (or even CLOSE) way to find out the performance of one card over another is to get both cards and then RMA the lower-performing one (for whatever reason you can make up [compatibility, etc.]), although that may not be financially possible for a lot of people. Even a third-party tester's test is susceptible: every reproducible 'in game' test is open to cheats once the standard becomes known to ATI and Nvidia. So what are we left with? Less information than we felt we previously had (although a lot of that was tainted too).
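(For anyone who actually wants to try the 'run your own benchmarks' route, here's roughly how it works in Quake III Arena, off the top of my head; exact console commands can vary a bit between point releases:

\g_synchronousClients 1 <- so the demo records cleanly
\record mytest <- start recording a custom demo named 'mytest'
(play through the scene you actually care about)
\stoprecord
\g_synchronousClients 0
\timedemo 1 <- switch playback to benchmark mode
\demo mytest <- plays it back and reports the average FPS

The point being that a demo YOU recorded yesterday is a lot harder for a driver to detect and 'optimize' for than the canned demos everyone runs.)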
Now it WILL simply be that you are buying a product based on PR and company image. There will be no objective way to discriminate between cards, just the subjective opinions of reviewers who will report on 'playability' or the 'experience', as they call it at [H]. Someone else's opinion of the 'experience' is not going to translate to all gamers.

As for your AMD/Intel analogy, they are at least collaborating on SYSmark 2003, so THEY have come to terms with proper benchmarking.

I think the only way this is going to get cleared up is if there IS court action over the public statements these companies make, but with that in mind it's likely they'll cut back on their public statements too and just let the reviewers make the statements on their behalf.

Basically we're all $crewed!


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i>! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font> <font color=red>GREEN</font></A> GA to SK :evil: