I keep getting confused when journalists suggest that benchmark programs like 3DMark are poor judges of a card's performance. There has to be some kind of consistent, identical test to accurately compare competing cards. Lars seems to be suggesting that everyone run their own benchmarks on their own games to get real information... but how can you compare results from individual benchmarks running what could be drastically different in-game scenes? The results would be so skewed they'd be useless.

His other solution, having Futuremark optimize their code for both Nvidia and ATI cards, is simply unrealistic. Futuremark doesn't have the time or resources to write individual code paths for every iteration of every new card or technology. And what about competition? Do we all really want another Intel vs. AMD war going? What about SiS cards, or any other wet-behind-the-ears manufacturers?

At Best Buy I sell cards all the time to people who, just like Lars said, figure 256MB has GOT to be better than 128MB. They fall for unlabeled bar-graph comparisons on card boxes showing claims like "Over 50% faster!"... faster than what? The fault lies with the consumer for being fooled and not taking the time to listen to a technical explanation.

So I say keep 3DMark, and run the benchmarks using only Microsoft WHQL-certified drivers. I don't think people are going to switch drivers for each game they're playing. Maybe that would clear some things up.
"Who is General Failure, and why is he reading my drive?"