Nvidia cheating on 3dmark?

June 23, 2008 10:50:45 PM

Charlie Demerjian; is there more that needs to be said? The Inquirer has less factual information than the Enquirer.
June 23, 2008 10:56:46 PM

I normally don't believe the Inquirer either. I just thought people would be jumping on this, since there is a lot of Nvidia hate due to the 4850 being released.
June 23, 2008 10:58:05 PM

bydesign said:
Charlie Demerjian; is there more that needs to be said? The Inquirer has less factual information than the Enquirer.

Actually, that particular article makes some very good points concerning PhysX. The fact that it's used to boost 3DMark itself is biased. UT3 is the only game that actually uses PhysX to any significant degree. Mainstream games most often used for benchmarking, like Crysis, World in Conflict, and CoD 4, use generic physics engines that run off the CPU. That PhysX-boosted 3DMark score would not reflect in-game performance for 99% of games.
June 23, 2008 10:59:24 PM

There isn't hate because of the 4850; there is hate because of their constant greed and refreshing of cards.

I don't care if they cheat (I think they cheated in the summer of 2007) as long as they don't cheat in real games.

ATi already holds the flag for picture quality, which is better than on Nvidia products.

Anonymous
June 23, 2008 11:00:07 PM

It's still not cheating.
June 23, 2008 11:12:10 PM

If it's against Futuremark's terms as outlined in the article, and it's recommended to reviewers for comparing cards, then it's cheating.

It'll be called something else just like last time, so they'll have to refer to it as something like an error, err... floptimizations instead of cheating, or else the Big Green Legal Team will be after Futuremark.

Anywhoo, the moment this became an option that influenced the final results was the moment 3DMark should be removed from any GPU review, because it's no longer a GPU test. Even the CPU test is now invalidated for boards that might have different IGPs on them.

I never had much faith in bungholiomarks period, but to me this invalidates it even for those who thought it meant something, because now even comparing a GF9600GT to a GF8800GS will no longer give you an indication of the graphics cards themselves. That makes the test invalid not just between IHVs but even between different models from the same IHV.
June 23, 2008 11:38:16 PM

I actually stopped using 3dmark as a benchmark when I OCed my system and the 3dmark06 went from 10000 to 8000. All my games showed improvement, even the crysis benchmark showed improvement, but not 3dmark06. Now I do it to watch the pretty pictures.
June 23, 2008 11:39:09 PM

Thatoneguy4 said:
I actually stopped using 3dmark as a benchmark when I OCed my system and the 3dmark06 went from 10000 to 8000. All my games showed improvement, even the crysis benchmark showed improvement, but not 3dmark06. Now I do it to watch the pretty pictures.

You should really find out why that's the case... :na: 
June 23, 2008 11:59:40 PM

I think this should only be allowed when they get the same kind of improvements in Futuremark that they get in games!

But anyway, I don't even bother with all those benchmarks, only the simple stuff to test stability; my games are the real benchmarks!
June 24, 2008 12:11:10 AM

TheGreatGrapeApe, may I ask you a question: can't the physics driver be programmed in the same language as the generic physics engine, so the driver will be able to communicate with the game's physics engine and perform better?
June 24, 2008 12:32:43 AM

bydesign said:
Charlie Demerjian; is there more that needs to be said? The Inquirer has less factual information than the Enquirer.



+2
June 24, 2008 12:45:58 AM

I don't know enough about nV's PhysX implementation to answer that.
It would seem that while it's possible, it might require more tweaks than just the driver; you would need to tweak the actual application itself as well (otherwise it might treat the PPU the same as a GPU, and I doubt they are completely interchangeable just yet). As it uses what appears to be the standard PhysX driver, it's likely somewhat emulated through the graphics driver, but to what extent I don't know.
Also remember there's no generic physics engine; it was the PhysX engine that was added to Futuremark's Vantage application. This isn't a generic API, as if it were some Direct Physics test that was IHV-agnostic.

However, that's almost irrelevant to the question, as it relates to Futuremark setting out specific guidelines for their application and nVidia breaking those guidelines, according to the article.
On the surface that means little, as it's still a beta driver and nV could argue they were just testing things, Vantage being one of the few PhysX-capable apps out there that allowed for fairly consistent testing. So this was more for educational/experimental purposes and not meant for the general public to use for official 3DMark runs, as I'm sure their comments on this will state (along with mentioning not submitting the drivers for FM approval, etc).
However, it's how nV promotes this driver that matters to the discussion. Right now it's still considered a 'leaked beta' for the most part, so the responsibility/culpability isn't quite there yet, since they can deny it ever being intended for public consumption.

What would make it interesting to me would be if it allowed it to be installed alongside a card of another maker (like ATi or S3) where the primary display was the other company's, but the secondary card could be a GF8400 whose job is to do physics.

Anywhoo, I think Futuremark will have to say something on this, because as much as there is a large community of overclockers who will enjoy this just to get a higher score in the HOF, it once again gives FM the headache of the usefulness of its benchmark being called into question for anything other than simply getting bungholiomarks. They'll either have to protect this strongly or else be further relegated to an irrelevant number like the Windows Experience Index. The wording of their EULA and partner agreements was supposed to avoid any questionable floptimizations that FM didn't approve of and which they could quash. However, I doubt there's anything they could ever do as long as nV doesn't make them part of the official WHQL driver set. They could release beta after beta after beta with them without ever having a legal issue IMO. And of course some review sites (like [H]) would use the betas as if they were WHQL, since everyone knows that's what the enthusiasts are using anyways.

Hopefully it means the end of 3Dmarks in reviews, but I doubt that, as it's never seemed to stop people in the past.
June 24, 2008 1:04:22 AM

I love the end where it said all three companies in the fight are trying to hide something! lol, hope they are hiding something "good".

Thanks for the reply, TheGreatGrapeApe. I still owe you an apology! :) SORRY!!!
June 24, 2008 4:11:42 AM

Something tells me YouGamers forums are going to turn into a battlezone in the next couple of months. :D 

Also, the Inquirer? I would sooner believe the local hobo than that crap-o-rama. :sleep: 
June 24, 2008 8:36:58 PM

Hard to say... FM predicts that physics is the next big thing in games, so they put it in their new 3DMark program. ATI has said for a long time that they will do physics with their GPU. The only problem is that Nvidia is the only company that has PhysX at this moment. Is it possible to use Havok in 3DMark? I really hope so, or else Vantage is not useful.
In the future we will also see ATI make its physics work in games.

It's almost the same as saying that ATI cheats when it's using DX10.1 instead of DX10 like its competitors... It's a new way of using the capacity of the GPU, and as long as they say what they are doing, it's not cheating, as long as using DX10.1 is not cheating.

It seems that Nvidia's PhysX really works! http://www.tomshardware.co.uk/forum/251992-15-physx-980...

All in all we can blame MS for this, because they have not come up with a common physics API for Windows. When they do, it will be easy to level the playing field. The GPU that can give us the best graphics and best physics at the same time would be the winner. It would also make it easier for game developers to put more realistic physics effects in their games, just like they are now starting to use new DX10 effects in their newest games.
June 24, 2008 8:52:05 PM

hannibal said:

It's almost the same as saying that ATI cheats when it's using DX10.1 instead of DX10 like its competitors... It's a new way of using the capacity of the GPU, and as long as they say what they are doing, it's not cheating, as long as using DX10.1 is not cheating.


Not really, since Futuremark can restrict it to a DX10 path, and ATi running the shaders more efficiently by even changing the order would constitute cheating. Futuremark was pretty specific about what the PPU segment was supposed to do, and what it can't do (use GPU), and so doing it another way would be against their partner agreement. If this were to be correct, then ATi should launch a shader recompile to make FM's shaders more compatible to their hardware and run it at their speed, not at FM's.
Could you not call that cheating?

Quote:
It seems that Nvidia's PhysX really works! http://www.tomshardware.co.uk/forum/251992-15-physx-980...


No one is saying it doesn't work; Havok works too. But the point is that the way this is used in 3DMark Vantage pretty much nullifies the benchmark for anything except the exact same setup.

Quote:
All in all we can blame MS for this, because they have not come up with a common physics API for Windows. When they do, it will be easy to level the playing field.


Do you really think M$ is going to make the effort now that intel and nV own two competing products? M$'s best idea would be to watch these two companies work out the kinks and then worry about trying to gain support. M$ has little benefit to offer at this point, and neither intel nor nV would welcome them into the ring either.

This is just going to get worse before it gets better, especially when nV's questionable future comes into play.
June 24, 2008 8:54:18 PM

ATI cards should use Havok to increase 3DMark scores if Nvidia raises its scores with PhysX.
June 24, 2008 9:13:18 PM

intel & ATi could demo that it could be done, but as both the Futuremark and PhysX IP belongs to someone else, their effort would cause them legal nightmares.

Also it wouldn't be that easy, and it definitely wouldn't be as 'innocent' a transgression as nV could claim. To do so without FM's support would be a clear attempt to artificially boost scores, while right now nVidia can innocently say "we're just trying to apply our IP to PhysX, we will work on an application detect/disable function sometime in the future when we can dedicate some resources to it" (ie never).

Unless FM adds a patch for Havok support, there's nothing intel & AMD could do without getting sued and causing PR headaches.
June 24, 2008 9:57:01 PM

TheGreatGrapeApe said:
Not really, since Futuremark can restrict it to a DX10 path, and ATi running the shaders more efficiently by even changing the order would constitute cheating. Futuremark was pretty specific about what the PPU segment was supposed to do, and what it can't do (use GPU), and so doing it another way would be against their partner agreement. If this were to be correct, then ATi should launch a shader recompile to make FM's shaders more compatible to their hardware and run it at their speed, not at FM's.
Could you not call that cheating?


I got the point. I was not too familiar with the rules... I'll take a 2-minute penalty for this...

And I agree that the only solution that would remedy this would be for FM to make a Havok patch for Vantage (or remove the physics part completely...).

I really was thinking that they did these physics parts just because they think GPU physics is the next stage that computer GPUs will take in the future... I have been around since the times of the Commodore VIC-20, and the development has been quite fast since then. But the faults are still the same... not reading the manual... (this time the Vantage driver rules...)
June 24, 2008 10:43:02 PM

TheGreatGrapeApe said:
intel & ATi could demo that it could be done, but as both the Futuremark and PhysX IP belongs to someone else, their effort would cause them legal nightmares.

Also it wouldn't be that easy, and it definitely wouldn't be as 'innocent' a transgression as nV could claim. To do so without FM's support would be a clear attempt to artificially boost scores, while right now nVidia can innocently say "we're just trying to apply our IP to PhysX, we will work on an application detect/disable function sometime in the future when we can dedicate some resources to it" (ie never).

Unless FM adds a patch for Havok support, there's nothing intel & AMD could do without getting sued and causing PR headaches.


Then Futuremark should take out PhysX if they aren't going to support Havok. If they don't support it, then they are pretty much an Nvidia-influenced bench, and the PC gaming community should speak up.

3DMark scores are useless anyway, but a clueless person who measures by 3DMark numbers instead of its underlying data might get the wrong idea.
June 24, 2008 11:13:16 PM

Yeah, I think that if FM doesn't react in a way that seems to bring balance, then it'll just be ignored even more so than it is now.
June 26, 2008 7:30:24 PM

So now it's ATi's unapproved mod versus nV's unapproved mod.

What do you want to bet nV tries to put in countermeasures to disable the NGOHQ hack?
If they do, that would prove they're just trying to help out more gamers, right? :sarcastic:

Of course they could even go so far as to sue NGOHQ under the DMCA for reverse engineering their implementation, if it's not covered by the SDK.
However, doing so would put such a chill through the dev community towards CUDA that it wouldn't be worth the effort IMO.
June 26, 2008 8:26:18 PM

L1qu1d said:
There isn't hate because of the 4850; there is hate because of their constant greed and refreshing of cards.

I don't care if they cheat (I think they cheated in the summer of 2007) as long as they don't cheat in real games.

ATi already holds the flag for picture quality, which is better than on Nvidia products.


Who cares if they cheat in real games! It's more power for the end user!

I think getting these massive scores with the CPU test is BS, because it's made to stress the CPU, not the GPU.
June 26, 2008 9:57:01 PM

doomsdaydave11 said:
Who cares if they cheat in real games! It's more power for the end user!


No it's not; it's less power to the end user, more fake numbers.
You want to make sure it's displaying what it should be, not simply inflating the benchies.
Either company could have the settings show high and DX10 and instead render low quality and DX9/8/7 partial precision or whatever; would that really be giving you more or less power?
What you want is the OPTION to have questionable optimizations, but also to know about them and have the ability to disable them should you so wish.

This has been discussed many times over during the FX era and the shimmering GF7 era, and the end result is that better performance is good, but questionable fl/optimizations need to be open to the public and be options that can be controlled/disabled by the end user.
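As a toy illustration of that principle (purely hypothetical Python; these setting names are made up, not any vendor's actual driver options), the idea is simply that every aggressive optimization sits behind a switch the user can see and flip:

```python
# Hypothetical sketch: optimizations exposed as visible, user-controllable
# switches rather than being silently applied by the driver.
driver_settings = {
    "shader_reordering": True,        # disclosed optimization, on by default
    "reduced_precision_hack": False,  # questionable one, off unless the user opts in
}

def describe_pipeline(settings):
    # Fake render pipeline; only the visibility of the switches matters here.
    steps = ["geometry", "shading", "post-processing"]
    if settings["reduced_precision_hack"]:
        steps.append("(partial precision)")
    enabled = [name for name, on in settings.items() if on]
    return " -> ".join(steps) + "  [active optimizations: " + ", ".join(enabled) + "]"

print(describe_pipeline(driver_settings))
```

The pipeline itself is fake; the point is only that both switches are disclosed and reversible by the end user.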

June 26, 2008 10:43:35 PM

But what if the inflated number is actually reflected in games? Would that still be cheating, since it obeys FM's rule of giving an idea of the performance in the real world?
June 26, 2008 11:15:31 PM

That's the difference between optimization and floptimization to me.

However an optimization may still be a cheat depending on the guidelines and role of the benchmark.

In the past ATi did a very good job of shader combining and re-ordering in their drivers to improve performance in some games. This is a good thing; however, when they did those same things for 3DMark (can't remember if 2003 or 2005), MadOnion/Futuremark took offence and said that's not right. Now, it's something that kept the same render result, was not dependent on camera angle or on shader replacement, just a reordered call. Where the standard HLSL or app implementation might say do it ABCD, to make it more efficient for their design ATi told the driver to send it through as ADCB, which made it faster. Everything is rendered correctly, and it is still processing everything properly regardless of input. And it's also what they do for games. So is it a cheat or what? IMO if either the developer or the user feels it's not kosher, then they can call it a cheat (needing to back it up with something more than name calling). However, I would call it a legitimate optimization, not a floptimization, which is why I made up the distinction (since everyone got their panties in a bunch over the word cheating).
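To make the reordering idea concrete, here's a toy sketch (illustrative Python only, nothing to do with FM's or ATi's actual shader code) showing that when the passes are independent, running them in a different order gives an identical result; only the scheduling changes:

```python
# Toy model: four independent, additive "shader passes" applied to a pixel value.
# Because no pass depends on another's output, any execution order yields the
# same final value; a driver is free to pick the order its hardware prefers.
def pass_a(p): return p + 0.10   # stand-in for an ambient term
def pass_b(p): return p + 0.25   # stand-in for a diffuse term
def pass_c(p): return p + 0.05   # stand-in for a specular term
def pass_d(p): return p + 0.02   # stand-in for a fog term

def render(pixel, order):
    # apply each pass in the given order
    for shader_pass in order:
        pixel = shader_pass(pixel)
    return pixel

app_order    = [pass_a, pass_b, pass_c, pass_d]   # order the application submitted (ABCD)
driver_order = [pass_a, pass_d, pass_c, pass_b]   # order the driver prefers (ADCB)

assert abs(render(0.5, app_order) - render(0.5, driver_order)) < 1e-9
print("Identical output either way; only the scheduling differs.")
```

The real question Futuremark faced was not whether the output matches, but whether a benchmark should allow per-IHV scheduling at all.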

I agree with FM on that call; however, I don't understand how they can call that cheating because it reorganizes the shader to flow differently through the GPU, yet when nV makes the PPU calls go through a GPU instead (contrary to the 3DMark licensing) they said they might approve of it as acceptable. To me it's not only at least the same kind of thing; for the benchmark in question it's definitely not doing the same task as the previous setup.

A dedicated PPU didn't suddenly become a GPU when not in use, and in a game the GPU will likely never be dedicating all its resources to physics, so it's artificially inflated in a way that will likely never occur in a real game. If it involved 2 GPUs, and one only ever did physics while the other only ever did graphics, then it would replicate the old setup that the benchmark was built for, and it would also play that way in a game.
However, since a game might let it go to a 60/40 GPU/PPU workload split, but doubtfully 0/100 in any game, the inflated numbers seen in the benchmark won't reflect the improved in-game performance of adding a 10X more powerful actual CPU, nor will it give you the performance of the 2-5X more powerful standalone PPU that it appears to be in that benchmark.
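A quick back-of-the-envelope sketch of that point, with made-up numbers (these capacities are illustrative assumptions, not measured figures):

```python
# Hypothetical capacities, in arbitrary "physics units per frame".
GPU_PHYSICS_CAPACITY = 100.0   # the GPU doing nothing but physics (benchmark-style)
PPU_PHYSICS_CAPACITY = 20.0    # a standalone PPU

def gpu_physics_throughput(physics_share):
    # In a real game most of the GPU is busy rendering graphics,
    # so only a fraction of it is available for physics work.
    return GPU_PHYSICS_CAPACITY * physics_share

benchmark_split = gpu_physics_throughput(1.0)   # 0/100 graphics/physics -> 100.0
in_game_split   = gpu_physics_throughput(0.4)   # 60/40 graphics/physics -> 40.0

print(f"Benchmark-style split looks {benchmark_split / PPU_PHYSICS_CAPACITY:.0f}x a PPU")  # 5x
print(f"Realistic in-game split is {in_game_split / PPU_PHYSICS_CAPACITY:.0f}x a PPU")     # 2x
```

With numbers like these, the benchmark would advertise a 5X advantage that shrinks considerably once the GPU also has to draw the game.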

While a little long, I hope that explained it, as my position is more complex than just "it's cheating".

Now, my easy statement is: it's Bungholiomarks; they shouldn't count for anything to anyone, other than as a stability test for that one system.

And that's the short and sweet of that one. ;)
June 27, 2008 12:01:45 AM

That's great info from you, TheGreatGrapeApe. I think I can see the situation from another angle now.

Now I think maybe Intel and AMD have something similar, but Nvidia was the "dumb" one to try it and get these results. Oh well, but I think one day it's going to have an influence on game physics using the GPU.
June 27, 2008 12:18:16 AM

Oh yeah, it's definitely the future; the question is how you go about implementing it. And I think that's what's causing the problems all around right now. I think this would be less of an issue if it were someone like M$ controlling the API through something 'more open' like DirectPhysics.
June 27, 2008 1:32:44 AM

You think Intel+AMD have something similar as well, like Nvidia's driver "optimization" for physics? Because there is a lot of speculation saying they have a "dodgy" version of the driver somewhere in their company's computer database.
June 27, 2008 2:36:42 AM

Intel has Havok FX as part of the Havok IP they bought. AMD has signed on to use Havok as it has both a CPU and a GPU option too, and IMO that's the more realistic long-term side-by-side future in products like Larrabee and AMD's Fusion; it's a more similar development strategy. However, I would still prefer an open standard not controlled by any of the hardware makers.

However, it's very early in this competition. Most physics right now is done the traditional way, even in titles listed as PhysX titles (GRAW and UT3 both use CPU-based physics as their underlying engines, GRAW = Havok, UT3 = Epic's own engine).

It's easier to develop for the CPU right now, but the power advantage is obvious in the GPUs, so the question becomes what developers go with as the 'killer app', and not just demo levels like the scant ones seen in GRAW and UT3.
June 27, 2008 7:33:35 PM

TheGreatGrapeApe said:
No it's not, it's less power to the end user, more fake numbers.
You want to make sure it's displaying what it should be, not simply inflating the benchies.
Either company could have the settings show high and DX10 and instead render low quality and DX9/8/7 partial precision or whatever; would that really be giving you more or less power?
What you want is the OPTION to have questionable optimizations, but also to know about them and have the ability to disable them should you so wish.

This has been discussed many times over during the FX era and the shimmering GF7 era, and the end result is that better performance is good, but questionable fl/optimizations need to be open to the public and be options that can be controlled/disabled by the end user.
That's not what I meant. I definitely don't want inflated benchmarks, or the "Way it's meant to be played" trash in Assassin's Creed.

But if it's an edge that ATi can't get (as much as I'm loving ATi/AMD right now), like using PhysX in games (again, not the 3DMark CPU test, because that's BS), then yes, I would consider that good cheating; more power to the end user.

I do agree with you, Ape.
