More dirty tricks from Nvidia? - Page 2

October 21, 2010 1:58:20 AM

Mousemonkey said:
You keep on with this "six months late" thing like they were changing things in that time, which they weren't; once the design is laid down it takes eighteen months to two years to go from design to working silicon, and that six months you keep on about was lost due to non-working silicon, not a redesign. What part of that do you not want to understand? :heink: 


So you know for a FACT that they didn't do ANYTHING in 6 months to improve tessellation performance? I know they weren't 6 months late due to changing up the design, that would make them far later, but it's not even close to impossible that they reworked things, whether SW or HW.
October 21, 2010 2:06:40 AM

ares1214 said:
So you know for a FACT that they didn't do ANYTHING in 6 months to improve tessellation performance? I know they weren't 6 months late due to changing up the design, that would make them far later, but it's not even close to impossible that they reworked things, whether SW or HW.

Software is an ever-changing thing but the hardware isn't; once the designs were given to TSMC that was it, the ball was in the foundry's court to make that design a working reality, which is something they failed to do for about six months.
October 21, 2010 2:08:37 AM

Mousemonkey said:
Software is an ever-changing thing but the hardware isn't; once the designs were given to TSMC that was it, the ball was in the foundry's court to make that design a working reality, which is something they failed to do for about six months.


Well, they didn't fail to do it for 6 months, they just took 6 months too long. I wonder when Nvidia gave them the designs anyway...
October 21, 2010 2:14:07 AM

ares1214 said:
Well, they didn't fail to do it for 6 months, they just took 6 months too long. I wonder when Nvidia gave them the designs anyway...

You need to go and check some history then; TSMC had problems with their 40nm process back in April/May 2009 when they were making the 4770 GPU, and it took them until earlier this year to sort those problems out.
October 21, 2010 7:56:42 AM

Mousemonkey said:
You need to go and check some history then; TSMC had problems with their 40nm process back in April/May 2009 when they were making the 4770 GPU, and it took them until earlier this year to sort those problems out.


So how did ATI manage to get their cards out 9 months earlier?

http://home.akku.tv/~akku38901/HD6/6.jpg

See how Nvidia is slowing down overall fps, because it hurts them less? Same image quality, slower performance on ALL systems. If you support this, you are just a fool.
October 21, 2010 8:13:51 AM

jonpaul37 said:

If you will only buy Nvidia or AMD then you are only lessening your choices.

If you have several computers, it makes a lot of sense to standardize on one brand or the other, just to avoid driver problems when moving parts around.

I have three running systems using a 4870, GTX260, and an old 640 MB 8800GTS.
October 21, 2010 10:38:07 AM

eyefinity said:
So how did ATI manage to get their cards out 9 months earlier?

http://home.akku.tv/~akku38901/HD6/6.jpg

See how Nvidia is slowing down overall fps, because it hurts them less? Same image quality, slower performance on ALL systems. If you support this, you are just a fool.

Always resorting to insults is a sure sign that you know you've lost any argument.
October 21, 2010 11:20:16 AM

Of course there is a subtle difference between:
1. sending out a pack saying that AMD are cheating by using an unofficial (as far as FurMark is concerned) driver profile, so could you please use this hack with our cards to enable it as well for testing purposes,
AND
2. our cards can't do that as well as Nvidia cards, so don't use it as a test :cry: 

Mactronix :kaola: 
October 21, 2010 11:25:06 AM

mactronix said:
Of course there is a subtle difference between:
1. sending out a pack saying that AMD are cheating by using an unofficial (as far as FurMark is concerned) driver profile, so could you please use this hack with our cards to enable it as well for testing purposes,
AND
2. our cards can't do that as well as Nvidia cards, so don't use it as a test :cry: 

Mactronix :kaola: 


True, however I think now there is a difference. If I understand things correctly, they have a way to tessellate better, or more efficiently. Perhaps this benchmark won't let them, or Nvidia doesn't want it to get out. Either way, if either one can make things better for all of us, I say go for it.
October 21, 2010 11:36:49 AM

ares1214 said:
True, however I think now there is a difference. If I understand things correctly, they have a way to tessellate better, or more efficiently. Perhaps this benchmark won't let them, or Nvidia doesn't want it to get out. Either way, if either one can make things better for all of us, I say go for it.


Yes, but they could have just released a pack like Nvidia did pointing this out, without resorting to insinuating that Nvidia has people in its pocket and is trying to stitch AMD up over ONE, that's ONE, benchmark.
Seriously, as if people are going to say "oh, look how good the Nvidia card performs in that single game, I must get one and I will ignore all the AMD results that show it in a better light".
Well, actually, OK, yes, those people are out there :lol:  but the majority of people who read reviews understand that one result doesn't make a review. They are just making themselves look bad by making such a big fuss over it, if you ask me.

Mactronix :) 
October 21, 2010 11:48:54 AM

Mousemonkey said:
Always resorting to insults is a sure sign that you know you've lost any argument.


Wait, you're saying you support this?

Let me be quite clear, so everybody can figure out exactly where everybody stands.

Nvidia is bribing software houses to deliberately slow down their own cards because they know it slows down AMD cards more, making sure everybody loses out.

Are you saying you support this action, mousemonkey?

And you, mactronix.
October 21, 2010 11:52:27 AM

eyefinity said:
Wait, you're saying you support this?

Let me be quite clear, so everybody can figure out exactly where everybody stands.

Nvidia is bribing software houses to deliberately slow down their own cards because they know it slows down AMD cards more, making sure everybody loses out.

Are you saying you support this action, mousemonkey?

I don't see the proof of your claims; all I can see is an AMD fan kicking up a fuss about a broken early build of a benchmark and insulting anyone who doesn't agree with him.
October 21, 2010 12:03:25 PM

It is so hard to be confident in a benchmark that has yet to be finalized or optimized. I think the best thing you can do is wait for them to finish the program first before you make any assessments about preferring ATI or Nvidia.
October 21, 2010 12:03:48 PM

Mousemonkey said:
I don't see the proof of your claims; all I can see is an AMD fan kicking up a fuss about a broken early build of a benchmark and insulting anyone who doesn't agree with him.


I stated quite clearly that anybody who agreed with Nvidia slowing down all systems - their own and AMD's - was a fool, at which point you took it upon yourself to be annoyed.

What would you call somebody who agreed to shaft themselves for Nvidia's gain? Don't skirt the issue: do you support Nvidia's actions in slowing down everybody's cards or not?
October 21, 2010 12:04:18 PM

^+1
Just because AMD say they have demonstrated to Ubisoft tessellation performance improvements that benefit all GPUs doesn't automatically mean the Nvidia GPUs are slowed by not using it, now does it?
It's totally possible that what AMD demonstrated was an improvement that did benefit all GPUs. What baseline this is measured from is anyone's guess.
It's also totally possible that what AMD demonstrated, while showing improvements, didn't let the Nvidia hardware run to its potential, while what they have chosen to go with does, but causes the AMD cards some issues.
How much of an issue is the thing here, and only running the benchmark as is and then running it again after AMD pimp the drivers will tell how much of this is bluster and how much is actually true.
You would expect a company to want to get the best overall performance from both camps, but if it came down to having even (as in equal) lower scores/performance, or higher ones with Nvidia and slightly lesser for AMD, what are you gonna do?

Mactronix :) 
October 21, 2010 12:08:36 PM

eyefinity said:
I stated quite clearly that anybody who agreed with Nvidia slowing down all systems - their own and AMD's - was a fool, at which point you took it upon yourself to be annoyed.

What would you call somebody who agreed to shaft themselves for Nvidia's gain? Don't skirt the issue: do you support Nvidia's actions in slowing down everybody's cards or not?



You prove it's slowing down everyone's cards, then we can talk.
Did anyone test with an S3 card yet? ;) 

Mactronix :) 
October 21, 2010 1:13:53 PM

mactronix said:
You prove it's slowing down everyone's cards, then we can talk.
Mactronix :) 



From the Nvidia email -

http://www.gamersdailynews.com/story-20361-HAWX-2-Direc...

Quote:
Quad patches with multiple displacement maps aim to render 6-pixel-wide triangles typically creating 1.5 Million triangles per frame not including planes, trees, and buildings!


Emphasis mine.

http://www.rage3d.com/board/showpost.php?p=1336399414&p...

Quote:
That's all well and good but it completely neglects that sub-16-pixel triangles don't show a visual increase in detail, meaning that by going to tessellation factors of, say, 16, you're just stalling the rasterizer and tanking efficiency for no IQ gain. This is true on both AMD and NVIDIA architectures.



Emphasis mine again.

Nvidia are so idiotic they actually sent the same email to all the review sites. This is quite technical stuff, so the average reviewer will not know the difference between 6- and 16-pixel triangles. The reviewers at sites like Rage3D and B3D certainly do, however.

Their willingness to sacrifice their own cards performance just so they can "win" one benchmark is proof of how desperate they are getting.
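
For anyone who wants to sanity-check those two quotes, here is a rough back-of-envelope sketch. The 1.5 million triangles/frame figure comes from the email quoted above; the 1920x1080 frame size is my own assumption, picked only for illustration:

```python
# Rough back-of-envelope behind the 6- vs 16-pixel triangle argument.
# Assumption (mine, not from the thread): a 1920x1080 frame. The 1.5M
# triangles/frame figure is taken from the Nvidia email quoted above.

frame_pixels = 1920 * 1080          # ~2.07 million pixels on screen
triangles_per_frame = 1.5e6         # figure from the quoted email

avg_pixels_per_triangle = frame_pixels / triangles_per_frame
print(f"~{avg_pixels_per_triangle:.1f} px of screen area per triangle (before any overdraw)")

# The Rage3D quote puts the point of diminishing returns around 16-pixel
# triangles; smaller ones mostly stall the rasterizer, since GPUs shade in
# 2x2-pixel quads and tiny triangles waste most of each quad.
print("below the ~16 px visibility threshold:", avg_pixels_per_triangle < 16)
```

That works out to well under two pixels of screen area per triangle on average, which is exactly the regime the Rage3D poster says adds no visible detail.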
October 21, 2010 3:28:12 PM

My thinking on this is the following. ATI have been into tessellation for 4-5 generations already, and I guess they have figured out what the most efficient way of doing it is. And I agree with them that using tessellation at extreme levels does NOT increase IQ!!!

Just check Unigine with Moderate and Extreme tessellation. You can barely tell the difference if you switch between two screenshots, and not at all when you look at dynamic stuff.

And for what, 2x slower fps?!? You could use all that excess computational power to have better lighting, which is probably the worst part of games currently, or better texturing/shaders etc. Whatever it is, it will improve the quality more than doing excessive tessellation.

And I am SURE Nvidia didn't design Fermi with excess tessellation in mind, but more as a computational beast, which as a second benefit allowed them to do more tessellation than ATI. That is not bad, but milking this factor alone when 1 year ago you said DX11 is not going to do anything is just pathetic.

For the sake of anything, I don't want anything tessellated at more than factor 3-4, because you are not going to see the difference, or it's not going to be worth it.
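
Roughly speaking (my own illustration, not something the poster wrote), the reason extreme factors hurt so much is that triangle count grows with roughly the square of the tessellation factor, while the on-screen size of each triangle shrinks by the same amount:

```python
# Approximate triangle counts for a quad patch at different tessellation
# factors: a factor of N splits each edge into N segments, giving an N x N
# grid of cells with ~2 triangles each, so the count grows roughly with N^2.

def approx_triangles_per_quad_patch(factor: int) -> int:
    return 2 * factor * factor

for factor in (2, 4, 8, 16, 32, 64):
    print(f"factor {factor:2d}: ~{approx_triangles_per_quad_patch(factor):5d} triangles per patch")

# Going from factor 4 to factor 64 is ~256x the geometry, while each triangle
# covers roughly 1/256 of the screen area it did before, far past the point
# where the extra detail is still visible.
```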

Please develop better lighting techniques!! (See Crysis 2, which has some pretty interesting stuff coming.)

Tessellation came to the party a little late, I think, as even without it some recent games have quite good models. It would have been great if we had had tessellation 5 years ago.

Now let's move on to a more raytraced look in games.
October 21, 2010 8:48:02 PM

DX11 has been put down by a few people.
I, for one, haven't forgotten, and to raise it as a flag in victory now, well, let's let them decide what this looks like to others.

Anyways:
nVidia hasn't had a DX11-capable card for devs to work on; what that means exactly is, the mid-range cards just weren't here, so there couldn't be anything done about that. And since nVidia chose its own path as well, it is more greatly affected by its design, meaning lower cards return lower tess perf, unlike ATI, where it scales with the overall perf of the card, or rather, bottlenecks elsewhere will appear before tess bottlenecks show up.
Having said this, ATI also went the cheap route in their implementation of tessellation.
While I agree Richard Huddy's comments are right, period, even so, as has been seen in the past, moving into a new DX usually means adjusting your HW from gen to gen to increase what each DX brings.
This was ATI's approach, and it has been accepted since the beginning of such things.
Thing is, nVidia, which hasn't had a mid-range card till recently, missed the boat on HW usage in many new DX11 games, and is now trying to use other methods to catch up, meaning a stupid bench that is bad in a game that won't show it in the game itself, once drivers are set.
Now, what's really, really bad about all this is...
I'll have to read [H]'s review, as they won't use the bench, but the game itself.

Shame on nVidia and any dev that follows suit, not just for allowing this, but for even conceiving of it being done.

I hope ATI doesn't pull this garbage; if they do, shame on them as well.

As for the 6xxx series, the real shame here is that the improvements seen on the new cards won't be seen in that crappy benchmark for HAWX 2, which is counterproductive and foolish.