Nvidia 400 series crippled by Nvidia

April 8, 2010 2:55:36 AM

In an effort to increase sales of their Tesla C2050 and Tesla C2070 cards, Nvidia has intentionally crippled the GTX 400 series' FP64 compute ability by a whopping 75%. If left alone, the GTX 480 would have outperformed ATI's 5870 in FP64 throughput by at least 20%. Here is a link to Nvidia's own forum where we have been discussing this. You will see there is also a CUDA-Z performance screenshot confirming it, on top of confirmation by Nvidia's own staff. Nvidia is not publicly making anyone aware that this is the case. Anandtech also snuck the update onto page 6 of a 20-page review of the GTX 480/470.
April 8, 2010 2:58:09 AM

I think that was reported months ago.
April 8, 2010 3:09:05 AM

It was rumored. Anandtech confirmed it on the 30th. It has not been picked up by any site other than Anand, nor has it been admitted in any specs released publicly by Nvidia. They are trying to keep it quiet until those that do care are stuck with the cards.
April 8, 2010 3:14:06 AM

Ah, I see. Sorry for the post, then.
April 8, 2010 4:02:24 AM

Actually, I was thinking of getting one of the GTX 480s until this rumor started (now confirmed), just to develop apps that can use OpenCL (better to use since it's vendor neutral), and I would have loved the DP performance.
April 8, 2010 11:40:05 AM

Agreed. Personally I was looking forward to completing the Milkyway@home project much sooner than expected. That would've been a good day.
April 8, 2010 1:07:07 PM

Well, you still have the 5870 or 5970 :D 
April 8, 2010 1:20:17 PM

You're right :)  Problem is, up until 3 days ago I was an Nvidia fanboy :(  So I've held out, waiting for the 400 series that was supposed to be so awesome. I didn't care how hot it was; I'd just get the EVGA Hydro Copper. I didn't care how much power it used; I have an efficient power supply. But reducing FP64 by 75%? That's just unacceptable. Of course, that matters most in number crunching. In games, it means either less accurate physics or slower physics rendering.
April 8, 2010 1:24:59 PM

So has anyone done a head to head comparison of these cards in FP64 apps?
April 8, 2010 2:01:15 PM

No :/  Everyone is too concerned with benchmarking it for games. There was an OpenCL FP32 benchmark done, and the 480 performed very well. However, Nvidia's current OpenCL implementation doesn't support FP64, so it couldn't be truly tested. Of course, ATI's OpenCL implementation DOES work with FP64. Nvidia promises to have it soon, but it's just more waiting. It's been long enough. I'm so tired...
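
If anyone wants to poke at it from the CUDA side in the meantime, a bare-bones throughput kernel is enough to see the cap. This is just a rough sketch of mine, not a rigorous benchmark (the launch sizes and iteration count are arbitrary, and you'd build with nvcc -arch=sm_20):

// FP64 throughput sanity check: a chain of dependent double-precision
// FMAs per thread. Swap double for float to estimate the FP64:FP32 ratio.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void fma_fp64(double *out, int iters) {
    double a = 1.000001, b = 0.999999, c = threadIdx.x * 1e-9;
    for (int i = 0; i < iters; ++i)
        c = a * c + b;                                   // 1 FMA = 2 FLOPs
    out[blockIdx.x * blockDim.x + threadIdx.x] = c;      // defeat dead-code elimination
}

int main() {
    const int blocks = 1024, threads = 256, iters = 100000;
    double *d_out;
    cudaMalloc(&d_out, blocks * threads * sizeof(double));

    cudaEvent_t start, stop;
    cudaEventCreate(&start);
    cudaEventCreate(&stop);
    cudaEventRecord(start);
    fma_fp64<<<blocks, threads>>>(d_out, iters);
    cudaEventRecord(stop);
    cudaEventSynchronize(stop);

    float ms = 0;
    cudaEventElapsedTime(&ms, start, stop);
    double flops = 2.0 * iters * blocks * threads;       // 2 FLOPs per FMA
    printf("FP64: %.1f GFLOPS\n", flops / ms / 1e6);     // FLOP per ms -> GFLOPS
    cudaFree(d_out);
    return 0;
}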
April 8, 2010 2:08:54 PM

Also, just a quick little FP64 comparison:
Un-gimped GTX 480 = 672.708 GFLOPS
ATI HD5870 = 554 GFLOPS
Actual GTX 480 = 168.177 GFLOPS

The GTX 480 would have been more than 21% faster than ATI's HD5870
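
For anyone wondering where those figures come from, here's the back-of-the-envelope math as a sketch (host-only code; it assumes the 480-core count, the 1401 MHz reference shader clock, 2 FLOPs per FMA, native Fermi FP64 at half the FP32 rate, and the GeForce cap at a quarter of that):

// Theoretical peak math for the GTX 480 (all inputs are the assumptions above).
#include <cstdio>

int main() {
    const double cores       = 480;
    const double clock_ghz   = 1.401;                    // reference shader clock
    const double fp32        = cores * clock_ghz * 2.0;  // ~1345 GFLOPS
    const double fp64_native = fp32 / 2.0;               // ~672.5 GFLOPS, un-gimped
    const double fp64_capped = fp64_native / 4.0;        // ~168.1 GFLOPS, as shipped
    printf("FP32 peak:       %.1f GFLOPS\n", fp32);
    printf("FP64 un-gimped:  %.1f GFLOPS\n", fp64_native);
    printf("FP64 as shipped: %.1f GFLOPS\n", fp64_capped);
    return 0;
}

That lands within a fraction of a GFLOPS of the CUDA-Z figures above; the small gap is presumably just the exact clock reading.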
April 8, 2010 2:12:36 PM

Yikes, and now it's completely decimated. Did they really think this would boost Tesla sales?
April 8, 2010 2:18:20 PM

The logic is beyond me. Maybe you'll understand: :) 

Instead of running CUDA apps on a GTX 470 ($350), you should get a C2050 ($2499)
Instead of running CUDA apps on a GTX 480 ($500), you should get a C2070 ($3499)

Basically, you pay 7x more for 4x the FP64 (and tech support you won't need; that's what forums are for, and CUDA has been around long enough).

At SETI.USA, we have our own tech support (as many places do). I'll just buy 7 ATIs instead.
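
And if you want to check those ratios, a quick sketch (the C2050's 515 GFLOPS FP64 is Nvidia's own spec; the GTX 470 figure assumes 448 cores at 1215 MHz capped to 1/8 of FP32):

// Sanity-checking "7x the price for 4x the FP64" (host-only code).
#include <cstdio>

int main() {
    const double gtx470_price = 350.0, c2050_price = 2499.0;
    const double gtx470_fp64 = 448 * 1.215 * 2.0 / 8.0;         // ~136 GFLOPS as shipped
    const double c2050_fp64  = 515.0;                           // Nvidia's quoted spec
    printf("Price ratio: %.1fx\n", c2050_price / gtx470_price); // ~7.1x
    printf("FP64 ratio:  %.1fx\n", c2050_fp64 / gtx470_fp64);   // ~3.8x
    return 0;
}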

April 8, 2010 2:35:42 PM

The gaming cards will be bought by gamers, and the scientific community will have to spring for the workstation cards. :) 
April 8, 2010 2:58:05 PM

Or ATI.
April 8, 2010 11:40:22 PM

Wow... I wonder if there will be a BIOS/driver hack to enable this?
April 8, 2010 11:44:50 PM

It appears their decision is final until they start losing market share (which they will). At that time, I'm sure they'll reverse it.

By the way, for anyone interested, integer speed (24- or 32-bit) is half of FP32 speed.
April 8, 2010 11:47:30 PM

It should be hackable through the BIOS or driver. Surely someone more familiar with exactly how the gimping was done will figure it out. *Cheers on a rogue Nvidia employee*
April 8, 2010 11:53:11 PM

JohnPMyers said:
It should be hackable through the BIOS or driver. Surely someone more familiar with exactly how the gimping was done will figure it out. *Cheers on a rogue Nvidia employee*


Not necessarily; it could be totally disabled in hardware.

EDIT: It also would not have looked as bad if it had been half speed as opposed to quarter speed. That would have been acceptable.
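
Worth noting for would-be hackers: the runtime API won't even show the cap. A quick property dump, sketched below, reports clocks and SM counts but nothing about the FP64 rate (the 32-cores-per-SM figure is the Fermi layout; you only see the gimping in measured throughput):

// Dump what the CUDA driver admits to: no field reports the FP64 cap.
#include <cstdio>
#include <cuda_runtime.h>

int main() {
    cudaDeviceProp p;
    cudaGetDeviceProperties(&p, 0);
    double ghz  = p.clockRate / 1e6;              // clockRate is in kHz
    double fp32 = p.multiProcessorCount * 32.0    // 32 cores per SM on Fermi
                * ghz * 2.0;                      // 2 FLOPs per FMA
    printf("%s (CC %d.%d): %d SMs @ %.0f MHz\n",
           p.name, p.major, p.minor, p.multiProcessorCount, ghz * 1000.0);
    printf("Theoretical FP32: %.0f GFLOPS; FP64 cap: not reported\n", fp32);
    return 0;
}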
April 9, 2010 2:21:54 AM

Shows how, once again, their marketing prefers to hobble their excellent ideas rather than give someone too good a deal.
April 9, 2010 2:26:42 AM

I can't say I'm surprised anymore.
April 9, 2010 4:30:25 AM

It's not like double precision affects gaming FPS, unless you're a professional making 3D models looking for a cheap way to get amazing double-precision performance. The gimp isn't meant to impact normal consumers.
April 9, 2010 5:02:45 AM

rofl_my_waffle said:
It's not like double precision affects gaming FPS, unless you're a professional making 3D models looking for a cheap way to get amazing double-precision performance. The gimp isn't meant to impact normal consumers.


Yes, that's true, but as a developer (not of games, but of highly threaded applications) I look forward to new devices and ways to get more performance from a computer, and having 650+ GFLOPS of DP compute performance would have been nice.
April 9, 2010 12:49:39 PM

So the issue isn't heat; it's the fact they want their Tesla products to sell?
April 9, 2010 1:05:24 PM

JohnPMyers said:
The logic is beyond me. Maybe you'll understand: :) 

Instead of running CUDA apps on a GTX 470 ($350), you should get a C2050 ($2499)
Instead of running CUDA apps on a GTX 480 ($500), you should get a C2070 ($3499)

Basically, you pay 7x more for 4x the FP64 (and tech support you won't need; that's what forums are for, and CUDA has been around long enough).

At SETI.USA, we have our own tech support (as many places do). I'll just buy 7 ATIs instead.


Does SETI work well on ATI cards? I know with Folding@Home Nvidia cards are far more suitable for this type of work.
April 9, 2010 2:27:09 PM

SETI@Home does not, at this time. However, SETI.USA is a BOINC crunching team, and we work on many projects besides SETI@Home. The project that will be hurt the most is Milkyway@Home, which is trying to create a 3D map of the galaxy. ATI currently runs those work units very fast and sits at the top of the leaderboards, where it will stay because of Nvidia's decision.

Some may say that cutting FP64 has no effect on games. You're wrong: PhysX relies on FP64, and using FP32 instead costs about 8 digits of accuracy. Game creators have limited the amount of PhysX they put in their games because Nvidia's cards couldn't complete the calculations in real time if the designers had gone full-scale with it. The game itself will play just fine, but it would have been made better in the first place if not for this.
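
If you want to see that digit loss yourself, a two-variable host-side test is enough (no GPU needed; float carries roughly 7 significant decimal digits, double roughly 16):

// The same tiny increment survives in double but vanishes in float.
#include <cstdio>

int main() {
    float  f = 1.0f;
    double d = 1.0;
    f += 1e-8f;   // below float's ~7-digit resolution, rounds away
    d += 1e-8;    // well within double's ~16 digits
    printf("float:  %.10f\n", f);  // prints 1.0000000000
    printf("double: %.10f\n", d);  // prints 1.0000000100
    return 0;
}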
April 9, 2010 2:33:15 PM

I'm getting an Asus 480 on Monday; I'll let everyone know what it's like. I'm mainly a gamer, so I don't know how much this FP64 issue will really affect me. Barely any games use PhysX in a massive way anyway. I think I might make my next card an ATI.
April 9, 2010 3:06:40 PM

Griffolion said:
I'm getting an Asus 480 on Monday; I'll let everyone know what it's like. I'm mainly a gamer, so I don't know how much this FP64 issue will really affect me. Barely any games use PhysX in a massive way anyway. I think I might make my next card an ATI.


Well, it wouldn't matter, since for gaming it's unimportant, and PhysX actually uses SP (FP32) calculations anyway, or else it would have crushed GPUs.

It's custom apps that need FP64 for precise calculations.
April 9, 2010 3:18:30 PM

Oh, I see. It's a pretty Nazi thing for Nvidia to do; it's essentially creating a false economy for everyone but themselves. If they keep down this road, their morals and ethics won't be the only things that are bankrupt.
April 9, 2010 3:24:07 PM

Griffolion said:
Oh, I see. It's a pretty Nazi thing for Nvidia to do; it's essentially creating a false economy for everyone but themselves. If they keep down this road, their morals and ethics won't be the only things that are bankrupt.


Well, I see it as their way of stopping people from using the GTX 470/480 for workstation FP64 computing and making them spend the extra for the Tesla cards. If they didn't do that, the only reasons to get the Teslas would be the product support (which you could find community-based anyway) and possibly more memory (though the GTX 480's 1.5GB is not a small amount).
April 9, 2010 3:25:25 PM

EXT64 said:
I think that was reported months ago.


Yeah, it was reported, widely discussed, and even confirmed by Nvidia weeks before launch; however, everyone still liked to talk about how much of a GPGPU monster it would be.

It's kinda like the HD5770, where DP was also crippled (totally removed, for now).

A quick example of the crippled GTX480's performance can be found in the HotHardware review;
http://hothardware.com/Articles/NVIDIA-GeForce-GTX-480-...


Funny how they attribute it to a driver issue and not to the actual crippling of the card. :pfff: 

This means SP apps will still run at full speed, but DP ones, like many of the BOINC clients, will not.

JohnPMyers said:
Nvidia is not publicly making anyone aware that this is the case.


They're not about to trumpet their limitations any more than ATI would (although most reviews were upfront about the lack of DP in the HD5770). They're not about to enlighten users and make them aware it's not the F@H monster they think it is.

And it's nice to see you mention MilkyWay@Home; it's one of the few clients that has enabled the LDS on the HD4K & HD5K cards, which shows their true potential, unlike crippled clients like the current GPU2 F@H one, which has sorely needed updating for the past 2 years.
April 9, 2010 3:28:50 PM

TheGreatGrapeApe said:
And it's nice to see you mention MilkyWay@Home; it's one of the few clients that has enabled the LDS on the HD4K & HD5K cards, which shows their true potential, unlike crippled clients like the current GPU2 F@H one, which has sorely needed updating for the past 2 years.


And that makes me sad, since I fold for Tom's with 2 x 4870s when not gaming (which is about 20 hrs/day).

Can't wait for GPU3, assuming they will be using OpenCL.
a b Î Nvidia
April 9, 2010 3:47:49 PM

GPU3 is supposed to be a different client from the OpenCL one; they're meant to be two separate projects. But updates are few, unfortunately, and it's taking forever, so they might drop the dedicated GPU3 and just adapt the OpenCL client as GPU3. The OpenCL client should be a bit more universal (CPU & GPU), so I'm hoping that's what comes out first.

However, the last report was about big problems with the OpenCL client, so two separate paths may still be a good idea. But they dropped development of Brook GPU, so I'm not sure how that will work;
http://folding.typepad.com/news/2010/01/important-updat...
April 9, 2010 4:02:06 PM

TheGreatGrapeApe said:
GPU3 is supposed to be a different client from the OpenCL one; they're meant to be two separate projects. But updates are few, unfortunately, and it's taking forever, so they might drop the dedicated GPU3 and just adapt the OpenCL client as GPU3. The OpenCL client should be a bit more universal (CPU & GPU), so I'm hoping that's what comes out first.

However, the last report was about big problems with the OpenCL client, so two separate paths may still be a good idea. But they dropped development of Brook GPU, so I'm not sure how that will work;
http://folding.typepad.com/news/2010/01/important-updat...


Well, I do hope they get everything worked out, and maybe have just one OpenCL client for the whole computer, though I wouldn't mind having one for my GPU(s) and one for my CPU, just to be able to fully use the 4870s I have.
a b Î Nvidia
April 9, 2010 4:43:53 PM

With the OpenCL client you should be able to use CPUs and GPUs simultaneously, but I don't know what issues they're having, so if the implementation requires a lot of work like the GPU ones did, maybe it won't be as universal as promised/hoped.