Nvidia Predicts 570X GPU Performance Increase

August 26, 2009 10:43:41 PM

Damn that's a lot.
Score
20
August 26, 2009 10:46:29 PM

I don't know why, but I don't doubt it.
Score
19
August 26, 2009 10:46:45 PM

That Bugatti Veyron looks awfully realistic.
Score
-3
August 26, 2009 10:47:57 PM

ubernoobie: That Bugatti Veyron looks awfully realistic.

To me it looks like something you would see in NFS, nothing impressive.
Score
17
August 26, 2009 10:58:01 PM

Now that I'd like to see :)  Really though, that's a bit outlandish. NVIDIA had better have something big up its sleeve to back up claims like that, because in my experience, nothing in this industry advances in leaps like that.
Score
20
August 26, 2009 10:58:57 PM

I just want to hear about their next GPU already.
Score
18
August 26, 2009 11:01:24 PM

If they continue their recent course of a new chip every other year (instead of every 6 months like back in the day), 570x looks like a typo...
Score
11
Anonymous
August 26, 2009 11:10:06 PM

Yeah right... Nvidia has some magical breakthrough up their sleeves: 570x the performance in the same power envelope, since there's no way they can dissipate any more heat than they already are...

Intel already tried and failed at this kind of breakthrough. They whipped up a frenzy with their sham terascale demonstration, then later realized that Larrabee was going to suck and quietly watered down expectations, and it wouldn't even surprise me now if they quietly cancelled its release altogether.
Score
14
August 26, 2009 11:22:26 PM

Yeah... And the Zune HD has 25 days of music playback time, right?
Score
4
August 26, 2009 11:34:33 PM

Here's my 'translation' of this article: the amount of GPU computing (using the GPU for non-graphics work) will increase by 570x. Considering the nearly non-existent uses for GPU-based computing today, it's definitely conceivable that there could be 570x more uses for GPU-based computing. This just seems like whoever picked up the story took the figure out of context. It's technically true, but not in the way we're reading it.
Score
21
August 26, 2009 11:52:42 PM

"a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe"

I think someone got "mere" and "staggering" backwards...

On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment.

-mcg
Score
15
August 26, 2009 11:55:03 PM

will it run Crysis?...............Better?
Score
13
August 26, 2009 11:58:58 PM

570x might not be that unrealistic. Nvidia already have a roadmap and probably some rough designs for stuff that they'll be releasing in 2015. The thing is, they've kind of got an easier job than CPU manufacturers, purely because they only deal in highly parallelizable code. When Intel/AMD release a new CPU they can't just double the number of cores; they have to make those individual cores faster than before, because a lot of tasks simply can't be parallelized well. GPU makers don't have these problems. They can literally double the number of number-crunching units without making those units any faster and end up with a GPU that can do twice the number-crunching work.

Think of it this way: right now Nvidia's top-spec GPU is the G200b found in the GTX 285. It's made using a 55nm fabrication process. Current expectations are that 22nm fabrication will take over in about 2011-2012, with 16nm coming in about 2018 at the latest. Using a 22nm process, the same size die as in the G200b could fit 8 copies of the G200b in it; 16nm would give 16. You could say that GPU makers get that kind of scaling 'for free'. It's then up to them to come up with advances in design etc. A 16nm process would also run a fair bit faster anyway, without the use of extra cores.

16nm die fab = 16x as many processing units + faster processing units

Put together 16nm die fab + 6 years of refinements and R&D + larger dies if necessary, and a 570x increase is not that far-fetched a claim.
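
A rough back-of-the-envelope check of that 'for free' scaling argument, assuming transistor count scales with the square of the feature-size ratio (real processes don't scale this ideally, as later replies point out). The 8x/16x figures above match a 65nm baseline, which the poster clarifies further down the thread; against the G200b's 55nm the ratios come out lower.

```python
# Naive area-scaling estimate: shrinking features from old_nm to new_nm fits
# roughly (old_nm / new_nm)^2 as many transistors into the same die area.
def area_scaling(old_nm: float, new_nm: float) -> float:
    return (old_nm / new_nm) ** 2

for old in (55, 65):          # G200b is 55nm; the original figures used 65nm
    for new in (22, 16):
        print(f"{old}nm -> {new}nm: ~{area_scaling(old, new):.1f}x the transistors")

# 55nm -> 22nm: ~6.2x   55nm -> 16nm: ~11.8x
# 65nm -> 22nm: ~8.7x   65nm -> 16nm: ~16.5x   (the '8x' and '16x' quoted above)
```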
Score
1
Anonymous
August 27, 2009 12:15:33 AM

Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accommodate 2.5x as many transistors as 55nm, not 8x. Currently they don't believe that they'll get past 22nm due to quantum effects, so 16nm is a moot point until they tell us otherwise. Even if your horrible math were right, how do you explain the 570x increase with your theoretical 16x the transistors per die size? Quadruple the die size (not realistic, and yields would be horrible) to get 64x, and then you still need to somehow wring 9x the performance per transistor out of an already mature tech. When pigs fly.
Score
5
August 27, 2009 12:26:06 AM

That guy sure knows how to sell a lot of propaganda. I expect Nvidia's stock to soar tomorrow.
Score
0
August 27, 2009 12:29:48 AM

MrCommunistGen: "a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe" I think someone got "mere" and "staggering" backwards... On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment. -mcg


I think it was sarcasm. But you know how that doesn't always translate well through text.
Score
0
Anonymous
August 27, 2009 12:35:44 AM

I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.
Score
16
August 27, 2009 12:48:18 AM

That's a 2.88x increase every year for 6 years. I don't buy it.
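
A quick sanity check of that figure, assuming the 570x gain compounds evenly over six yearly generations:

```python
# Annual factor needed to reach 570x after 6 compounding years: 570^(1/6)
annual = 570 ** (1 / 6)
print(f"~{annual:.2f}x per year")                # ~2.88x per year
print(f"check: ~{annual ** 6:.0f}x in 6 years")  # ~570x
```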
Score
5
August 27, 2009 12:52:05 AM

schuffer: I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.


While I doubt the 570x prediction myself, it is actually what he said, or at least what his slide said. Check the pictures.

http://blogs.nvidia.com/nTersect/
Score
3
August 27, 2009 12:54:12 AM

I bet there's gonna be a 3x increase with 570 naming schemes in order to confuse us.

Here's the proof:
GeForce 8800 GT - GeForce 9800 - GTS 250
Score
5
August 27, 2009 12:57:20 AM

Spanky_McMonkey: Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accommodate 2.5x as many transistors as 55nm, not 8x. Currently they don't believe that they'll get past 22nm due to quantum effects, so 16nm is a moot point until they tell us otherwise. Even if your horrible math were right, how do you explain the 570x increase with your theoretical 16x the transistors per die size? Quadruple the die size (not realistic, and yields would be horrible) to get 64x, and then you still need to somehow wring 9x the performance per transistor out of an already mature tech. When pigs fly.


It was a very rough calculation and, besides, I was doing it on an area basis. The xx nm refers to the minimum size of features that can be drawn, but it's a length. It's like resolution: if you can fit twice as many of something in widthwise and twice as many lengthwise, then you can fit a total of 4x as many things in.

So, if the feature length you're using is 55nm (apologies, my original calculations were actually with 65nm), then going to 22nm could fit 6.25x the amount of stuff in, and 16nm could fit 11.8x the amount of stuff into the same space. Of course, if you also increase the layer count proportionately (a little unrealistic imo, since layers are pretty thick due to all the extra stuff used), then the same volume as a 55nm chip could fit 15.6x as much processing power with the switch to 22nm, or 40.6x as much with a jump to 16nm.

40x could easily become 80x with a doubling of the core frequency, 80x could become 320x by making the dies four times as large (or simply having four GPU chips on one board), and 320x could become 570x with improvements in design. Yes, in some respects transistors are mature tech; however, when you get down to the 16nm level it's not quite as "mature", since you have all kinds of quantum problems/advantages to deal with.
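
A minimal sketch of that chain of multipliers, taking the poster's assumptions (ideal volume scaling of the feature size, a frequency doubling, and 4x the die area or GPU count) at face value; the replies above and below dispute most of them.

```python
# Rough chain from 55nm to the ~570x claim, using the figures quoted in the post.
base_nm, target_nm = 55, 16

volume_scaling = (base_nm / target_nm) ** 3   # ~40.6x, if layer count also scales
freq_factor    = 2                            # assumed doubling of core frequency
die_factor     = 4                            # 4x larger dies, or four GPUs per board

total = volume_scaling * freq_factor * die_factor
print(f"volume scaling ~{volume_scaling:.1f}x, running total ~{total:.0f}x")  # ~325x
print(f"design improvements still needed for 570x: ~{570 / total:.2f}x")      # ~1.75x
```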
Score
0
August 27, 2009 12:58:21 AM

Probably 570x compared to an Intel i945G.
Score
9
August 27, 2009 1:00:00 AM

Unless something comes along that completely changes the GPU, like a new design, some kind of new material to replace silicon, or making the GPU even bigger (let's say the size of a motherboard :p ), there's no chance it's going to be 570x. Not to say I don't want it to happen.
Score
-1
August 27, 2009 1:06:08 AM

With that much power, do we even need CPUs anymore?
Score
-1
Anonymous
August 27, 2009 1:07:47 AM

It's possible. CPUs have gone from a 266MHz Pentium II to the 3GHz processors of today, and that happened in less than 10 years.
Score
2
Anonymous
August 27, 2009 1:18:59 AM

Spanky, Spanky, Spanky... Aside from still not being exactly correct on many points, that's still not realistic. That would be a 1000+ watt circuit the way you describe it; 4x the die size at twice the operating frequency with 4/8/16/whatever times the number of transistors firing isn't going to happen, period. The power savings from smaller transistors are offset by the increased number of transistors; if they make the die 4x bigger, they'd have to reduce the clock speed by at least half, not double it.
Score
1
Anonymous
August 27, 2009 1:28:20 AM

winter: No, because the circuit design and lithography processes are mature now; they weren't then. Unless something changes, the theoretical max is 22nm, which is half of what we're at today, so we'll get one good doubling of our CPUs (pretty much just more cores). x86 won't see any significant improvements: the instruction sets are maxed out, there's no revolution left, and clock speeds won't exceed 4GHz.

The only possible revolution I see is if AMD's Fusion successfully integrates stream processing into the CPU with shared cache, where GPGPU offloading could then be done automagically for all applications by the CPU without having to copy data from main memory to GPU memory. Even then, there's no guarantee that it will consistently work well; only time will tell.
Score
3
August 27, 2009 1:49:12 AM

Well, I hope this is true.
Score
-1
August 27, 2009 2:14:42 AM

I'd say it's not that unrealistic. It's hard to make a 1-to-1 comparison; however, if you look at GPU performance numbers from 6 years ago compared to the ones today, you are talking drastically higher numbers. I don't know that we see anything close to a 100x jump in performance every year for that time frame, but you do see a major jump over the course of 6 years.

It's pretty hard to imagine a 500x performance bump from what we have today. That would be a monster of a card!
Score
-1
Anonymous
August 27, 2009 2:18:53 AM

"Sounds like Huang is talking Star Trek!"

Bitchin'!
Score
0
August 27, 2009 2:25:59 AM

WTF does it matter when all the drones would rather have the '90s graphics of their Wii, 'cause it's "cheap and cute"?

This is good, but it still seems that the consoles are doing a better job of destroying gaming than the PC guys are doing at saving it...
Score
0
August 27, 2009 2:43:42 AM

I didn't know the G92 architecture could be pushed that hard... j/k ;). Looking forward to the future of computing.
Score
2
August 27, 2009 2:50:02 AM

I think the claim was made because of what a GPU is now and where he wants to take it.

I actually think it's credible. The PPU right now isn't that efficient or complicated. As Tegra and OpenCL advance further, the PPU will get more efficient and complex. When you factor in that a PPU is just about equivalent to a CPU, and video cards right now pack 600 of these, the amount of raw processing power you have there is staggering.
Score
-1
August 27, 2009 3:22:27 AM

Now he has to make Nvidia reach that goal, or else in 6 years it will haunt him! Bwahahaha
Score
-1
August 27, 2009 4:05:30 AM

Perhaps he's one of those people who used to think the near future would be like the Jetsons, too.
Score
-1
August 27, 2009 4:07:59 AM

But hey... will future GPUs be able to play CRYSIS 2?
Score
1
August 27, 2009 4:13:47 AM

tipoo: To me it looks like something you would see in NFS, nothing impressive.

Ha, joke's on you, that's an actual photograph!!!!!!!
Score
-1
August 27, 2009 4:29:11 AM

Well, if it's even 10% right, it will be incredible. Maybe stuff like Ghost in the Shell isn't so far away (crossing my fingers and laughing like a tech nerd in palpitations).
Score
-1
August 27, 2009 4:58:05 AM

This is like Intel saying that the Pentium 4 would scale to 10 GHz, before Intel was forced to abandon the NetBurst processors.

I call BS on 570x. I wouldn't be surprised if GPUs are growing faster than CPUs, but not by that much. I'd say ~200x max.
Score
3
August 27, 2009 5:40:46 AM

CRAP!!! I knew I should have waited to get a new video card... No, but really, I can see it now... GTX 650 in 5-way SLI... 10,000 watt power supply... and all my life's income paying the electric company.
Score
-1
August 27, 2009 5:59:14 AM

Jen-Hsun has seen ATI's plans?
Score
3
August 27, 2009 8:07:44 AM

Give me one of those GPUs, I would like to see the CPU bottleneck it :)
Score
-1
August 27, 2009 8:19:07 AM

intesx81: Here's my 'translation' of this article: the amount of GPU computing (using the GPU for non-graphics work) will increase by 570x. Considering the nearly non-existent uses for GPU-based computing today, it's definitely conceivable that there could be 570x more uses for GPU-based computing. This just seems like whoever picked up the story took the figure out of context. It's technically true, but not in the way we're reading it.


Totally agree. Read the Nvidia blog provided by NocturnalOne: he talks of the 570x increase in CPU+GPU performance, it's in the slide...

Score
-1
August 27, 2009 8:48:35 AM

20 FPS in Crysis with Nvidia's best card now; multiply that by 570x and, if my math is correct, in 6 years it would be about 11,400 frames per second. :)

Seriously though, photo-realistic games wouldn't be too far behind, would they?
Score
-1
August 27, 2009 10:18:31 AM

I think you guys have misunderstood his statement. He wasn't saying that a graphics card in 6 years will be 570 times faster than the fastest today; he was saying the amount of calculation done on GPUs in the future will be 570 times the amount done on GPUs today. I think this will be due to people using GPUs a lot more instead of CPUs for their calculations, helped along by OpenCL and cloud computing, which should finally make it easy to use GPUs for general-purpose calculations and should also make it possible for these processors to be fully utilized all the time instead of sitting idle when people are not playing games on them.
Score
-1
August 27, 2009 10:47:25 AM

Spanky_McMonkey: Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accommodate 2.5x as many transistors as 55nm, not 8x.

They are usually built in two dimensions:
55nm * 55nm = 3025 nm^2
22nm * 22nm = 484 nm^2
Increase: 3025/484 = 6.25x

You just have to love it when little narcissistic kids make complete fools of themselves.
Score
2
Anonymous
August 27, 2009 12:22:10 PM

LeJay: Are you an electrical engineer? Are transistors perfectly square? Do they leave proportionately less space between them when they are smaller? Go dig up any of AMD's or Intel's 90nm and 45nm chips, compare the number of transistors to the die size, then report your findings back here, including the math. Right now you're just some smug little twat who can do multiplication; I'm not particularly impressed.
Score
-1
August 27, 2009 1:08:07 PM

Although it might be technically possible, I doubt a 570x increase will happen. It wouldn't make sense from a $ standpoint. Nvidia comes out with something, then about 6 months later AMD has to come out with something better, and vice versa. The companies usually deliver only a 50-100% increase in GPU power each time, which is what they need to just stay ahead of the other guy. At a 100% increase per year we are talking a 64x increase in 6 years. It would cost too much to increase 200% per year, which would be 3x the performance per year. It would be nice if it happened, but I just don't see it at the present rate. It would be interesting to see how much faster GPUs today are compared to 6 years ago.
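
A quick check of those growth rates, assuming the gains compound once per year over six years:

```python
# Compounded growth over 6 yearly generations for different per-generation gains.
for gain in (0.5, 1.0, 2.0):      # 50%, 100%, 200% increase per year
    print(f"{gain:.0%}/yr -> {(1 + gain) ** 6:.0f}x in 6 years")
# 50%/yr -> ~11x, 100%/yr -> 64x, 200%/yr -> 729x; 570x needs about 188%/yr.
```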
Score
-1