
Nvidia Predicts 570X GPU Performance Increase

Source: Tom's Hardware US | 101 comments

GPU performance will increase up to 570x in the next six years.

TG Daily is reporting that Nvidia CEO Jen-Hsun Huang made an astonishing prediction, claiming that GPU computing performance will increase dramatically over the next six years, a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe.

According to Huang, who made the prediction at the Hot Chips symposium at Stanford University, the advancement would open the door to advanced forms of augmented reality and the development of real-time universal language translation devices. Wait, a universal translator? Sounds like Huang is talking Star Trek!

Huang also said that such advancements in GPU computation would boost a number of applications, including interactive ray tracing, CGI simulations, energy exploration, and other "real-world" workloads.
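
As a rough illustration (not a figure from Nvidia's presentation), the headline numbers imply a constant per-year growth factor of roughly 2.88x for GPUs and 1.2x for CPUs, the same back-of-the-envelope arithmetic a couple of commenters below work through. A minimal Python sketch of that calculation:

```python
# Back-of-the-envelope check: what constant per-year factor is implied by the
# quoted six-year multipliers? The 570x and 3x totals are the only inputs
# taken from the article; everything else here is illustration.
years = 6
for name, total in (("GPU", 570.0), ("CPU", 3.0)):
    per_year = total ** (1 / years)
    print(f"{name}: ~{per_year:.2f}x per year to reach {total:g}x in {years} years")
# GPU: ~2.88x per year; CPU: ~1.20x per year.
```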

Comments
  • 20 Hide
    leafblower29 , August 26, 2009 10:43 PM
    Damn that's a lot.
  • 19 Hide
    radnor , August 26, 2009 10:46 PM
    I don't know why, but I don't doubt that.
  • -3 Hide
    ubernoobie , August 26, 2009 10:46 PM
    that Bugatti Veyron looks awfully realistic
  • 17 Hide
    tipoo , August 26, 2009 10:47 PM
    ubernoobie: that Bugatti Veyron looks awfully realistic

    To me it looks like something you would see in NFS, nothing impressive.
  • 20 Hide
    maximus559 , August 26, 2009 10:58 PM
    Now that I'd like to see :)  Really though, that's a bit outlandish. NVIDIA had better have something big up its sleeve to back up claims like that, because in my experience, nothing in this industry advances in leaps like that.
  • 18 Hide
    Gin Fushicho , August 26, 2009 10:58 PM
    I just want to hear about their next GPU already.
  • 11 Hide
    Blessedman , August 26, 2009 11:01 PM
    If they continue their recent course of a new chip every other year (instead of every 6 months back in the day), 570x looks like a typo...
  • 14 Hide
    Anonymous , August 26, 2009 11:10 PM
    Yeah right... Nvidia has some magical breakthrough up their sleeves, 570x the performance in the same power envelope, since there's no way they can dissipate any more heat than they already are...

    Intel already tried and failed at this kind of breakthrough: they whipped up a frenzy with their sham terascale demonstration, then later they realized that Larrabee was going to suck, then they quietly watered down expectations, and it wouldn't even surprise me now if they quietly cancelled its release altogether.
  • 4 Hide
    lejay , August 26, 2009 11:22 PM
    Yeah... And the zune hd has 25 days of music playback time, right?
  • 21 Hide
    intesx81 , August 26, 2009 11:34 PM
    Here's my 'translation' of this article: The amount of GPU computing (using the GPU for non-graphics work) will increase by 570x. Considering the nearly non-existent uses for GPU-based computing today, it's definitely conceivable that there could be 570x more uses for GPU-based computing. This just seems like whoever picked up the story took the figure out of context. It's technically true, but not in the way we're reading it.
  • 15 Hide
    MrCommunistGen , August 26, 2009 11:52 PM
    "a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe"

    I think someone got "mere" and "staggering" backwards...

    On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment.

    -mcg
  • 13 Hide
    pakardbell486dx2 , August 26, 2009 11:55 PM
    will it run Crysis?...............Better?
  • 1 Hide
    Spanky Deluxe , August 26, 2009 11:58 PM
    570x might not be that unrealistic. NVidia already have a roadmap and probably some rough designs for stuff that they'll be releasing in 2015. The thing is, they've kind of got an easier job than CPU manufacturers, purely because they only deal in highly parallelizable code. When Intel/AMD release a new CPU they can't just double the number of cores; they have to make those individual cores faster than before, because a lot of tasks simply can't be parallelized well. GPU makers don't have these problems. They can literally double the number of number-crunching units without making those units any faster and end up with a GPU that can do twice the number-crunching work.

    Think of it this way: right now nVidia's top-spec GPU is the G200b found in the GTX 285. It's made using a 55nm fabrication process. Current expectations are that 22nm fabrication will take over in about 2011-2012, with 16nm coming in about 2018 at the latest. Using a 22nm process, the same size die as the G200b could fit 8 copies of the G200b in it. 16nm would give 16. You could say that GPU makers get that kind of scaling 'for free'. It's then up to them to come up with advances in design etc. A 16nm process would also run a fair bit faster anyway without the use of extra cores.

    16nm die fab = 16x as many processing units + faster processing units

    Put together 16nm die fab + 6 years of refinements and R&D + larger dies if necessary = 570x increase not that far fetched a claim.
  • 5 Hide
    Anonymous , August 27, 2009 12:15 AM
    Spanky_Deluxe: That is some of the most epic fail math ever. 22nm will accommodate 2.5x as many transistors as 55nm, not 8x. Currently they don't believe that they'll get past 22nm due to quantum effects, so 16nm is a moot point until they tell us otherwise. Even if your horrible math was right, how do you explain the 570x increase with your theoretical 16x the transistors per die size? Quadruple the die size (not realistic, and yields would be horrible) to get 64x, then you still need to somehow wring 9x the performance per transistor out of an already mature tech. When pigs fly.
  • 2 Hide
    xaira , August 27, 2009 12:25 AM
    that Veyron looks like DX11, this was ray traced:
    http://www.lightandmatter.com/html_books/5op/ch01/figs/computer-ray-tracing.jpg
  • 0 Hide
    dman3k , August 27, 2009 12:26 AM
    That guy sure knows how to sell a lot of propaganda. I look for the nVidia stocks to soar tomorrow.
  • 0 Hide
    deepgray , August 27, 2009 12:29 AM
    MrCommunistGen"a mere 570 times that of today's capabilities in fact, while CPU performance will only increase a staggering 3x in the same timeframe"I think someone got "mere" and "staggering" backwards...On topic: I think that this sounds more like marketing hyperbole. I agree with intesx81's assessment.-mcg


    I think it was sarcasm. But you know how that doesn't always translate well through text.
  • 16 Hide
    Anonymous , August 27, 2009 12:35 AM
    I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.
  • 5 Hide
    zerapio , August 27, 2009 12:48 AM
    That's a 2.88x increase every year for 6 years. I don't buy it.
  • 3 Hide
    NocturnalOne , August 27, 2009 12:52 AM
    schuffer: I suspect they meant 570%, not 570x. 570% is only 5.7x, which is much more realistic.


    While I doubt the 570x prediction myself, it is actually what he said. Or at least what his slide said. Check the pictures.

    http://blogs.nvidia.com/nTersect/
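
On the die-shrink arithmetic debated in the thread above: below is a minimal sketch of idealized node scaling, assuming transistor density grows with the square of the feature-size ratio. Real processes fall short of this, and power, yield, and design constraints are ignored, so the numbers are illustrative only, not claims from Nvidia or the commenters.

```python
# Idealized node-scaling arithmetic for the 55nm vs. 22nm/16nm exchange above.
# Assumption: transistor density scales with the square of the feature-size
# ratio; real processes deviate from this, and power/yield limits are ignored.
def ideal_density_gain(old_nm: float, new_nm: float) -> float:
    """Idealized transistor-density multiplier when moving from old_nm to new_nm."""
    return (old_nm / new_nm) ** 2

for new in (22, 16):
    print(f"55nm -> {new}nm: ~{ideal_density_gain(55, new):.1f}x the transistors per die area")
# Prints roughly 6.3x for 22nm and 11.8x for 16nm.
```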