Nvidia As x86 Manufacturer

Opinion: AMD, Intel, And Nvidia In The Next Ten Years
By Alan Dang

There is usually some truth behind every rumor. For years, there were rumors of Apple’s Marklar division, a team dedicated to porting OS X from PowerPC to x86. Though this seemed crazy during the Intel Pentium 4 era, the rumor never died. It turns out it wasn’t just a rumor.

Along those same lines, I don’t believe it is simply an unsubstantiated rumor that Nvidia is making an x86 CPU. This has come up too often, and the company's recruiting of x86 validation engineers and licensing of "other Transmeta technologies" besides LongRun hint at a bigger picture.

Without any inside information, I see two areas where the x86 investment could prove worthwhile. First, I’ll tell you what it’s not: a high-end CPU. Although AMD and Intel are both working on integrating CPUs and GPUs on the same die, neither company will be able to integrate flagship-performance graphics that way. The increase in die size from incorporating a high-end GPU and a CPU results in disproportionately greater manufacturing costs, and thermal management for such a large chip poses a further engineering challenge (Ed.: just look at the trouble Nvidia is already having with GF100, and that's a GPU-only chip).
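To see why cost grows faster than die area, consider the classic Poisson yield model: bigger dies mean fewer candidates per wafer and a lower fraction of defect-free chips. The sketch below uses purely illustrative numbers (wafer cost, defect density, and die sizes are all assumptions, not figures from any foundry):

```python
# Illustrative sketch: why doubling die area more than doubles the cost
# per *good* die, using a simple Poisson yield model. All numbers are
# hypothetical, for intuition only.
import math

def cost_per_good_die(die_area_mm2, wafer_cost=5000.0,
                      wafer_area_mm2=70686.0,   # 300 mm wafer, no edge loss
                      defect_density_per_mm2=0.002):
    dies_per_wafer = wafer_area_mm2 / die_area_mm2
    yield_fraction = math.exp(-defect_density_per_mm2 * die_area_mm2)
    return wafer_cost / (dies_per_wafer * yield_fraction)

small = cost_per_good_die(300)   # CPU-sized die
large = cost_per_good_die(600)   # CPU plus high-end GPU on one die
print(large / small)             # well over 2x: cost outpaces area
```

With these assumed parameters, the 600 mm² die costs more than three times as much per good chip as the 300 mm² die, even though it has only twice the area.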

The first option is that Nvidia is continuing development of the Transmeta Crusoe CPU. Though the Crusoe was not a commercial success, on a performance-per-watt basis it was highly competitive, even against today’s Intel Atom. A newer version of the Crusoe’s VLIW architecture, augmented by improvements in manufacturing technology and the code-morphing algorithms, could make for a competitive low-power device. Combined with an embedded GPU, Nvidia would have a product that competes against AMD's Fusion and Intel's embedded parts. This could be a desktop version of Tegra.

The second option, which is more likely, is that Nvidia will incorporate a simple CPU on future versions of the Tesla or Quadro. Currently, one of the most computationally inefficient parts of GPGPU is transferring data back and forth between the graphics card and the rest of the system. By incorporating a true general-purpose CPU on the graphics card itself, "housekeeping tasks" can be performed right next to local graphics memory, improving performance. The mini CPU could also act as an intermediary, better managing asynchronous data transfers off the GPU. This device would not need to run x86; it could apply code morphing to work with Nvidia PTX instructions, or use some efficient combination that makes it worthwhile.
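A back-of-the-envelope comparison shows why keeping housekeeping work on the card matters. The bandwidth figures below are rough, assumed values for circa-2010 hardware (PCIe 2.0 x16 versus high-end GDDR), not measurements:

```python
# Rough sketch (assumed bandwidth figures): round-tripping a working set
# over the PCIe bus versus touching it in on-card graphics memory.
PCIE_GBPS = 8.0      # PCIe 2.0 x16, approximate peak
GDDR_GBPS = 140.0    # high-end on-card memory bandwidth, approximate

def roundtrip_ms(buffer_gb, bandwidth_gbps):
    # one trip out and one trip back
    return 2 * buffer_gb / bandwidth_gbps * 1000

buf = 0.5                                    # 512 MB working set
over_pcie = roundtrip_ms(buf, PCIE_GBPS)     # 125 ms
on_card   = roundtrip_ms(buf, GDDR_GBPS)     # roughly 7 ms
print(over_pcie / on_card)                   # more than an order of magnitude
```

Under these assumptions, every trip across the bus costs over ten times what the same traffic costs in local graphics memory, which is exactly the overhead an on-card housekeeping CPU would avoid.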

Hardware REYES Acceleration?

Remember all that talk about Pixar-class graphics? Pixar’s films are rendered using RenderMan, a software implementation of the REYES architecture. In traditional 3D graphics, large triangles are sorted, drawn, shaded, lit, and then textured. REYES instead divides curved surfaces into micropolygons smaller than a pixel and uses stochastic sampling to prevent aliasing. It’s a fundamentally different way of rendering. At SIGGRAPH 2009, a GPU implementation of a REYES renderer was demonstrated using a GeForce GTX 280. Though more work will need to be done, Nvidia appears to be headed in this direction with Bill Dally in the position of VP of research. I’d be surprised if we didn’t see an Nvidia implementation of REYES in the future.
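To make the "smaller than a pixel" idea concrete, here is a toy sketch of the REYES dice step: a parametric curve (a hypothetical quadratic Bézier, already in screen space) is split into finer and finer segments until every segment is sub-pixel. This is an illustration of the concept, not RenderMan's actual algorithm:

```python
# Toy sketch of REYES-style dicing: subdivide a screen-space quadratic
# Bezier until every segment is shorter than one pixel. Hypothetical
# curve; real REYES dices surface patches into micropolygon grids.
import math

def bezier(p0, p1, p2, t):
    x = (1 - t)**2 * p0[0] + 2 * (1 - t) * t * p1[0] + t**2 * p2[0]
    y = (1 - t)**2 * p0[1] + 2 * (1 - t) * t * p1[1] + t**2 * p2[1]
    return (x, y)

def dice(p0, p1, p2, max_px=1.0):
    # Double the segment count until every chord is sub-pixel.
    n = 1
    while True:
        pts = [bezier(p0, p1, p2, i / n) for i in range(n + 1)]
        if all(math.dist(a, b) <= max_px for a, b in zip(pts, pts[1:])):
            return pts
        n *= 2

# A curve spanning ~100 pixels dices into at least 100 sub-pixel segments.
pts = dice((0, 0), (50, 80), (100, 0))
print(len(pts) - 1)
```

A full REYES pipeline would then shade each micropolygon and stochastically sample the results against the pixel grid, which is where the anti-aliasing comes from.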

In fact, Nvidia already has an investment in Hollywood. Late last year, it announced iRay, hardware-accelerated ray tracing for use with the mental ray suite. Mental ray is a global illumination/ray tracing engine that competes against RenderMan/REYES and has been used in feature films such as Spider-Man 3, Speed Racer, and The Day After Tomorrow. Oh, and Mental Images is a wholly owned subsidiary of Nvidia.

Nvidia’s Outlook

Nvidia’s corporate philosophy and track record are consistent with the goal of providing hardware-accelerated graphics to consumers, hardware-accelerated rendering to Hollywood, and throughput computing to the scientific community. The hardware and software expertise required to pull this off exists within Nvidia’s walls. Whereas AMD has the track record in CPU and GPU hardware, and Intel has the deepest pockets, Nvidia has built the strongest portfolio of software technology. Software is what made the iPod. Software is what made the iPhone. Nvidia’s vision is coherent, but the company’s success requires timely execution of both its hardware and software milestones (Ed.: notable, then, that this is currently an issue for the company).

Conclusion

The next few years will be an exciting time for computing. We have a bona fide three-horse race with AMD, Intel, and Nvidia. Perhaps more important, each company has non-overlapping talents and a unique approach toward success. The next generation of products will not simply be "me too" launches, but instead reflect a world of new ideas and paradigms. These technologies will enable new areas of entertainment, science, and creativity. And games will look pretty sweet, too. At least, that’s the way I see it.

Comments
  • 22 Hide
    anamaniac , March 1, 2010 6:21 AM
    Alan Dang: "And games will look pretty sweet, too. At least, that’s the way I see it."

    After several pages of technology mumbo jumbo jargon, that was a perfect closing statement. =)

    Wicked article Alan. Sounds like you've had an interesting last decade indeed.
    I'm hoping we all get to see another decade of constant change and improvement to technology as we know it.

    Also interesting is that while you almost seemed to be attacking every company, you still managed to remain neutral.
    Everyone has benefits and flaws, nice to see you mentioned them both for everybody.

    Here's to another 10 years of success everyone!
  • 31 Hide
    False_Dmitry_II , March 1, 2010 6:30 AM
    I want to read this again in 10 years just to see the results...
  • 1 Hide
    Anonymous , March 1, 2010 6:37 AM
    "Simply put, software development has not been moving as fast as hardware growth. While hardware manufacturers have to make faster and faster products to stay in business, software developers have to sell more and more games"

    Hardware is moving so fast that game developers just can't keep pace with it.
  • -2 Hide
    Ikke_Niels , March 1, 2010 7:55 AM
    What I miss in the article is the following (well, it's partly told):

    I have already suspected for a long time that video cards are going to surpass CPUs.
    You already see it at the moment: video cards get cheaper, while CPUs keep getting pricier for the relative performance.

    In the past I had the problem that upgrading my video card pushed my CPU to the limit, so I wasn't using the full potential of the video card.

    In my view we're at that point again: you buy a system, and if you upgrade your video card after a year or a year and a half, you're most likely pushing your CPU to its limits, at least in the high-end part of the market.

    Of course in the lower regions these problems are smaller, but still, it "might" happen sooner than we think, especially if the Nvidia design is as astonishing as they say while at the same time the major development of CPUs slows down.


  • 17 Hide
    sarsoft , March 1, 2010 8:01 AM
    Nice article. Good read....
  • 7 Hide
    lashton , March 1, 2010 8:25 AM
    One of the most interesting and informative articles from Tom's Hardware. What about another story about the smaller players, like the Intel Atom and VLIW chips, and so on?
  • 6 Hide
    JeanLuc , March 1, 2010 8:32 AM
    Out of all three companies, Nvidia is the one facing the most threats. It may have a lead in the GPGPU arena, but that's rather a niche market compared to consumer entertainment, wouldn't you say? Nvidia is also facing problems at the low end of the market, with Intel now supplying integrated video on its CPUs, which makes low-end video cards practically redundant; no doubt AMD will be supplying a similar product with Fusion at some point in the near future.
  • 33 Hide
    jontseng , March 1, 2010 8:37 AM
    Quoting the article: "This means that we haven’t reached the plateau in 'subjective experience' either. Newer and more powerful GPUs will continue to be produced as software titles with more complex graphics are created. Only when this plateau is reached will sales of dedicated graphics chips begin to decline."

    I'm surprised that you've completely missed the console factor.

    The reason why devs are not coding newer and more powerful games is nothing to do with budgetary constraints or lack thereof. It is because they are coding for an XBox360 / PS3 baseline hardware spec that is stuck somewhere in the GeForce 7800 era. Remember only 13% of COD:MW2 units were PC (and probably less as a % sales given PC ASPs are lower).

    So your logic is flawed, or rather you have the wrong end of the stick. Because software titles with more complex graphics are not being created (because of the console baseline), newer and more powerful GPUs will not continue to be produced.

    Or to put it in more practical terms: because the most graphically demanding title you can possibly get (Crysis) is now three years old, Nvidia has been happy to churn out G92 respins based on a 2006 spec.

    Until the next generation of consoles comes through, there is zero commercial incentive for a developer to build an AAA title that exploits the 13% of the market that has PCs (or the even smaller portion of that which has a modern graphics card). Which means you don't get phat new GPUs, QED.

    And the problem is the console cycle seems to be elongating...

    J
  • 11 Hide
    Swindez95 , March 1, 2010 8:59 AM
    I agree with jontseng above ^. I've already made a point of this a couple of times. We will not see an increase in graphics intensity until the next generation of consoles comes out, simply because consoles are where the majority of game sales are. And as stated above, developers are simply coding games and graphics for much older and less powerful hardware than the PC currently offers, due to these last-generation consoles still being the most popular venue for consumers.
  • 5 Hide
    Swindez95 , March 1, 2010 9:01 AM
    Oh, and very good article btw, definitely enjoyed reading it!
  • 2 Hide
    1898 , March 1, 2010 9:14 AM
    Without much doubt, Nvidia is working on an x86 CPU simply because its life depends on it.
    and +1 jontseng
  • -5 Hide
    mfarrukh , March 1, 2010 9:15 AM
    I hope all of this is for the betterment of mankind
  • 0 Hide
    neiroatopelcc , March 1, 2010 9:46 AM
    Page 4 says 5-6 hours to render a frame? Can't be true, really...
    A typical animated feature: 90 minutes.
    The math: 90 minutes = 5,400 seconds; at 25 fps that is a total of 135,000 frames to render; at 5 hours per frame, render time totals 675,000 hours = 28,125 days, which is 77 years. Even in parallel, that means it would take a year with 77 supercomputers to do just one rendering of each frame. And I know a Disney guy (DVD extra content) said it took about 8 months to make an animated feature after the story was done (i.e., the animation); I doubt Pixar is so much slower.
    P.S.: assuming 29.97 fps, it's over 90 years.
  • 12 Hide
    Tohos , March 1, 2010 10:10 AM
    @ neiroatopelcc
    That is where render farms come in. Hundreds of computer clusters churning out frames day and night.
  • 3 Hide
    mfarrukh , March 1, 2010 10:25 AM
    Excellent read. Thorough and great Experience speaking
  • 4 Hide
    climber , March 1, 2010 10:29 AM
    If you want to get a feel for how long it takes to render a frame in a modern movie, check out the extended features content on the first Transformers movie. It's mentioned that it takes ~24 hours to render a frame of movie footage with five Transformers in it at full screen size. I might have some of the numbers slightly off, but it's serious computational time, folks.
  • -5 Hide
    neiroatopelcc , March 1, 2010 10:36 AM
    Tohos: "@neiroatopelcc That is where render farms come in. Hundreds of computer clusters churning out frames day and night."

    What follows is a citation from the actual article page four.
    "With Pixar-level budgets come the potential for Pixar-level graphics (and Pixar-level characters and stories). Given that Pixar films still require 5 to 6 hours to render a single frame on large supercomputer clusters, the answer is no, graphics have not reached the point of diminishing returns yet."
    In my book a cluster of supercomputers is the same as a render farm.
    climber: "If you want to get a feel for how long it takes to render a frame in a modern movie. Check out the extended features content on the first Transformers movie. It's mentioned that it takes ~24 hrs to render a frame of movie footage with five transformers in it at full screen size. I might have some of the numbers slightly off, but it's serious computational time folks."

    I don't own that movie, and I'm not even sure I've seen it. Is it the one with a modern yellow muscle car and a wimpy teen? If so, I may have.
    Anyway, a day per frame simply can't be an average; not even 5 hours can! It would simply take too many years to make a movie, and I'm sure Pixar doesn't rent all the Blue Gene servers in the world, or half of Cray's hardware, just to make one movie.
  • 7 Hide
    Anonymous , March 1, 2010 10:47 AM
    That's what the writer was getting at. These 'supercomputers' are nothing but render farms. And while they don't take hours to render a frame, they do take a significant amount of time. Then again, one must consider that these frames are 4096×2160 each, and you are not only doing raster operations, including high levels of AA, but also carrying out physics calculations on almost all of the contents of the scene. This is waaaay more than any GPU can hope to do right now. That is why you are seeing these scenes rendered on farms consisting of general-purpose CPUs. They can compute anything required for rendering the scene, and they are easy to assign tasks to over the network with existing software. I doubt scheduling rendering chores to CPUs AND GPUs on each of the farm's nodes would work as well with current hardware and software.
  • 0 Hide
    killerclick , March 1, 2010 10:50 AM
    I'm happy I get to keep my 8800GT for another year. :) 
  • 4 Hide
    memeroot , March 1, 2010 10:57 AM
    one of the great things with render farms is that they can render more than one image at a time, one of the sad things about render farms is if you allocated the whole farm over to 1 image it wouldn't be any faster.