There is usually some truth behind every rumor. For years, rumors circulated about Apple’s Marklar division, a team dedicated to porting OS X from PowerPC to x86. Though this seemed crazy during the Intel Pentium 4 era, the rumor never died. It turned out not to be just a rumor.
Along those same lines, I don’t believe it is simply an unsubstantiated rumor that Nvidia is making an x86 CPU. This has come up too often, and the company's recruiting of x86 validation engineers and licensing of "other Transmeta technologies" besides LongRun hint at a bigger picture.
Without any inside information, I see two areas where the x86 investment could prove worthwhile. First, I’ll tell you what it’s not. It’s not a high-end CPU. Although AMD and Intel are both working on integrating CPUs and GPUs on the same die, neither company will be able to integrate flagship-performance graphics there. The larger die that results from combining a high-end GPU with a CPU drives manufacturing costs up exponentially, and thermal management for such a large chip poses yet another engineering challenge (Ed.: just look at the trouble Nvidia is already having with GF100, and that's a GPU-only).
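To see why cost grows much faster than die area, consider a back-of-the-envelope sketch using a simple Poisson yield model. The defect density and die sizes below are illustrative assumptions, not actual Nvidia or foundry figures:

```python
import math

def yield_rate(die_area_mm2, defects_per_mm2=0.005):
    """Poisson yield model: fraction of dies with zero defects."""
    return math.exp(-defects_per_mm2 * die_area_mm2)

def relative_cost_per_good_die(die_area_mm2, defects_per_mm2=0.005):
    """Cost tracks wafer area consumed, divided by the fraction that works."""
    return die_area_mm2 / yield_rate(die_area_mm2, defects_per_mm2)

# Illustrative sizes: a ~250 mm^2 CPU, a ~500 mm^2 high-end GPU,
# and a hypothetical ~750 mm^2 fused CPU+GPU die.
for area in (250, 500, 750):
    print(area, "mm^2 -> yield", round(yield_rate(area), 3),
          "relative cost", round(relative_cost_per_good_die(area)))
```

Under these assumed numbers, one fused die costs several times more per good part than a separate CPU and GPU combined, because yield falls exponentially as area grows while cost per good die divides by that shrinking yield.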
The first option is that Nvidia is continuing development of the Transmeta Crusoe CPU. Though the Crusoe was not a commercial success, in performance per watt it was highly competitive, even against today’s Intel Atom. A newer version of the Crusoe’s VLIW architecture, augmented by improvements in manufacturing technology and in the code-morphing algorithms, could be a competitive low-power device. Combined with an embedded GPU, Nvidia would have a product that competes against AMD Fusion and Intel’s embedded parts. This could be a desktop version of Tegra.
The second option, which I consider more likely, is that Nvidia will incorporate a simple CPU into future versions of Tesla or Quadro. Currently, one of the most computationally inefficient parts of GPGPU is transferring data back and forth between the graphics card and the rest of the system. By placing a true general-purpose CPU on the graphics card itself, "housekeeping tasks" could be performed locally in graphics memory, improving performance. This mini CPU could also act as an intermediary, better managing asynchronous data transfers to and from the GPU. The device would not need to run x86; it could apply code morphing to work with Nvidia PTX instructions, or use some efficient combination that makes it worthwhile.
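The scale of that transfer penalty is easy to sketch with rough arithmetic. The bandwidth figures below are ballpark assumptions (PCIe 2.0 x16 versus contemporary GDDR), chosen only to show the order of magnitude involved:

```python
# Back-of-the-envelope: why host<->device transfers dominate light GPGPU work.
PCIE_GBPS = 8.0    # ~8 GB/s effective over PCIe 2.0 x16 (assumed)
GDDR_GBPS = 140.0  # ~140 GB/s local graphics memory (assumed)

def transfer_time_s(bytes_moved, gbps):
    return bytes_moved / (gbps * 1e9)

data = 256 * 1024**2  # a 256 MB working set

over_pcie = 2 * transfer_time_s(data, PCIE_GBPS)  # copy in + copy results out
on_card   = 2 * transfer_time_s(data, GDDR_GBPS)  # same traffic kept local

print(f"PCIe round trip: {over_pcie * 1e3:.1f} ms, on-card: {on_card * 1e3:.1f} ms")
print(f"keeping the data local is ~{over_pcie / on_card:.0f}x faster to move")
```

With these assumed numbers, local graphics memory moves the same data more than an order of magnitude faster than the PCIe bus, which is exactly the gap an on-card housekeeping CPU would sidestep.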
Hardware REYES Acceleration?
Remember all that talk about Pixar-class graphics? Pixar’s films are rendered with RenderMan, a software implementation of the REYES architecture. In traditional 3D graphics, large triangles are sorted, drawn, shaded, lit, and then textured. REYES instead dices curved surfaces into micropolygons smaller than a pixel and uses stochastic sampling to prevent aliasing. It’s a fundamentally different way of rendering. At SIGGRAPH 2009, a GPU implementation of a REYES renderer was demonstrated on a GeForce GTX 280. Though more work will need to be done, Nvidia appears to be headed in this direction with Bill Dally as VP of research. I’d be surprised if we didn’t see an Nvidia implementation of REYES in the future.
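The two core REYES ideas, dicing into sub-pixel micropolygons and jittered stochastic sampling, can be illustrated in a few lines. This is a toy sketch, not RenderMan: the flat patch, screen size, and dicing rate are all illustrative assumptions:

```python
import random

def patch(u, v):
    # Toy flat parametric patch covering a 16x16-pixel screen area (assumed).
    return (16.0 * u, 16.0 * v)

def dice(patch_fn, pixels=16, rate=2):
    """Dice the patch into a vertex grid whose cells (micropolygons)
    each span less than one pixel ('rate' micropolygons per pixel edge)."""
    n = pixels * rate
    return [[patch_fn(i / n, j / n) for j in range(n + 1)] for i in range(n + 1)]

def jittered_samples(px, py, grid=2, rng=random.Random(0)):
    """Stochastic sampling for pixel (px, py): one jittered sample
    per sub-cell, which trades regular aliasing for noise."""
    return [(px + (i + rng.random()) / grid, py + (j + rng.random()) / grid)
            for i in range(grid) for j in range(grid)]

verts = dice(patch)
micropolys = (len(verts) - 1) ** 2  # 32 x 32 grid of sub-pixel quads
print(micropolys, "micropolygons for a 16x16-pixel patch")
print(jittered_samples(0, 0))
```

Even this tiny patch produces over a thousand micropolygons, which hints at why REYES was long considered an offline technique and why a massively parallel GPU is a natural fit for it.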
In fact, Nvidia already has an investment in Hollywood. Late last year, it announced iRay, hardware-accelerated ray tracing for use with the mental ray suite. Mental ray is a global illumination/ray tracing engine that competes against RenderMan/REYES and has been used in feature films such as Spider-Man 3, Speed Racer, and The Day After Tomorrow. Oh, and its developer, Mental Images, is a wholly owned subsidiary of Nvidia.
Nvidia’s corporate philosophy and track record are consistent with the goal of providing hardware-accelerated graphics to consumers, hardware-accelerated rendering to Hollywood, and throughput computing to the scientific community. The hardware and software expertise required to produce this is available within Nvidia’s walls. Whereas AMD has the track record with CPU and GPU hardware, and Intel has the deepest pockets, Nvidia has built the strongest portfolio of software technology. Software is what made the iPod. Software is what made the iPhone. Nvidia’s vision is coherent, but the company’s success requires timely execution of both its hardware and software milestones (Ed.: notable, then, that this is currently an issue for the company).
The next few years will be an exciting time for computing. We have a bona fide three-horse race with AMD, Intel, and Nvidia. Perhaps more important, each company has non-overlapping talents and a unique approach toward success. The next generation of products will not simply be "me too" launches, but instead reflect a world of new ideas and paradigms. These technologies will enable new areas of entertainment, science, and creativity. And games will look pretty sweet, too. At least, that’s the way I see it.