During ECGC 2011, Nvidia senior vice president of content and technology Tony Tamasi made a startling prediction during his keynote presentation, "The Future of Graphics Processing." He claimed that GPU performance will increase 1,000 percent by 2015, allowing graphics cards to render real-time ray tracing and procedurally generated smoke at 30 to 60 frames per second.
To put this into perspective, Nvidia's latest GPU can churn out the same photo-realistic graphics at 2 frames per second. Obviously that's not practical for gamers at this point, but for digital artists and for product and automobile designers, this is a virtual holy grail. Gone are the days of making simple changes and then having to wait an hour or two for the image to be redrawn. Instead, it could take mere seconds depending on the artwork's complexity. But in an FPS environment, one or two frames per second isn't even worth a glance.
To back up his claim, Tamasi presented a timeline of how the GPU has progressed since the days of GLQuake, using screenshots of several games (Quake 2, Call of Duty, Battlefield 3) to represent stages in the evolution. At the same time, he detailed hardware features that have been added along the way, including transform and lighting, programmable shading and so on.
But he also threw up a chart on the big screen listing GPU specs for 2007, 2011 and 2015. In 2007, GPUs featured a texture performance of 12.3 gigatexels per second (GT/s), an antialiasing performance of 10.3 giga-samples per second (GS/s), a memory bandwidth of 63.4 gigabytes per second (GB/s), geometry running at 0.3 giga-triangles per second (Gtri/s) and floating-point performance of 228 gigaflops (Gflop/s). In 2011, Nvidia's latest GPU features a texture performance of 84.5 GT/s, an antialiasing performance of 37.0 GS/s, a memory bandwidth of 192.4 GB/s, geometry running at 3.1 Gtri/s and floating-point performance of 2703 Gflop/s.
Now here's the kicker. Based on the compound annual growth rates between 2007 and 2011 (1.94, 1.56, 1.47, 2.34 and 2.35, respectively), Nvidia predicts that a 2015 GPU will feature a texture performance of 579.7 GT/s, an antialiasing performance of 133.8 GS/s and a memory bandwidth of 584.1 GB/s. Geometry will run at a staggering 37.2 Gtri/s, and floating-point performance will reach 32,039.8 Gflop/s.
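For the curious, the extrapolation behind that slide is simple to reproduce: take the growth between the 2007 and 2011 numbers and compound it forward another four years. The sketch below, using the figures reported above, lands close to most of the slide's 2015 projections; the small differences (and the slightly different growth factors printed here versus those quoted from the slide) presumably come down to rounding on Nvidia's end.

```python
# Reproducing the CAGR-style extrapolation behind the 2015 projections.
# Input figures are from the keynote slide; everything else is arithmetic.
specs_2007 = {"texture (GT/s)": 12.3, "antialiasing (GS/s)": 10.3,
              "bandwidth (GB/s)": 63.4, "geometry (Gtri/s)": 0.3,
              "floating point (Gflop/s)": 228.0}
specs_2011 = {"texture (GT/s)": 84.5, "antialiasing (GS/s)": 37.0,
              "bandwidth (GB/s)": 192.4, "geometry (Gtri/s)": 3.1,
              "floating point (Gflop/s)": 2703.0}

def project(v_old, v_new, years_elapsed=4, years_ahead=4):
    """Annual growth rate over the elapsed window, compounded forward."""
    rate = (v_new / v_old) ** (1.0 / years_elapsed)
    return rate, v_new * rate ** years_ahead

for key in specs_2007:
    rate, v2015 = project(specs_2007[key], specs_2011[key])
    print(f"{key}: {rate:.2f}x per year -> 2015 estimate {v2015:,.1f}")
```

Run it and the texture estimate comes out around 580 GT/s and the floating-point estimate around 32,000 Gflop/s, in line with the slide.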
Given that a large portion of the audience probably owned Microsoft's Xbox 360 gaming console (guilty as charged), he didn't leave them out of the picture. The console, which launched in 2005, features a GPU with a texture performance of 8 GT/s, an antialiasing performance of 16 GS/s, a memory bandwidth of 22.4 GB/s, geometry running at 0.3 Gtri/s and floating-point performance of 240 Gflop/s. Compared with a 2007 PC GPU, the console comes out ahead in antialiasing and floating point. But as Tamasi noted, the geometry numbers remained flat on both the PC and the Xbox 360 for years.
Looking over the charts and hearing Tamasi's prediction of real-time ray tracing at acceptable, nearly fluid levels in 2015, you have to wonder: is this the end of the road? Is it even possible to push graphics beyond photo-realism? When will the GPU run out of gas? When will performance taper off? Tamasi says he's asked that quite a lot.
"I don't know when it's going to be done," he admitted to the audience. "Which from my perspective, that's a good thing. And probably for all of us too because as soon as people see it as 'done,' then innovation starts to change. And it starts to go from being an innovation-driven industry to basically a lowest-common-denominator kind of cost-driven economy. There's innovation there but a different kind of innovation."
Later on, after the keynote, I wanted to take this topic a little further. Seeing the visual difference between Quake 2 and Battlefield 3 made me think of Jack Thompson and the term he seemingly likes to throw around: killer simulator. Quake 2 and Call of Duty look like games-- they attempt to imitate reality (well, mostly CoD), but there's a clear difference. Battlefield 3 images border on realistic. With developers pushing for realism and Nvidia pushing technology to provide realism, when does it go too far? When do games cross the line from being a simple game for entertainment to a real-life simulator?
He agreed that you can definitely overdo it. A movie that's all special effects and no story is a crappy movie, and a game that's all graphics and no gameplay is no better. You can definitely have a great game that doesn't sport stunning eye candy. But certain genres-- FPS primarily-- have come to require top-notch, bleeding-edge visuals; fans simply expect them at this point.
In the following keynote, Mark Cerny spoke of a vicious cycle: consumers demand more, developers and hardware manufacturers produce more (requiring larger budgets), and then consumers demand even more on top of that. On the FPS front at least, the genre has mostly matured from a gaming aspect to a simulator aspect, and doesn't appear to have any kind of "end" in sight.
And just as we entered a golden era by moving from pixels to polygons (and thus adding native OpenGL support), the mobile front is now entering a similar, exciting era. "[Mobile] gameplay innovation has been re-invigorated," he told me after the keynote, saying we'll essentially ride that new wave out until there's an overall standard, and then the graphics front should escalate dramatically. We're already experiencing a steep escalation now as it is thanks to a dramatic increase in mobile hardware performance.
Given the rapid advancement in mobile (smartphone, tablet) technology, will these devices actually replace netbooks in the near future? Netbooks will be wedged out, he said, but not notebooks because it's a form factor most consumers are familiar with. It has a larger display and an integrated keyboard. "There's a place in the universe for that form factor," he said.
While that may indeed be true, I saw a large number of tablets throughout the convention, both in the sessions and at the keynotes. Although there were notebook users present at the show (this one included), tablets far outnumbered the older form factor. But as he said, there's a place in the universe for notebooks just as there's a place in the universe for a 1000-watt PC playing host to three Nvidia GeForce GTX 580 cards in SLI.
Getting back to the topic at hand, a large chunk of Tamasi's presentation focused on the road Nvidia has taken since the days of GLQuake, to where the GPUs stand now in terms of what they can crank out on a visual level. As mentioned in another article, Nvidia and Epic presented the DirectX 11-drenched "Samaritan" video in real-time behind closed doors, showcasing the current state of GPU technology in a 3-way SLI configuration. A video version was also shown during Tamasi's keynote even though the monster rig used in the private demo sat at his feet on stage like a dark, ferocious beast poised and ready for attack. I didn't see any Scooby snacks, either.
On a side note, he openly admitted that he was thrilled many people in the industry believed the demo to be pre-rendered like most cinematics. But it's not. It runs in real-time, and he believes Nvidia has reached a milestone where "many people's perception of what's possible in real time has been completely changed"-- what can be accomplished in real time today could only be achieved as an offline, pre-rendered video five to six years ago.
The second half of his keynote focused on mobile, claiming that the latest generation resides in the DirectX 9 class. But he then offered an interesting view of mobile's future: take all the "amazing" technological advances primarily manifested on the PC (as it tends to advance a level every year), the content developed for the consoles (because, let's face it, we're in the Era of Consoles whether you like it or not), and cram it all into a mobile form factor you can take with you wherever you go (as in stick it in your pocket). That is apparently Nvidia's vision of Tegra.
The next segment regarding Tegra's roadmap was more of a rehash of what we already know, and he whipped out the familiar Tegra slide listing upcoming SoCs named after DC and Marvel comic heroes: Kal-El in 2011 (5x faster than the current Tegra 2), Wayne in 2012 (10x), Logan in 2013 (50x) and Stark (75x). As seen on the slide, the CPU side of Kal-El outperforms the Intel Core 2 Duo T7200 processor and is within 4x of the current generation of gaming consoles... in a mobile form factor. That said, new mobile devices should pass current-gen consoles in computing performance within the next few years.
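A quick back-of-the-envelope check shows why "within the next few years" is plausible. The multipliers are relative to Tegra 2 as reported from the slide; treating the "4x away" console gap as a fixed multiplier (consoles at roughly 20x Tegra 2) is my own simplifying assumption.

```python
# Rough comparison of the Tegra roadmap against current-gen consoles.
# Performance factors are relative to Tegra 2, per the keynote slide;
# pegging consoles at 4x Kal-El is an assumption for illustration.
tegra_roadmap = {
    "Kal-El (2011)": 5,
    "Wayne (2012)": 10,
    "Logan (2013)": 50,
    "Stark": 75,
}
console_level = tegra_roadmap["Kal-El (2011)"] * 4  # consoles ~4x Kal-El

for chip, perf in tegra_roadmap.items():
    verdict = "above" if perf > console_level else "below"
    print(f"{chip}: {perf}x Tegra 2 -> {verdict} current-gen consoles")
```

Under that assumption, Wayne in 2012 still falls short, but Logan in 2013 clears the console bar with room to spare-- which is exactly the "next few years" window Tamasi described.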
Tamasi said that things didn't really get interesting on the mobile front until programmable pixel shading was introduced. He also said that the first iPhone was "truly revolutionary" and completely turned the smartphone industry around-- absolutely a fantastic product that brought real computing to a mobile platform in a truly useful way. It also caused everyone outside Apple to completely re-think their mobile strategy. "No doubt about it, they'll all tell you the same thing," he said.
At the end of his keynote, Tamasi played Blizzard's awesome cinematic for World of Warcraft: Cataclysm, saying that some aspects will be possible to render in real-time within the next four to five years: the fire, level of geometric complexity, a lot of the smoke simulations, and more. He told the audience to look back at the original Call of Duty and then play the "Samaritan" video-- you'll then see it's not all that impossible to imagine real-time ray tracing and whatnot within the next five years.
After the show, Tamasi said something interesting that made me realize there will probably never be an "end" as far as pushing the graphics boundary or pumping out the next level of hardware: you can't develop the next generation of gaming content on a 1-watt phone. As long as gamers demand more, software and hardware will supply the goods.