Ray Tracing And AI: Betting It All on Black
Nvidia could have poured all of its effort into features that make Turing faster than the Pascal architecture in today’s games, rather than splitting attention between immediate speed-ups and investments in the future. But the company saw an opportunity to drive innovation and took a calculated risk. And why not? Its performance and efficiency lead over AMD’s Radeon RX Vega series gives it a comfortable margin to push next-gen technology harder than we’ve seen in years.
Dedicating gobs of transistors to Tensor cores and RT cores doesn’t pay off today, and it won’t pay off tomorrow. Rather, it pays off months or even years down the road, as developers become comfortable blending real-time ray tracing with rasterization. Nvidia needs them spending time experimenting with neural graphics and AI. Technologies like mesh shading and variable rate shading aren’t used anywhere yet; they require adoption and support before they matter. With all of that said, developer enthusiasm for real-time ray tracing appears unanimous. This truly seems like a vision everyone wants to see realized.
In the meantime, gamers aren’t going to spend their hard-earned dollars on a promise of better-looking visuals somewhere down the line. They want instant gratification in the form of higher frame rates. We can talk architecture ad nauseam, but the features covered in today’s discussion amount to only half of the story (the academic half, no less). In a few days, we’ll have a battery of performance, power, and acoustic results to present. Only then can we tell you whether GeForce RTX 2080 and 2080 Ti warrant the high prices that have enthusiasts gnashing their teeth.