Display Outputs and the Video Controller
Display Outputs: A Controller For The Future
The GeForce RTX Founders Edition cards sport three DisplayPort 1.4a connectors, one HDMI 2.0b output, and one VirtualLink interface. They support up to four monitors simultaneously, and naturally are HDCP 2.2-compatible. Why no HDMI 2.1? That standard was released in November of 2017, long after Turing was finalized.
Turing also enables Display Stream Compression (DSC) over DisplayPort, making it possible to drive an 8K (7680x4320) display at 60 Hz over a single stream. GP102 lacked this functionality. DSC is also the key to driving a 4K (3840x2160) display at a 120 Hz refresh rate with HDR.
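A quick back-of-the-envelope calculation shows why DSC is necessary here. The sketch below uses standard DisplayPort 1.4a figures and ignores blanking overhead, so real-world requirements are slightly higher than shown:

```python
# Why DSC is needed: raw pixel bandwidth vs. what DP 1.4a can actually carry.
# Blanking overhead is ignored, so true requirements are somewhat higher.

DP14_PAYLOAD_GBPS = 25.92  # 4 lanes x 8.1 Gbps (HBR3), minus 8b/10b overhead

modes = {
    "8K60 (8-bit)":       (7680, 4320, 60, 24),   # 24 bits per pixel
    "4K120 HDR (10-bit)": (3840, 2160, 120, 30),  # HDR bumps to 30 bpp
}

for name, (w, h, hz, bpp) in modes.items():
    need_gbps = w * h * hz * bpp / 1e9
    print(f"{name}: ~{need_gbps:.1f} Gbps raw vs {DP14_PAYLOAD_GBPS} Gbps available")

# 8K60 needs ~47.8 Gbps uncompressed, roughly twice DP 1.4a's payload capacity.
# DSC's visually lossless compression (up to ~3:1) brings both modes under the cap.
```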
Speaking of HDR, Turing natively processes HDR content through dedicated tone-mapping hardware. Pascal, on the other hand, had to apply extra processing that added latency.
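Nvidia doesn't document the algorithm its hardware block implements, but a minimal sketch of the classic Reinhard global operator illustrates the kind of per-pixel math tone mapping involves:

```python
# "Tone mapping" compresses a wide luminance range into what a display can show.
# This is NOT Nvidia's hardware algorithm, just the classic Reinhard global
# operator, shown to illustrate the per-pixel math such a block performs.
import numpy as np

def reinhard_tonemap(hdr_luma: np.ndarray) -> np.ndarray:
    """Map scene-referred luminance [0, inf) into display range [0, 1)."""
    return hdr_luma / (1.0 + hdr_luma)

hdr = np.array([0.05, 0.5, 1.0, 4.0, 16.0])  # linear HDR luminance samples
print(reinhard_tonemap(hdr))  # bright values roll off smoothly toward 1.0
```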
Finally, all three Founders Edition cards include VirtualLink connectors for next-generation VR HMDs. The VirtualLink interface utilizes a USB Type-C connector but is based on an Alternate Mode with reconfigured pins to deliver four DisplayPort lanes, a bi-directional USB 3.1 Gen2 data channel for high-res sensors, and up to 27W of power. According to the VirtualLink Consortium, existing headsets typically operate within a 15W envelope, including displays, controllers, audio, power loss over a 5m cable, cable electronics, and connector losses. But this new interface is designed to support higher-power devices with improved display capabilities, better audio, higher-end cameras, and accessory ports as well. Just be aware that VirtualLink’s power delivery is not reflected in the TDP specification of GeForce RTX cards; Nvidia says using the interface requires up to an additional 35W.
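To put those numbers in perspective, here is a small sketch of the power-budget arithmetic. The 15W, 27W, and 35W figures come from the paragraph above; the 225W board rating of the GeForce RTX 2080 Founders Edition is pulled in as an example and is not part of the VirtualLink spec:

```python
# Illustrative VirtualLink power-budget arithmetic using the figures above.

VIRTUALLINK_DELIVERY_W = 27   # max power the connector can supply to an HMD
TYPICAL_HEADSET_W      = 15   # today's headsets, including cable/connector losses
EXTRA_BOARD_DRAW_W     = 35   # additional card draw Nvidia cites, beyond rated TDP

headroom = VIRTUALLINK_DELIVERY_W - TYPICAL_HEADSET_W
print(f"Headroom for next-gen HMD features: {headroom} W")  # -> 12 W

# Note the asymmetry: the card may pull up to 35 W more from the PSU in order
# to deliver 27 W at the connector, covering conversion losses and USB data.
rtx_2080_fe_tdp = 225  # example board rating, W
print(f"Worst-case board draw with VirtualLink: {rtx_2080_fe_tdp + EXTRA_BOARD_DRAW_W} W")
```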
Video Acceleration: Encode And Decode Improvements
Hardware-accelerated video features don’t get as much attention as gaming, but new graphics architectures typically add support for the latest compression standards, incorporate more advanced coding tools and profiles, and offload work from the CPU.
Encode performance is more important than ever for gamers streaming content to social platforms. A GPU that can handle the workload in hardware frees up other platform resources, so encoding has less impact on game frame rates. Historically, though, GPU-accelerated encoding didn’t quite match the quality of a software encode at a given bit rate. Nvidia claims Turing changes this, absorbing the workload while exceeding the quality of a software-based x264 Fast encode (according to Nvidia’s own peak signal-to-noise ratio benchmark). Beyond a certain point, quality improvements offer diminishing returns, so it’s notable that Nvidia claims to at least match the Fast preset. Still, streamers are most interested in the GPU’s ability to minimize CPU utilization and bit rate.
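For reference, PSNR is a straightforward full-reference quality metric. The sketch below implements the standard formula against placeholder frames; it is not Nvidia's benchmark harness, just an illustration of what those dB comparisons measure:

```python
# Standard PSNR (peak signal-to-noise ratio) between a reference frame and
# its encoded/decoded copy. The frames here are synthetic placeholders.
import numpy as np

def psnr(reference: np.ndarray, encoded: np.ndarray, peak: float = 255.0) -> float:
    """PSNR in dB; higher means the encode is closer to the source."""
    mse = np.mean((reference.astype(np.float64) - encoded.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # identical frames
    return 10.0 * np.log10(peak ** 2 / mse)

# Toy example: a random 1080p luma plane with mild "compression noise" added.
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(1080, 1920), dtype=np.uint8)
noise = rng.integers(-3, 4, ref.shape)
enc = np.clip(ref.astype(np.int16) + noise, 0, 255).astype(np.uint8)
print(f"PSNR: {psnr(ref, enc):.1f} dB")  # roughly 40+ dB reads as visually good
```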
This generation, the NVENC encoder is fast enough for real-time 8K HDR encoding at 30 FPS. Optimizations to the encoder facilitate bit rate savings of up to 25% in HEVC (or a corresponding quality increase at the same bit rate) and up to 15% in H.264. Hardware acceleration makes real-time 4K encoding viable as well, although Nvidia doesn’t specify which CPU it tested against to generate the 73% utilization figure in its software-only comparison.
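As a rough illustration of what those savings mean for a streamer, here is the arithmetic against hypothetical baseline bit rates (the percentages are Nvidia's up-to claims; the baselines are illustrative, not Nvidia's test settings):

```python
# Converting Nvidia's "up to" savings claims into concrete bit rate targets.
# Baseline bit rates are hypothetical streaming targets, chosen for illustration.

SAVINGS = {"HEVC": 0.25, "H.264": 0.15}          # up-to figures, per Nvidia
BASELINES_MBPS = {"HEVC": 40.0, "H.264": 8.0}    # e.g., a 4K and a 1080p60 stream

for codec, saving in SAVINGS.items():
    before = BASELINES_MBPS[codec]
    after = before * (1 - saving)
    print(f"{codec}: ~{after:.1f} Mbps on Turing for quality comparable "
          f"to {before:.0f} Mbps previously")
```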
Additionally, the decode block supports VP9 10/12-bit HDR. It’s not clear how that differs from GP102-based cards, though, since GeForce GTX 1080 Ti is clearly listed in Nvidia’s NVDEC support matrix as having VP9 10/12-bit support as well. Similarly, HEVC 4:4:4 10/12-bit HDR is listed as a new feature for Turing.