Microsoft Explains GPU Hardware Scheduling: Aims to Improve Input Lag

If you've been following tech news lately, chances are you've heard that hardware-accelerated GPU scheduling has become a thing. Microsoft baked it into the May 2020 update, Nvidia implemented it, AMD did too, and we've tested it on Nvidia's GPUs. However, so far there has been very little explanation about what it actually does, and why it's relevant. Our own testing on Nvidia graphics cards yielded mixed results, making us wonder whether it's really worth your time.

Today, Microsoft posted a blog to answer these questions, so naturally, we're here to tell you the what and how in (mostly) our own words.

Primary Improvement: Input Latency

Skipping the history lesson of how Microsoft's WDDM GPU scheduler came to be: in today's games, it's the piece of software that assigns work to the GPU. This scheduler runs on the CPU as a high-priority thread that "coordinates, prioritizes, and schedules the work submitted by various applications." Why high priority? Because you want the GPU to receive its jobs the moment you trigger your character to shoot the bad guys.

And that's where the problems come in. The WDDM scheduler adds overhead to the system and introduces latency. Some overhead is unavoidable, but running the scheduler on the CPU adds an extra round trip at a point where every millisecond matters.

In an optimal scenario, while the GPU renders one frame, the CPU is busy preparing the next. That is, in fact, exactly how today's WDDM scheduler works, but juggling so many small tasks on a frame-by-frame basis puts a heavy load on the CPU, which is why games running at low resolutions and high framerates are so CPU-dependent.
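
To make that pipeline concrete, here's a minimal, purely illustrative C++ sketch; none of these names are Windows or WDDM APIs. A "GPU" thread drains a queue of submitted frames while the main "CPU" thread is already preparing the next one, and the queue depth caps how far ahead the CPU may run.

```cpp
// Toy model of the frame pipeline. A queue depth of 1 models the ideal
// one-frame-ahead case described above; raising kMaxQueued models the
// frame buffering discussed next. Illustrative only -- not a Windows API.
#include <chrono>
#include <condition_variable>
#include <cstdio>
#include <mutex>
#include <queue>
#include <thread>

constexpr size_t kMaxQueued = 1;  // frames the CPU may run ahead of the GPU

std::queue<int> frameQueue;
std::mutex m;
std::condition_variable cv;
bool done = false;

void gpuThread() {
    for (;;) {
        std::unique_lock<std::mutex> lock(m);
        cv.wait(lock, [] { return !frameQueue.empty() || done; });
        if (frameQueue.empty()) return;  // done, and nothing left to render
        int frame = frameQueue.front();
        frameQueue.pop();
        lock.unlock();
        cv.notify_all();  // the CPU may now queue another frame
        std::printf("GPU: rendering frame %d\n", frame);
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

int main() {
    std::thread gpu(gpuThread);
    for (int frame = 0; frame < 5; ++frame) {
        std::printf("CPU: preparing frame %d\n", frame);
        std::this_thread::sleep_for(std::chrono::milliseconds(4));
        std::unique_lock<std::mutex> lock(m);
        // Block here whenever the CPU gets too far ahead of the GPU.
        cv.wait(lock, [] { return frameQueue.size() < kMaxQueued; });
        frameQueue.push(frame);  // "submit" the frame's commands
        lock.unlock();
        cv.notify_all();
    }
    {
        std::lock_guard<std::mutex> lock(m);
        done = true;
    }
    cv.notify_all();
    gpu.join();
    return 0;
}
```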

To reduce this overhead, today's games can have the CPU prepare commands for several frames at once and send them to the GPU in batches. This used to be an optional feature where you could manually pick the number of frames buffered, but it has since become a balancing act that happens in the background without your knowledge. This pre-planning of frames is known as frame buffering, and you can undoubtedly already see the problem: what you see on-screen can and will run a few frames behind your inputs whenever the CPU needs to lighten its load.
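
That buffering knob is still exposed to developers. As one example (a sketch, not the scheduler itself): in a Direct3D 11 app, DXGI lets you cap how many frames the CPU may queue ahead of the GPU via SetMaximumFrameLatency, where the default is three. This snippet creates a bare device just to show the call; a real game would use the device it already has.

```cpp
// Build (MSVC): cl /EHsc frame_latency.cpp d3d11.lib
#include <d3d11.h>
#include <dxgi.h>
#include <cstdio>

int main() {
    // Create a basic hardware device; no swap chain needed for this demo.
    ID3D11Device* device = nullptr;
    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr,
                                   0, nullptr, 0, D3D11_SDK_VERSION,
                                   &device, nullptr, nullptr);
    if (FAILED(hr)) return 1;

    IDXGIDevice1* dxgiDevice = nullptr;
    if (SUCCEEDED(device->QueryInterface(__uuidof(IDXGIDevice1),
                                         reinterpret_cast<void**>(&dxgiDevice)))) {
        // Cap CPU-ahead frames at 1 for the lowest input latency, at the
        // cost of less CPU/GPU overlap. DXGI's default is 3.
        dxgiDevice->SetMaximumFrameLatency(1);
        std::printf("Maximum frame latency set to 1\n");
        dxgiDevice->Release();
    }
    device->Release();
    return 0;
}
```

GPU driver control panels expose similar per-game overrides (Nvidia's Low Latency Mode, AMD's Radeon Anti-Lag), which is part of why this balancing act now mostly happens without your involvement.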

The Dilemma: Low Input Latency, or Reduced CPU Load

When we tested the effects of GPU Hardware Scheduling, we were using a system with an Nvidia RTX 2080 Ti and an Intel Core i9-9900K, which was the best gaming CPU money could buy (until the Core i9-10900K came around).

However, with a processor this powerful, scheduling GPU frames isn't exactly a demanding task for your central piece of silicon, so you don't really have to choose between reducing input latency and reducing CPU load. If the i9-9900K has cycles to spare anyway, why not let it schedule frames one by one? If anything, your expensive CPU is finally proving its money's worth.

But not everyone has a $500 CPU to play games on, and that's where GPU hardware scheduling should make a bigger difference when gaming.

GPU Hardware Scheduling Should Benefit Low-End CPUs More

Nvidia's Pascal and Turing GPUs, as well as AMD's RDNA graphics cards, all have a purpose-built hardware scheduler baked into their silicon. It's much more efficient at this work than your CPU, and it doesn't require going back and forth over the PCIe bus.

However, switching from software scheduling on the CPU to hardware scheduling on the GPU fundamentally changes one of the pillars of the graphics subsystem. It touches the hardware, the operating system, the drivers, and how games are coded, which is why it has taken this long to arrive.

The transition to hardware-accelerated GPU scheduling isn't going to be an easy one. That's why Microsoft isn't enabling the feature by default yet, but offering it as an opt-in setting. You can find it under Settings -> System -> Display -> Graphics Settings, but you'll need to be on the latest version of Windows 10 (version 2004, the May 2020 Update) and have the right AMD or Nvidia drivers installed on your system.
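
If you want to verify the toggle actually stuck after the required reboot, the setting is persisted in the registry. As widely documented (though not a formal API, so treat the details as an assumption), the Settings page writes the HwSchMode value under HKLM\SYSTEM\CurrentControlSet\Control\GraphicsDrivers, where 2 means on and 1 means off. A quick C++ check might look like this:

```cpp
// Build (MSVC): cl /EHsc hags_check.cpp advapi32.lib
#include <windows.h>
#include <cstdio>

int main() {
    // HwSchMode: 2 = hardware-accelerated GPU scheduling on, 1 = off.
    // The value is absent on builds/drivers that don't support the feature.
    DWORD mode = 0;
    DWORD size = sizeof(mode);
    LSTATUS rc = RegGetValueW(
        HKEY_LOCAL_MACHINE,
        L"SYSTEM\\CurrentControlSet\\Control\\GraphicsDrivers",
        L"HwSchMode", RRF_RT_REG_DWORD, nullptr, &mode, &size);
    if (rc != ERROR_SUCCESS) {
        std::printf("HwSchMode not found; OS or driver may not support it.\n");
    } else {
        std::printf("HwSchMode = %lu (%s)\n", mode,
                    mode == 2 ? "enabled" : "disabled");
    }
    return 0;
}
```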

Enabling the feature today shouldn't lead to any issues, but as with any technology this young, try not to be surprised if it does. Given the opt-in nature, it could be months (years?) before we realize the full benefits of GPU hardware scheduling. But if all goes as planned, pairing a powerful graphics card with a mid-tier CPU is about to make a whole lot more sense for gaming.

Niels Broekhuijsen

Niels Broekhuijsen is a Contributing Writer for Tom's Hardware US. He reviews cases, water cooling, and PC builds.