AMD confirms AI NPU monitoring is coming to Windows Task Manager

(Image credit: AMD)

AMD confirmed that system monitoring of its XDNA Neural Processing Units (NPUs) is coming soon to Windows Task Manager through Microsoft's Compute Driver Model (MCDM). Currently, Windows 11 can only monitor the NPUs in Intel's new Core Ultra "Meteor Lake" CPUs, but that will change once these updates arrive. An exact release schedule was not disclosed, but it is reasonable to assume AMD is targeting integration with Microsoft's next AI-focused update, Windows 11 24H2.

AMD will use Microsoft's Compute Driver Model (MCDM) to enable Windows 11 to monitor the utilization of its XDNA NPUs. MCDM is an offshoot of the Windows Display Driver Model (WDDM) designed specifically for compute-only processors such as NPUs. According to AMD, MCDM also enables Windows to manage the NPU, including power management and scheduling, much as it does for the CPU and GPU. This will become important as NPU adoption grows and multiple programs try to run on the NPU simultaneously.

If AMD's implementation mirrors Intel's, Task Manager will show the NPU's compute and copy utilization in two separate graphs, along with the total and shared amount of memory the NPU is using. These first-generation NPUs don't have any dedicated memory, so they share system RAM. If future iterations do add dedicated NPU memory, Task Manager should display that as well, just as it does for GPUs.

AMD says that Task Manager NPU monitoring will be important for the future of computing, arguing that it can make software development easier and improve device optimization for users, such as maximizing battery life.

We are in the infancy of NPU support, but we could very well see many AI-assisted programs running on these new AI-focused processing units. The main advantage of NPUs is localized hardware acceleration, enabling AI programs to run on the local machine itself (as Nvidia demonstrated with Chat with RTX) rather than relying on a cloud-based solution that can be slower and leak confidential information. NPUs also allow AI workloads to run in environments with no internet connection, or where internet service is spotty or unreliable.

AMD is already on its second-generation NPU architecture, dubbed XDNA2. The first implementation debuted with AMD's Ryzen 7040 "Phoenix" CPUs in 2023, delivering 10 TOPS (INT8) of performance. AMD claims XDNA2 is roughly three times faster, and it will arrive with the upcoming "Strix Point" mobile CPUs; the newer Ryzen 8040 "Hawk Point" series uses an enhanced version of the first-generation XDNA NPU. AMD's desktop Ryzen 8000G APUs also feature an NPU, but they are all based on the older Phoenix silicon with its first-generation XDNA engine.

Aaron Klotz
Freelance News Writer

Aaron Klotz is a freelance writer for Tom’s Hardware US, covering news topics related to computer hardware such as CPUs and graphics cards.

  • Amdlova
    "IA" backdoors
    Reply
  • usertests
    Amdlova said:
    "IA" backdoors
    It's interesting because it can cut both ways.

    Having dedicated machine learning hardware on your machine can allow you to run "AI" stuff locally, with no connection to remote machines where your data is being slurped. You can play with your uncensored local LLMs or Stable Diffusion generation. Maybe it can be used for locally run voice-activated assistants (like Mycroft instead of Alexa) and so on.

    On the other hand, now there's a low-power accelerator that most people may not use 99% of the time, that can be used to do inference tasks in the background to spy on the machine, without taking up other resources. Instead of reporting to the spy cloud constantly, it can wait until it analyzes something suspicious or illegal and send the evidence in one burst. Maybe malware could be programmed to use it to give itself more adaptability than before, again without noticeably impacting system usage and battery life.

    So really, not much will change. People with low understanding of their computer/OS will get screwed by it.
    Reply
  • Joseph_138
    usertests said:
    It's interesting because it can cut both ways.

    Having dedicated machine learning hardware on your machine can allow you to run "AI" stuff locally, with no connection to remote machines where your data is being slurped. You can play with your uncensored local LLMs or Stable Diffusion generation. Maybe it can be used for locally run voice-activated assistants (like Mycroft instead of Alexa) and so on.

    On the other hand, now there's a low-power accelerator that most people may not use 99% of the time, that can be used to do inference tasks in the background to spy on the machine, without taking up other resources. Instead of reporting to the spy cloud constantly, it can wait until it analyzes something suspicious or illegal and send the evidence in one burst. Maybe malware could be programmed to use it to give itself more adaptability than before, again without noticeably impacting system usage and battery life.

    So really, not much will change. People with low understanding of their computer/OS will get screwed by it.

    You know people will be using it to generate AI nudes, without having to log in to a website, right?
    Reply
  • usertests
    Joseph_138 said:
    You know people will be using it to generate AI nudes, without having to log in to a website, right?
    They have already been doing it for over a year now so yeah.

    It's kind of mind-boggling that Stable Diffusion has only been out for about 18 months.
    Reply