MSI Releases BIOS Updates Enabling Intel APO For 14th Gen CPUs

MSI BIOS updates for APO (Image credit: MSI)

Earlier this month, MSI started rolling out BIOS updates adding support for Intel's APO (Application Optimization) technology that's built into the Intel Extreme Tuning Utility (XTU). This was pointed out by @ghost_motley on X, but the news comes with a few caveats.

The biggest caveat is, of course, that Intel APO is only supported on 14th Gen Intel CPUs, even though many of the affected motherboards can also be used with previous-generation Intel CPUs. Considering how close 13th Gen and 14th Gen Intel processors are in raw performance and design, critics are already pointing out that locking the feature to 14th Gen buyers seems arbitrary.

Not even all 14th Gen Intel users get to enjoy the fruits of APO, though, at least not yet. MSI's rollout of Intel APO is currently restricted to the Intel Core i7-14700K, Core i7-14700KF, Core i9-14900K, and Core i9-14900KF. Since Intel APO is targeted at improving gaming performance in supported titles, the omission of Core i5 support is striking, and will hopefully be addressed in the future.

So, what makes Intel APO such a big deal? The gaming performance gains it offers can be pretty startling: up to a 31% increase in supported titles. Gains of this magnitude can make Intel CPUs even better for gaming than the best Ryzen X3D CPUs, though many more games would need to support APO before its performance boons should factor into buying decisions.

Rainbow Six Siege (Image credit: Steam)

For now, Intel APO shows a great deal of promise but has very little support, even within its intended audience of 14th Gen Intel users. With any luck, 14th Gen Core i5 owners will eventually be able to enjoy the benefits of APO as well, but at the moment, Core i5 users on MSI boards are out of luck.

Additionally, Intel APO currently only supports Rainbow Six Siege and Metro Exodus, which further reduces its realistic impact on the PC gaming market at large.

Christopher Harper
Contributing Writer

Christopher Harper has been a successful freelance tech writer specializing in PC hardware and gaming since 2015, and ghostwrote for various B2B clients in high school before that. Outside of work, Christopher is best known to friends and rivals as an active competitive player in various esports (particularly fighting games and arena shooters) and a purveyor of music ranging from Jimi Hendrix to Killer Mike to the Sonic Adventure 2 soundtrack.

  • emike09
    If Intel wants APO to win, they need to get massively aggressive with it, the way Nvidia pushes DLSS so hard. Haters hate DLSS, but DLSS is amazing, and it's just software optimizations of existing (though dedicated) hardware. Nobody will care about APO unless it gets used by the gaming masses.
    Reply
  • bit_user
    In case anyone cares, my take on APO is that it's a band-aid solution to what's better addressed through application, API, and OS design.

    The problem it's trying to solve is essentially how to prioritize different threads in an application. Where "AI" enters the picture is that the APO software must try to guess which threads need how much priority. This information is used to determine how best to apportion the CPU's power budget to the various cores, depending on which threads are running on them.

    If you imagine a world where the OS provided a prioritization scheme and games used it correctly to characterize how latency-sensitive different threads are, then the OS could employ fairly generic strategies for scheduling those threads and distributing the power budget accordingly (rough sketch below).

    Sadly, this stuff takes a long time to change. Worse, the presence of such band-aid solutions lessens incentives for players like Intel to push for such generic solutions. However, I think a well-designed and properly-utilized generic solution could actually do a better job than APO would typically achieve.
    Reply
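The kind of application-driven prioritization described above already has partial plumbing on Windows. What follows is a rough, purely illustrative sketch: the thread roles and the idea of a game tagging its own threads are assumptions, and the only real pieces are the Win32 calls themselves (SetThreadPriority, and SetThreadInformation with the ThreadPowerThrottling class). This is not how APO works; it's just a concrete picture of the generic alternative being discussed.

```cpp
// Hypothetical illustration only: how a game *could* tag its threads using
// knobs that already exist in the Win32 API. The thread roles below
// ("latency-critical", "background") are made up for the example.
#include <windows.h>

// Mark the calling thread as latency-critical: raise its priority and opt it
// out of power throttling so the scheduler prefers fast cores / high clocks.
void tag_latency_critical_thread()
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_ABOVE_NORMAL);

    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = 0;  // 0 = do NOT throttle this thread's execution speed
    SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                         &state, sizeof(state));
}

// Mark the calling thread as background work (e.g. asset streaming): allow the
// OS to run it slower or on efficiency cores, saving power budget for the
// latency-critical threads.
void tag_background_thread()
{
    SetThreadPriority(GetCurrentThread(), THREAD_PRIORITY_BELOW_NORMAL);

    THREAD_POWER_THROTTLING_STATE state = {};
    state.Version     = THREAD_POWER_THROTTLING_CURRENT_VERSION;
    state.ControlMask = THREAD_POWER_THROTTLING_EXECUTION_SPEED;
    state.StateMask   = THREAD_POWER_THROTTLING_EXECUTION_SPEED;  // throttling allowed
    SetThreadInformation(GetCurrentThread(), ThreadPowerThrottling,
                         &state, sizeof(state));
}
```

The point is simply that the hints are per-thread and come from the application itself, which would let the OS schedule threads and spend the power budget generically instead of relying on per-game heuristics.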
  • thestryker
    bit_user said:
    However, I think a well-designed and properly-utilized generic solution could actually do a better job than APO would typically achieve.
    It'd be much better for the end users and industry in general, but APO seems to utilize access that Intel will never open up to anyone, so it would be extremely hard to match its performance.
    bit_user said:
    If you imagine a world where the OS provided a prioritization scheme and games used it correctly to characterize how latency-sensitive different threads are, then the OS could employ fairly generic strategies for scheduling those threads and distributing the power budget accordingly.
    This, to me, is a pretty big failing on Microsoft's part, as they realistically only have two vendors to deal with. AMD and Intel have a lot of built-in tools for controlling their CPUs, but Microsoft has done very little to optimize around them.

    Every time I start up my Ally, it makes me a little sad that you're forced to use third-party software to get optimal power/performance. This is something Valve has absolutely nailed with SteamOS on the Steam Deck.
    Reply
  • bit_user
    thestryker said:
    It'd be much better for the end users and industry in general, but APO seems to utilize access that Intel will never open up to anyone, so it would be extremely hard to match its performance.
    I don't know much about it, but I believe they did add support for their Thread Director to Linux. To the extent it just relies on that + their normal interface for managing CPU core clockspeed & power utilization, I think it should be possible to replicate (rough sketch of that interface below).
    Reply
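For context on the "normal interface" mentioned in the comment above: on Linux, per-core clock limits are exposed through the standard cpufreq sysfs tree. The sketch below is illustrative only; the core number and the idea of clamping it are arbitrary assumptions, and nothing here is APO. It simply shows the generic knobs user space already has.

```cpp
// Illustration of the standard Linux cpufreq sysfs interface. Paths follow the
// usual /sys/devices/system/cpu/cpuN/cpufreq/ layout; nothing APO-specific.
#include <fstream>
#include <iostream>
#include <string>

// Read one cpufreq attribute (e.g. "scaling_cur_freq") for a given core, in kHz.
long read_cpufreq(int cpu, const std::string& attr)
{
    std::ifstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                    "/cpufreq/" + attr);
    long khz = -1;
    f >> khz;
    return khz;
}

// Clamp one core's maximum frequency (requires root; value in kHz).
bool set_max_freq(int cpu, long khz)
{
    std::ofstream f("/sys/devices/system/cpu/cpu" + std::to_string(cpu) +
                    "/cpufreq/scaling_max_freq");
    if (!f) return false;
    f << khz;
    f.flush();
    return f.good();
}

int main()
{
    // Demonstration only: cap core 2 to its minimum frequency, freeing
    // thermal/power headroom for other cores. Undo by writing the value of
    // cpuinfo_max_freq back to scaling_max_freq.
    long min_khz = read_cpufreq(2, "cpuinfo_min_freq");
    std::cout << "cpu2 min freq: " << min_khz << " kHz\n";
    if (!set_max_freq(2, min_khz))
        std::cout << "need root to write scaling_max_freq\n";
    return 0;
}
```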
  • thestryker
    bit_user said:
    I don't know much about it, but I believe they did add support for their Thread Director to Linux. To the extent it just relies on that + their normal interface for managing CPU core clockspeed & power utilization, I think it should be possible to replicate.
    If it was that simple, it wouldn't be limited to 4 SKUs; they'd have covered all of the current 14th Gen.
    Reply
  • bit_user
    thestryker said:
    If it was that simple, it wouldn't be limited to 4 SKUs; they'd have covered all of the current 14th Gen.
    They implied the reason why they had to limit it is that their deep learning model controls how much power to send to which cores. I don't know if they're aware of which threads are even running on which cores! I think their deep learning model tries to infer that, based on the activity pattern.

    As I said, it's a band-aid solution. Because they're doing an end-run around the proper way to manage thread priorities and resources, they have to rely on heuristics that are specific to both the application and the hardware. A clean solution should be much more scalable and portable (sketch below of a thread simply reporting which core it's on).
    Reply
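On the narrower question of knowing which thread is on which core: if the application cooperates, no inference is needed, because the OS will simply report it. The sketch below is a made-up example built around the real glibc call sched_getcpu(); the worker threads and the polling loop are illustrative assumptions, not anything Intel or APO actually does.

```cpp
// Linux/glibc example (build with: g++ -pthread). Each worker periodically
// reports which logical CPU it is currently executing on via sched_getcpu().
#include <sched.h>      // sched_getcpu()
#include <atomic>
#include <chrono>
#include <cstdio>
#include <thread>

std::atomic<bool> running{true};

void worker(int id)
{
    while (running.load()) {
        // The kernel already knows where this thread is running; just ask it.
        std::printf("thread %d is on cpu %d\n", id, sched_getcpu());
        std::this_thread::sleep_for(std::chrono::milliseconds(500));
    }
}

int main()
{
    std::thread a(worker, 0), b(worker, 1);
    std::this_thread::sleep_for(std::chrono::seconds(2));
    running.store(false);
    a.join();
    b.join();
    return 0;
}
```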
  • thestryker
    bit_user said:
    They implied the reason why they had to limit it is that their deep learning model controls how much power to send to which cores. I don't know if they're aware of which threads are even running on which cores! I think their deep learning model tries to infer that, based on the activity pattern.
    Where did you see any of this? Intel has been real cagey about APO, but everything they've put out directly refers to custom testing with hardware/software.
    Reply
  • bit_user
    thestryker said:
    Where did you see any of this? Intel has been real cagey about APO, but everything they've put out directly refers to custom testing with hardware/software.
    Based on the clues they've provided, the only thing I can imagine is that they're using AI to detect which threads are running on which cores and to inform how to adapt per-core frequencies on that basis. That makes it both application- and CPU-specific, which they've also said.

    We know the underlying hardware didn't change, so the only special hardware they're using should be the Thread Director - and all that does is collect stats characterizing the behavior of each thread.
    Reply
  • thestryker
    bit_user said:
    Based on the clues they've provided, the only thing I can imagine is that they're using AI to detect which threads are running on which cores and to inform how to adapt per-core frequencies on that basis. That makes it both application- and CPU-specific, which they've also said.

    We know the underlying hardware didn't change, so the only special hardware they're using should be the Thread Director - and all that does is collect stats characterizing the behavior of each thread.
    It's interfaced through DTT, which polls internal sensors Intel has been using since TGL and which is used predominantly for power management (it was designed for notebooks). Thread Director undoubtedly comes into play in a specialized manner, but it shouldn't be doing anything new here.

    It's the application part I'm trying to figure out, since it doesn't need to be integrated into said application and a generic model wouldn't really work. At the same time, I can't imagine anything more than a profile being integrated into the APO application itself (it's ~10MB).
    Reply
  • bit_user
    thestryker said:
    It's the application part I'm trying to figure out, since it doesn't need to be integrated into said application and a generic model wouldn't really work. At the same time, I can't imagine anything more than a profile being integrated into the APO application itself (it's ~10MB).
    You don't actually need a very big deep learning model to map from a set of thread parameters to core frequency/power settings. In fact, in order to be fast, the model needs to be small (toy example below).
    Reply
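To put a rough number on how small such a model can be: the toy sketch below maps a handful of per-core activity statistics to a frequency scale factor with a single hidden layer, roughly a hundred parameters in total. Every detail here (the feature count, the layer size, the zeroed weights) is invented for illustration and has no connection to what Intel actually ships.

```cpp
// Toy illustration of how small a thread-stats -> frequency-target model can be.
// All names, sizes, and (zeroed) weights are made up; this is not APO.
#include <array>
#include <cstdio>

constexpr int kFeatures = 4;   // per-core stats, e.g. IPC, stalls, wakeups, cache misses
constexpr int kHidden   = 16;  // one small hidden layer

// Weights would come from offline training; zero-initialized here for brevity.
std::array<std::array<float, kFeatures>, kHidden> w1{};
std::array<float, kHidden> b1{};
std::array<float, kHidden> w2{};
float b2 = 0.0f;

// Map one core's activity stats to a frequency scale factor in [0, 1].
float frequency_score(const std::array<float, kFeatures>& stats)
{
    std::array<float, kHidden> h{};
    for (int i = 0; i < kHidden; ++i) {
        float acc = b1[i];
        for (int j = 0; j < kFeatures; ++j) acc += w1[i][j] * stats[j];
        h[i] = acc > 0.0f ? acc : 0.0f;                          // ReLU
    }
    float out = b2;
    for (int i = 0; i < kHidden; ++i) out += w2[i] * h[i];
    return out < 0.0f ? 0.0f : (out > 1.0f ? 1.0f : out);        // clamp to [0, 1]
}

int main()
{
    // (16*4 + 16) + (16 + 1) = 97 parameters total: small enough to evaluate
    // every few milliseconds per core without measurable overhead.
    std::array<float, kFeatures> fake_stats{0.9f, 0.1f, 0.3f, 0.2f};
    std::printf("core score: %.2f\n", frequency_score(fake_stats));
    return 0;
}
```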