Core Ultra 9 285K is slower than Core i9-14900K in gaming, according to leaked Intel slide — Arrow Lake consumes less power, though

Leaked image of Intel's alleged Core Ultra 9 285K CPU
(Image credit: VideoCardz)

Intel's upcoming Core Ultra 9 processor is set to become the company's new top-of-the-range desktop part. However, it is not going to beat the previous-generation flagship Core i9-14900K in games, according to alleged Intel slides leaked by wxnod, who tends to have access to such documents ahead of product launches (still, keep in mind that we are dealing with a leak). There is a catch, though: the new CPU is significantly more energy efficient than its predecessor.

One of the leaked Intel slides indicates that the average frame rate across a set of games is 261 FPS on a Core Ultra 9 285K system versus 264 FPS on a Core i9-14900K system, a difference that is hardly significant. What is substantial is the gap in power consumption between the previous-generation and next-generation desktop platforms: the Core Ultra 9 285K-based machine consumes around 447W, whereas the Core i9-14900K-based machine consumes 80W more, at 527W.
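To put the slide's numbers in perspective, here is a quick performance-per-watt calculation in Python. It uses only the two averages from the leak; the per-game data behind them is not public.

# Average FPS and total system power, as reported on the leaked slide
arrow_lake = {"fps": 261, "watts": 447}   # Core Ultra 9 285K system
raptor_lake = {"fps": 264, "watts": 527}  # Core i9-14900K system

fps_gap = (raptor_lake["fps"] - arrow_lake["fps"]) / arrow_lake["fps"]
eff_arrow = arrow_lake["fps"] / arrow_lake["watts"]
eff_raptor = raptor_lake["fps"] / raptor_lake["watts"]

print(f"14900K frame rate lead: {fps_gap:.1%}")                      # ~1.1%
print(f"285K perf-per-watt lead: {eff_arrow / eff_raptor - 1:.1%}")  # ~16.6%

On the slide's own figures, the 285K gives up about 1% of frame rate in exchange for roughly 17% more frames per watt.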

A more detailed slide indicates that the Core Ultra 9 285K can be up to 13% slower than the Core i9-14900K (in Far Cry 6) or up to 15% faster (in F1 23). In many games, the results of the two systems are similar. Yet, in multiple cases, the new processor, codenamed Arrow Lake, consumes 34W to 165W less than the previous-generation Raptor Lake Refresh CPU.
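One caveat when reading such percentages: "slower by X%" and "faster by X%" are not mirror images, because they use different baselines. A minimal sketch with a hypothetical 100 FPS figure (the leak does not include per-game frame rates):

# Hypothetical numbers for illustration only -- not from the leaked slide
raptor_fps = 100.0                    # Core i9-14900K in some title
arrow_fps = raptor_fps * (1 - 0.13)   # 285K "13% slower": 87 FPS

# Flip the baseline and the same gap reads as ~14.9%, not 13%
print(f"14900K lead over 285K: {raptor_fps / arrow_fps - 1:.1%}")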

It should be noted that the tests were carried out at 1080p. On the one hand, this resolution demonstrates the biggest possible difference between CPUs, since it keeps the GPU from being the bottleneck; on the other hand, it dramatically reduces the practical value of the results, as demanding gamers who buy high-end CPUs such as the Core i9-14900K rarely play at such a low resolution.

Compared to AMD's Ryzen 9 7950X3D, the new Core Ultra 9 285K processor is consistently faster in content creation applications, including PugetBench, Blender, Cinebench 2024, and POV-Ray. As for game benchmarks, the upcoming flagship CPU from Intel can be 21% slower than AMD's Ryzen 9 7950X3D (in Cyberpunk 2077) or 15% faster (in Rainbow Six: Siege).

Unfortunately, the leaked slides do not cover the performance difference between Intel's upcoming Core Ultra 9 285K and its direct rival, AMD's Ryzen 9 9950X, so we do not really know how Intel's new flagship stacks up against AMD's range-topping model. The good news is that Intel's Arrow Lake reviews are coming soon, so it will not take long before we learn everything we need to know about the new CPUs.

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • Stesmi
    I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
    Article said:
    ... consistently faster in content creation applications, including PugetBench, Blender, Cinebench 2024, and POV-Ray.
  • philipemaciel
    "The Core Ultra 9 285K-based machine consumes around 447W"

    I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

    How an abhorrence like this (never mind the 14900K being worse) is even seeing the light of day is beyond me.
  • TheHerald
    philipemaciel said:
    "The Core Ultra 9 285K-based machine consumes around 447W"

    I remember when the 220W FX-9590 was released and all the due and fair criticism it received.

    How an abhorrence like this (never mind the 14900K being worse) is even seeing the light of day is beyond me.
    I know people can never miss the chance to dunk on Intel, but the article is talking about gaming, and in gaming the majority of the power draw is the GPU. Assuming he is testing with a high-end GPU, 300+ W of those 447 W is the GPU itself.
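    For what it's worth, a back-of-the-envelope version of that argument, where the GPU and platform figures are assumptions rather than anything from the leak:

    # Total system power for the 285K rig, per the leaked slide
    system_watts = 447

    # Assumed, NOT from the leak: a high-end GPU under gaming load, plus
    # motherboard, RAM, fans, drives, and PSU losses
    gpu_watts = 300
    platform_watts = 60

    cpu_watts = system_watts - gpu_watts - platform_watts
    print(f"Implied CPU draw: ~{cpu_watts} W")  # ~87 W on these assumptions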
  • Dustyboy1492
    We'll see after some driver optimization; most new products take a bit to reach their full potential. I'd be curious to see the die size comparison as well; I'd bet 14th gen is larger.
  • abufrejoval
    Stesmi said:
    I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
    It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

    To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.

    But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

    In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest CPU, a Ryzen 7950X3D, and the RTX 4090 in that machine might even put rather big EPYCs to shame.

    Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.

    Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

    So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
  • TheHerald
    abufrejoval said:
    It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

    To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.

    But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

    In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest CPU, a Ryzen 7950X3D, and the RTX 4090 in that machine might even put rather big EPYCs to shame.

    Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.

    Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

    So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
    My 4090 is about 23 times faster than my 12900K, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600X is as fast as a 7950X, completely missing the fact that the 7950X can run three times as many of those workloads with zero slowdown.
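    A minimal sketch of that scaling argument, assuming identical per-core performance and perfect scheduling (both simplifications; only the core counts are real specs):

    # Zen 4 core counts (real); the workload model is idealized
    cores = {"Ryzen 5 7600X": 6, "Ryzen 9 7950X": 16}
    threads_per_job = 4  # a lightly threaded benchmark workload

    for cpu, n in cores.items():
        jobs = n // threads_per_job
        print(f"{cpu}: {jobs} such job(s) at full speed simultaneously")
    # A single 4-thread run makes the chips look roughly equal;
    # total throughput under many concurrent jobs does not.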
  • Stesmi
    abufrejoval said:
    It does tend to get lost that Cinebench isn't actually the product Maxon is earning money with, but started mostly as a tool to evaluate hardware to use with their content creation software.

    To my knowledge they used CPU rendering for the longest time there, to match the quality expectations of their clients.
    Oh yeah, for sure. It really wasn't that long ago that CPU rendering was the only thing used. Or, to me, having used 68000 for raytracing, it's not ... such... a long... time ago. Real3D I think it was called.
    abufrejoval said:

    But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.

    In GPU rendering via Cinebench 2024, even an RTX 4060 seems to beat my beefiest CPU, a Ryzen 7950X3D, and the RTX 4090 in that machine might even put rather big EPYCs to shame.

    Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just as Handbrake is a very niche tool in a video conversion space dominated by ASICs doing dozens of streams in real time: both just happen to be readily available to testers, not (or no longer) useful as such.
    Yeah, the only place it makes sense is if you want to use some option that your ASIC / hardware renderer doesn't support. But then again, I'm sure offloading a video encode to the compute cores (not the hardware video encoder) might well be faster than pure CPU; it's just not done, as the dedicated encoder is faster still, even though it may not produce higher quality per bitrate.
    abufrejoval said:

    Publishers make money from creating attention for vendor products that may have very little real-life advantages over previous gen products.

    So a car that now has 325km/h top speed vs 312km/h in the previous generation gets a lot of attention, even if the best you can actually hope to achieve is 27km/h in your daily commuter pileups.
    Yeah, also halo cars. "Oh, look at that car with a twin-turbo, supercharged V12!" "I'll go buy the one with the 3-cylinder that looks sort of the same." And guess what? It works. Please don't take the example as a real vehicle.
  • abufrejoval
    TheHerald said:
    My 4090 is about 23 times faster than my 12900k, but that's not the point of Cinebench. It is used to see the maximum performance of a CPU. Testing something that only uses 2 or 4 cores might lead you to believe that a 7600x is as fast as a 7950x completely missing the fact that the 7950x can run 3 times as many of those workloads with 0 slowdown.
    The point of Cinebench is to evaluate hardware for Maxon work. That's what it is designed and maintained for, while Maxon may also see it as a nice marketing tool.

    The point of reviewers using Cinebench is to compare CPUs, ...somehow.

    I'd argue that the latter transitions into abuse when you argue that a faster CPU will help you create content faster or better. Clearly you might be better off with a GPU today, perhaps even with one of those iGPUs these SoCs have, once those are supported by your content creation tools.

    I completely understand the dilemma reviewers find themselves in; I just wish they'd occasionally reflect on whether the standard text blocks they've been using for the last ten years, recommending ever more powerful CPU cores for "things like content creation", need to be adapted these days.

    It's gotten to the point where it's no longer informational and borders on a lie. And not everyone has been in the business long enough to understand what those blurbs actually imply: newbies might take them at face value!

    These days nearly any use case that used to take lots of CPU power to solve gets bespoke hardware, even neural nets, when I'd prefer using that real estate on a laptop for something useful like V-Cache.
  • bit_user
    Stesmi said:
    I know, I know, I may be nitpicking, but how on earth is Cinebench 2024 a content creation application, unless you mean a tool used by benchmarkers to make... content?
    Cinebench is a benchmark tool designed to characterize how fast rendering in Cinema 4D will run. That's its original purpose. Someone doing software rendering on their PC will pay close attention to it and to Blender benchmarks, because those should be predictive of the kind of rendering performance they'll experience.

    abufrejoval said:
    But now that Maxon (and Cinebench) seems to support high quality rendering also via GPUs, actually using a strong CPU to do Maxon based content creation would be a bad idea.
    I've read people claiming they still use CPUs for rendering large scenes that won't fit in the amount of memory available on consumer GPUs. I'm not sure how big an issue this is specifically for Cinema 4D.

    abufrejoval said:
    Nobody in his right mind should therefore actually continue to use CPU rendering for Maxon content creation, just like Handbrake is a very niche tool in video content conversion dominated by ASICs doing dozens of streams in real-time: both just happen to be readily available to testers, not or no longer useful as such.
    For the longest time, it was said that you needed software video encoders, if you wanted the best possible quality.
  • bit_user
    Intel is probably now thinking they should've slowed down Raptor Lake even more, with their final mitigation for the degradation problem.
    ; )
    On a more serious note, this is one of the main theories I had for why Intel would do Bartlett Lake. Plus, I never believed that BS story about how it was intended for some communications vertical, especially given that it's socketed. And when a full lineup of the Bartlett Lake family leaked a couple of months ago, it finally put that sorry excuse to bed.

    There are only two good reasons for Intel to do it: 1) Arrow Lake is too weak in key use cases (e.g. gaming) and 2) Arrow Lake & its platform will be priced unattractively to certain budget markets.