Intel Core i9-13900KS Thermals, Power Consumption, and Boost Clocks
Intel's Adaptive Boost Technology (ABT) dynamically boosts to higher all-core frequencies based on available thermal headroom and electrical conditions, so peak frequencies can vary. By design, this tech allows the chip to operate at 100C during normal operation: if the chip runs under the 100C threshold, it will increase its power consumption until it reaches the safe 100C limit, thus providing more performance. However, this feature is only active on the Core i9-13900K/KF and the Core i9-13900KS, so other Raptor Lake processors won't exhibit the same behavior.
You can think of ABT much like a dynamic auto-overclocking feature, but because the chip stays within Intel's 100C temperature spec, it is a supported and warrantied feature that doesn't fall into the same classification as overclocking. ABT uplift will vary by chip: much of the frequency gain depends on silicon quality, so the silicon lottery comes into play, along with your cooling and power delivery capabilities.
Remember, AMD's Ryzen 7000 also runs at its limit of 95C at stock settings, so higher temperatures have become the norm for both chipmakers.
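The boost-until-the-limit behavior described above can be sketched as a simple control loop. This is an illustrative toy model, not Intel's actual algorithm; the starting temperature, baseline power, step size, and thermal coefficient are all assumed values.

```python
# Illustrative toy model of Adaptive-Boost-style behavior, NOT Intel's
# actual control algorithm: keep adding package power while thermal
# headroom remains under the 100C limit.
TEMP_LIMIT_C = 100.0

def abt_step(temp_c, power_w, step_w=5.0, c_per_watt=0.25):
    """One control iteration: add power if there is thermal headroom."""
    if temp_c < TEMP_LIMIT_C:
        power_w += step_w                  # spend the headroom on power
        temp_c += step_w * c_per_watt      # toy linear thermal response
    return min(temp_c, TEMP_LIMIT_C), power_w

# Assumed starting point: 70C under load at a 253W baseline.
temp, power = 70.0, 253.0
for _ in range(40):
    temp, power = abt_step(temp, power)
print(f"settled at {temp:.0f}C, {power:.0f}W")  # -> settled at 100C, 373W
```

The loop settles exactly at the temperature limit: power only climbs while there is headroom, which is why a bigger cooler translates directly into a higher sustained power draw.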
To remove thermals as a limitation, we always use the same 280mm Corsair H115i AIO cooler for all our test systems. However, Intel says that it achieved the best results with the KS paired with a 360mm radiator, so we tested the impact by running the same tests with all power limits removed on a 280mm and a 360mm AIO. Our single-threaded test also shows that the 13900KS boosted to 6 GHz frequently, regardless of the cooler.
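If you want to observe this kind of boost behavior on your own machine, Linux exposes each core's current clock through the kernel's cpufreq sysfs interface. This is a generic sketch, not part of our test methodology; the path is standard on cpufreq-enabled kernels, and the list is simply empty on systems without it.

```python
# Watching boost clocks on Linux via the kernel's cpufreq sysfs
# interface; sysfs reports frequencies in kHz. On systems without
# cpufreq, the glob matches nothing and the list is empty.
from pathlib import Path

def core_freqs_mhz():
    """Current frequency of each core in MHz (empty list if unavailable)."""
    base = Path("/sys/devices/system/cpu")
    return [int(p.read_text()) / 1000
            for p in sorted(base.glob("cpu[0-9]*/cpufreq/scaling_cur_freq"))]

print(max(core_freqs_mhz(), default=0.0))  # peak core clock right now, in MHz
```

Polling this in a loop while a single-threaded workload runs is an easy way to catch short-lived 6 GHz boost excursions that coarser monitoring tools can miss.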
The chip reached 100C with both coolers during a series of heavily multi-threaded apps, like y-cruncher, Cinebench, Blender, and POV-Ray, just as Intel designed it to do. The 360mm cooler config ran the same 5.6 GHz all-core clock during heavy work as the 280mm setup, yet it also consumed around 20-25W more during some types of work.
That 360mm's additional cooling capacity enables that extra bit of power consumption, but the ~7% to 9% increase occurs at the top of the chip's voltage/frequency curve, where increased power consumption is incredibly inefficient: as you near peak power, double-digit percentage increases in power consumption often yield only single-digit percentage performance gains. That means this slightly higher power draw won't make much difference in actual benchmarks, which we'll detail below.
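The diminishing returns fall out of the textbook dynamic-power relation, P ≈ C·V²·f: the last bit of frequency requires a voltage bump, so power rises much faster than performance. The voltage and frequency points below are illustrative assumptions, not measured values from our testing.

```python
# Toy numbers showing why the top of the voltage/frequency curve is so
# inefficient: dynamic power scales roughly with C * V^2 * f, and the
# last ~100 MHz needs a voltage bump. All values are assumptions.
def dyn_power(v, f_ghz, c=1.0):
    """Relative dynamic power, P ~ C * V^2 * f."""
    return c * v * v * f_ghz

p_base = dyn_power(v=1.25, f_ghz=5.5)  # assumed 5.5 GHz operating point
p_push = dyn_power(v=1.32, f_ghz=5.6)  # +100 MHz needs extra voltage

perf_gain = 5.6 / 5.5 - 1         # ~1.8% more frequency
power_gain = p_push / p_base - 1  # ~13.5% more power
print(f"perf +{perf_gain:.1%}, power +{power_gain:.1%}")
# -> perf +1.8%, power +13.5%
```

Even with these generous toy numbers, a sub-2% frequency gain costs well over 10% more power, which matches the pattern we see in the measured results.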
We also overclocked the chip via turbo multipliers to 6 GHz on two P-cores and 5.7 GHz when more than two cores are active, while dialing in a 4.4 GHz all-core overclock on the E-cores. The chip required only a 1.29V vCore to sustain these frequencies, indicating it is a cherry chip. The overclocked config uses less power than the stock settings, showing that the chip's native power management is inefficient. As you'll see below, the overclocked settings delivered much more performance than the stock settings while consuming less power.
The chip barely pushed over 300W with standard applications, but switching gears to the strenuous but not-at-all-realistic Prime95 stress test yielded a much higher peak power consumption of 328W. Power consumption leveled off after a short period, so it's possible a custom watercooling loop could allow the chip to consume more for a longer period of time. But, again, this won't result in very meaningful performance improvements in real-world workloads — this chip is tuned to the absolute top of its voltage/frequency curve.
| Tom's Hardware - Prime95 | Peak Power | Average Power |
|---|---|---|
| 13900KS, No Power Limit, 360mm AIO | 328W | 295W |
| 13900KS, Overclocked, 360mm AIO | 321W | 296W |
There isn't a huge difference in cooling capacity between our 280mm and 360mm AIOs (we'd see a bigger difference with a 240mm vs 360mm comparison). As you can see in our cumulative performance measurements above, the improved cooling doesn't result in a linear improvement in performance in our gaming or application benchmarks.
The improvement in real-world gaming and productivity applications with the 360mm cooler was around 1%. As such, we used the test results from our 280mm configuration for our gaming and productivity benchmarks.
Our overclock gave us an extra 5% in 1080p gaming and 4% in threaded work, but be aware that the increased memory throughput (we used DDR5-6800 for the overclocked config) is a big contributor here. Additionally, the overclocked vanilla 13900K trailed by only 1% in games and 2.5% in threaded apps. Further tuning, or more luck in the silicon lottery, could narrow that gap.
Temperatures can limit your performance during stock operation, so if you purchase the 13900KS, plan for a powerful cooler to extract the full performance. We think a 280mm AIO would be sufficient, but if you're chasing the last 1% of performance, a 360mm AIO will get you there.
- MORE: AMD vs Intel
- MORE: Zen 4 Ryzen 7000 All We Know
- MORE: Raptor Lake All We Know
Paul Alcorn is the Managing Editor: News and Emerging Tech for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.
Brian D Smith: Less 'overclocking' and more 'underclocking' articles, please.
That would be helpful for the ever-growing segment who does NOT need the testosterone rush of having the 'fastest' ... and wants more info on logical underclocking to, well, do things like get the most out of a CPU without the burden of water cooling, its maintenance, and the chance of screwing up their expensive systems.
These CPUs and new systems would be flying off the shelves much faster than they are if only people did not have to take such measures for all the heat they generate. It's practically gone from being able to 'fry an egg' on a CPU to 'roasting a pig'. :(
bit_user: Seems like the article got a new comment thread, somehow. The original thread was:
https://forums.tomshardware.com/threads/intel-core-i9-13900ks-review-the-worlds-first-6-ghz-320w-cpu.3794179/
I'm guessing because it had previously been classified as a News article and is now tagged as a Review.
bit_user: Thanks for the thorough review, @PaulAlcorn!
Some of the benchmarks are so oddly lopsided in Intel's favor that I think it'd be interesting to run them in a VM and trap the CPUID instruction. Then, have it mis-report the CPU as a Genuine Intel of some Skylake-X vintage (because it also had AVX-512) and see if you get better performance than the default behavior.
For the benchmarks that favor AMD, you could try disabling AVX-512, to see if that's why.
Whatever the reason, it would be really interesting to know why some benchmarks so heavily favor one CPU family or another. I'd bet AMD and Intel are both doing this sort of competitive analysis in their respective labs.
letmepicyou (replying to bit_user): Well, we've seen the video card manufacturers code drivers to give inflated benchmark results in the past. Is it so outlandish to think Intel or AMD might make alterations in their microcode or architecture in favor of high benchmark scores vs. being overall faster?
bit_user (replying to letmepicyou): Optimizing the microcode for specific benchmarks is risky, because you don't know that it won't blow up in your face with some other workload that becomes popular in the next year.
That said, I was wondering whether AMD tuned its branch predictor on things like 7-zip's decompression algorithm, or if it just happens to work especially well on it.
To be clear, what I'm most concerned about is that some software is rigged to work well on Intel CPUs (or AMD, though less likely). Intel has done this before in some of their first-party libraries (Math Kernel Library, IIRC). And yes, we've seen games use libraries that effectively do the same thing for GPUs (who can forget when Nvidia had a big lead in tessellation performance?).
hotaru251: Intel: "We need a faster chip"
Eng 1: What if we make it hotter & uncontrollably force power into it?
Eng 2: What if we try something else that doesn't involve guzzling power as the answer?
Intel: Eng 1, you're a genius!
bit_user (replying to hotaru251): Part of the problem might be in Intel's manufacturing node. That could limit the solution space for delivering competitive performance, especially when it also needs to be profitable. Recall that Intel 7 is not EUV, while TSMC has been using EUV since N7.
froggx (replying to Brian D Smith): Intel has at least once in the past disabled the ability to undervolt. Look up the "Plundervolt" vulnerability. Basically, around the 7th and 8th gen CPUs, it was discovered that under very specific conditions most users would never encounter, undervolting allowed some kind of exploit. The solution: push a Windows update preventing the CPU from being set below stock voltage. I have a Kaby Lake in a laptop that was undervolted a good 0.2V, which knocked a good 10°C off temps. One day it started running hotter and surprise! I can still overvolt it just fine, though; I guess that's what matters for laptops. Essentially, as useful as undervolting can be, Intel doesn't see it as something worthwhile compared to "security."
TerryLaze (replying to bit_user): Being able to withstand higher extremes is a sign of better manufacturing, not worse.
Intel CPUs can take a huge amount of watts and Vcore without blowing up; these are signs of quality.
TSMC getting better is how AMD was able to double the W in this generation.
You don't have to push it just because it is pushable.
View: https://www.youtube.com/watch?v=jv3uZ5Vlnng