Report: Intel Battlemage arriving in 2024, Arrow Lake will consume 100W less power than 14th Gen, overclocking unaffected by latest Raptor Lake microcode updates
Statements attributed to Intel suggest that Arrow Lake will be significantly more power efficient than Raptor Lake and won't succumb to Raptor Lake's stability issues.
Intel has purportedly made some huge announcements in China surrounding its upcoming Arrow Lake CPUs and Battlemage GPUs, as well as its Raptor Lake CPU microcode update. A Chinese reporter on Weibo (sourced via VideoCardz) reports that Arrow Lake will have huge efficiency gains, consuming 100W less power than Raptor Lake, and that Intel's next-gen discrete GPUs, codenamed Battlemage, will arrive within the year.
Before going any further, it's worth mentioning that we can't verify the claims this reporter attributes to Intel. Take this info with a grain of salt until Intel confirms these statements officially.
The Chinese reporter — known as Little Pigeon — was invited to a secret Intel and Asus internal exchange meeting where Intel announced all of these details. At the event, Intel showed a slide announcing updates regarding its upcoming Raptor Lake CPU microcode update, Arrow Lake, and Battlemage.
The slide states that Raptor Lake BIOS updates will not affect the turbo-boosting functionality of its 13th- and 14th-Generation K-series CPUs and that overclocking functionality will be maintained for these chips. (Intel is referring to the mid-August microcode update that aims to rectify Raptor Lake's instability issues.) In conjunction with this, Intel also specified that it is extending warranty coverage by an additional two years on its 13th- and 14th-Generation Core i5 K-series and KF-series processors and above. (The warranty extension has already been confirmed by Intel.)
Arrow Lake will purportedly consume "at least 100W" less power than Raptor Lake CPUs while "maintaining high frequencies." "The updated process [node] will eliminate previous high voltage issues, ensuring stability." Additionally, the Intel slide revealed that Arrow Lake performance is "expected to be impressive," but didn't disclose how much faster Arrow Lake will be.
Last but not least, Intel's next-generation GPU architecture, Battlemage, is supposedly coming this year "with significant performance improvements" under the hood.
If these statements are true, it suggests that Intel's next-generation Arrow Lake platform will not exhibit the same problems as Raptor Lake, including its sky-high power consumption. Intel hints that Arrow Lake will be a much more power-efficient architecture that will help Intel close the power-efficiency gap to AMD and its latest Ryzen 9000 series processors.
Intel's statement regarding Battlemage is another confirmation that Intel wants to release its next-gen GPU architecture this year, rather than 2025. A previous report stated that Intel wants to launch Battlemage by fall to capture holiday sales — possibly in line with the Nvidia RTX 50 series launch (though Blackwell may now be delayed until 2025).
Aaron Klotz is a contributing writer for Tom's Hardware, covering news related to computer hardware such as CPUs and graphics cards.
-
ThomasKinsley
"The updated process will eliminate previous high voltage issues, ensuring stability."
I'm trying to wrap my brain around this sentence. I suppose it's already known that the 13th and 14th Gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty? -
Eximo
The concept is that the voltage being asked for was spiking higher than it was supposed to due to errors in the Thermal Velocity Boost code, allowing excess voltage while the CPU was hot.
A fix is as simple as preventing those voltages from being requested until CPU temperatures return to normal. As long as the CPU boost behavior remains similar, the performance impact should be minimal.
It doesn't undo the damage that was already done, though.
Also, yay Battlemage! I have high hopes for an ASRock B770 as a replacement for my 3080 Ti. I want to put my last EVGA card up on a shelf. -
Konomi
ThomasKinsley said: "I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?"
Hypothetically, if the only two issues were indeed the via oxidation issue that was supposedly fixed and the elevated voltages, then it would come down to the silicon lottery whether there'd be any performance downsides from the microcode changes. Some CPUs may not boost as high as a result, while others may not see any differences beyond potentially running cooler. Ultimately we'll have to see. I'm certain there are more changes beyond the eTVB fix that we're not being told about yet; until people get their hands on the update, who knows what could happen. -
setgree
Why would Intel, an American company, reveal its secret plans to a Chinese journalist at an event no one's ever heard of?
These are all good guesses, and they might turn out to be true, but that doesn't mean this person knows what they're talking about. -
rluker5
ThomasKinsley said: "I'm trying to wrap my brain around this sentence. I suppose it's already known that 13th and 14th gen defects are at the hardware level, but can a microcode fix genuinely solve it without causing any performance penalty?"
If the main cause of the progressive instability is exposure to voltages over 1.5V, then AMD has this known hardware defect much worse: 1.35V can cause parts of their chips to explode. And we don't even have to mention Nvidia. Could you imagine one of their GPUs at 1.5V?
But really, it isn't already known that 13th and 14th Gen are inherently defective at the hardware level. Why are you passing that conjecture off as established fact? The exact causes and the extent of hardware damage from each are not yet determined. -
Alvar "Miles" Udell
Next generation parts will be faster and more power efficient!
Well yeah, AI is the only place where they can get away with being less power efficient... -
thestryker
We'll have a good idea of baseline Battlemage performance when LNL launches next month. Personally, I'm more interested in whether they've solved the hardware issues that make Alchemist's performance vary so wildly than in the overall performance increase. If Battlemage is 50% faster than Alchemist in the cases where Alchemist performs below a 6600, that isn't a big leap; if it's 50% faster in the cases where Alchemist performs like a 3070, that's another matter.
As for ARL power consumption, I doubt it'll be that much lower in desktop form unless you're comparing against unrestricted RPL. I just don't see Intel going from ~250W to ~150W parts, so if that does happen to be true, it'll be workload specific. There are a lot of things they may be able to do dynamically to lower overall power consumption, though. I base this opinion on a couple of RPL facts:
APO has shown higher performance with lower power consumption
It's possible to tune the power consumption to meet or exceed AMD's efficiency in heavy/light workloads, but not both at the same time -
ThomasKinsley
rluker5 said: "But really, it isn't already known that 13th and 14th gen are inherently defective at the hardware level. Why are you passing that conjecture off as established fact? The exact causes and extent of damaged hardware from each is not yet determined."
Maybe it's my headache that's giving me trouble, but what I meant was that it sounds a little too celebratory. If the microcode fix currently being rolled out resolves the problem with minimal setback, then why brag that the next chip doesn't have the fault? Let's suppose Arrow Lake does have the defect. Would it matter, now that Intel has a fix for it... assuming Intel does have a fix? Apparently Intel thinks so, and that worries me, because you only do a victory dance about the new node being defect-free if the current node's microcode update is not perfect. -
spongiemaster
setgree said: "Why would Intel, an American company, reveal its secret plans to a Chinese journalist at an event no one's ever heard of? These are all good guesses and they might turn out to be true but that doesn't mean this person knows what they're talking about."
This was a confidential presentation to Asus, so the information is likely accurate if it was reported correctly. -
thestryker
ThomasKinsley said: "Maybe it's my headache that's giving me trouble, but what I meant was that it sounds a little too celebratory. If the microcode fix currently being rolled out resolves the problem with minimal setback, then why brag that the next chip doesn't have the fault?"
You're conflating two different things: a bug in the algorithm used to determine operating voltage, and high voltage being required for high clocks. A properly functioning RPL part is still going to demand 1.5V+ for maximum boost. To put that in perspective, putting 1.5V through the 6th Gen HEDT part I use would very likely fry it almost immediately. Zen 4 voltages go well over 1.4V to reach their boost clocks (I'm not sure if they hit 1.5V, as I don't have any Zen 4 chips). Hypothetically speaking, if Intel is able to pull something like 5.7GHz at ~1.3V across all CPUs, that would be a game changer.