Intel Core Ultra Series 3 CPUs could finally answer AMD's V-Cache: Nova Lake rumored to boast a massive 144MB of L3

Core Ultra 200S CPU (Image credit: Intel)

Intel is rumored to be introducing a large new L3 cache pool that will rival AMD's 3D V-Cache with its next-gen Core Ultra series CPUs (likely the Core Ultra 400 series, if Core Ultra 300 ends up as an Arrow Lake refresh). Leakers Haze and Raichu on X believe this new last-level cache will have a capacity of 144MB.

If true, Intel's next-gen L3 cache would be noticeably larger than AMD's 3D V-Cache equivalents. At 144MB in total, Intel's next-gen chips would have 16MB more L3 than multi-CCD Ryzen X3D CPUs such as the Ryzen 9 9950X3D, and a whopping 48MB more than AMD's highly popular Ryzen 7 9800X3D.
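
For reference, a quick sanity check of those deltas, using AMD's published L3 totals and the rumored 144MB figure:

```python
# L3 capacities in MB: Nova Lake per the leak, Ryzen totals per AMD's specs
l3_totals = {
    "Nova Lake bLLC (rumored)": 144,
    "Ryzen 9 9950X3D": 128,  # 2 x 32MB on-die + 64MB stacked V-Cache
    "Ryzen 7 9800X3D": 96,   # 32MB on-die + 64MB stacked V-Cache
}

nova = l3_totals["Nova Lake bLLC (rumored)"]
for chip, mb in l3_totals.items():
    if chip != "Nova Lake bLLC (rumored)":
        print(f"{chip}: {mb} MB -> Nova Lake advantage: {nova - mb} MB")
```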

The extra cache, dubbed bLLC (big Last Level Cache), will apparently behave similarly to AMD's 3D V-Cache, acting as an extra block of cache separate from the integrated (regular) L3 cache attached to each core. The extra block could be implemented in one of two ways: either mounted directly above or below the physical cores, just like AMD's implementation, or installed as a separate tile altogether that connects to the cores through Intel's interconnect.

The next Core Ultra series is rumored to come with a hybrid core configuration, just like its current-generation counterparts, potentially making these chips the first hybrid CPUs to sport an L3 cache of this magnitude. However, the flagship Nova Lake die will allegedly not receive the larger cache die (not at launch, at the very least). The only core configuration both leakers say will come with bLLC features eight P-cores, 16 E-cores, and four LPE-cores, though one leaker (Haze) believes a slightly lower-end eight P-core, 12 E-core, four LPE-core configuration will also be available. Regardless, Nova Lake's rumored flagship, with 16 P-cores, 32 E-cores, and four LPE-cores, won't have bLLC.

If Intel introduces bLLC to Nova Lake, it could finally have an opportunity to challenge AMD for the gaming performance crown after years of Team Red domination. Nova Lake is Intel's successor to the Core Ultra 200S series (Arrow Lake), and is rumored to feature upgraded Coyote Cove P-cores and Arctic Wolf E-cores. Additionally, Intel will allegedly be adding LPE-cores to desktop for the first time, giving Nova Lake even more cores to play with and better power efficiency under low-load conditions.

The only negative is that Nova Lake may require a new socket, forcing customers to change motherboards if they want to upgrade. The new architecture will supposedly use the LGA 1954 socket, which will reportedly retain the same form factor as the current LGA 1851 socket.


Aaron Klotz
Contributing Writer

Aaron Klotz is a contributing writer for Tom’s Hardware, covering news related to computer hardware such as CPUs and graphics cards.

  • Notton
    Is it just me?
    I don't see the benefit of E-cores or LPE-cores on a desktop. It's plugged into a wall, so why would you let a high-power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra-large L3 cache.

    I can see the benefit in a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

    Now, if it's high-power and efficient when going full throttle, that I can see being worthwhile.

    My i7-13700K gaming PC does not idle any lower than my former gaming PC with a 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
    Reply
  • wussupi83
    Notton said:
    Is it just me?
    I don't see the benefit of E-cores or LPE-cores on a desktop. It's plugged into a wall, so why would you let a high-power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra-large L3 cache.

    I can see the benefit in a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

    Now, if it's high-power and efficient when going full throttle, that I can see being worthwhile.

    My i7-13700K gaming PC does not idle any lower than my former gaming PC with a 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
    I wonder if Intel added E-cores to desktop to squeeze out a little extra multi-threaded performance while staying within certain power envelopes?
    Reply
  • edzieba
    Notton said:
    Is it just me?
    I don't see the benefit of E-cores or LPE-cores on a desktop. It's plugged into a wall, so why would you let a high-power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra-large L3 cache.

    I can see the benefit in a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

    Now, if it's high-power and efficient when going full throttle, that I can see being worthwhile.

    My i7-13700K gaming PC does not idle any lower than my former gaming PC with a 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
    All desktop CPUs (and GPUs, for that matter) have been power-budget (and thus thermal-budget) limited for the better part of a decade. Making components of a CPU die consume less power means you have more power available for other components. If you can shove background tasks that are time-insensitive (i.e. that will keep running regardless of how long execution takes, like OS background operation) onto a core that uses x watts less power, then that's x extra watts of headroom for your primary core(s) to burst into and complete time-sensitive tasks faster.
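
    A toy illustration of that trade-off, with invented wattages (real figures vary by chip, clocks, and workload):

    ```python
    # Hypothetical numbers: a fixed package power budget, and the same
    # background task costing more on a P-core than on an E-core.
    PACKAGE_LIMIT_W = 125                            # total socket power budget
    BACKGROUND_COST_W = {"P-core": 15, "E-core": 5}  # same task, two core types

    for core_type, watts in BACKGROUND_COST_W.items():
        headroom = PACKAGE_LIMIT_W - watts
        print(f"Background task on {core_type}: {headroom} W left for P-core boost")
    ```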
    Reply
  • abufrejoval
    Notton said:
    Is it just me?
    I don't see the benefit of E-cores or LPE-cores on a desktop. It's plugged into a wall, so why would you let a high-power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra-large L3 cache.

    I can see the benefit in a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.
    No, it's not just you; I'd argue much the same.

    But Intel had to use E-cores on the lower-power devices, because their P-core performance collapsed at low single-digit wattages, while Zen can reach much lower.

    And since they had their Atoms ready for cut & paste, they ran the numbers and found that they could even pull ahead in some benchmarks with P- and E-cores, which was crucial for Intel: they couldn't afford to lose the #1 spot.
    Notton said:
    Now, if it's high-power and efficient when going full throttle, that I can see being worthwhile.

    My i7-13700K gaming PC does not idle any lower than my former gaming PC with a 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
    The efficiency crown is a very complex topic, because the energy use of all that stuff beyond the SoC starts to enter the picture, including the power supply itself. And when you also want to allow for high peak performance (and wattage), the low end will suffer. And then there is still the purchase price, even if, in the long run, electricity might be more important, again depending on how you use the machine...
    Reply
  • dalek1234
    You can say that Nova Lake will have more 3D cache than AMD's "current" CPUs, but Nova Lake will come out around the same time as Zen 6, and MLID leaked that Zen 6 is designed to incorporate as much as 240MB, a lot more than Nova Lake will have.

    "Designed" in this case means they can do up to that maximum, or less. Which Zen 6 CPUs will get the full 240MB isn't yet known.
    Reply
  • abufrejoval
    After a few years of having systems with and without V-Cache side by side, my personal experience is that it's overrated, most of the time.

    Of course, I prefer playing at 4K and ultra settings using an RTX 4090, and at that point everybody seems to agree that the main bottleneck is the GPU.

    But I wonder how many mainstream gamers will actually care about 400 vs. 200 FPS?

    The main attraction of buying a V-Cache chip was resting assured that you'd get the best no matter what, while it was certainly good enough for a bit of browsing and office work.

    I know, because that's why I bought one, too: a 5800X3D, to replace a 5800X, with a 5950X side by side for some time.

    It's also why I kept buying the 5800X3D for the kids even after it had officially become last-gen tech, because it was still in the lead pack and by far good enough, as the main bottleneck remained the GPU.

    And when I do a full Linux kernel compile or an Android build, I get a cup of coffee anyway; a few seconds more or less don't really matter, while eight extra cores mean I won't drink two. As it turned out, the 5950X really wasn't bad enough at gaming to notice, but those extra cores were really cheap (at one point in time), and sometimes as useful as V-Cache could be. So of course I went with a 7950X3D next to get both :)

    So while Intel knew their CPUs were really good enough for gaming even without V-Cache, AMD was able to use those top-performer laurels relentlessly against them, just as Intel had used its #1 spot to push AMD into second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.

    Yet I just can't see Intel clawing back that #1 slot even with that cache, because they won't be able to sustain it, given their cost structure. AMD didn't just get where they are today because they managed to beat Intel once: they showed that they could consistently beat Intel generation after generation, even while using the same socket for the longest time.

    And they did that at a price that didn't break the bank; I've seen estimates of $20 extra production cost for a V-Cache CCD.

    V-Cache did as much as double the performance on some specific HPC workloads, and I've also heard EDA mentioned. And that's where it originated: the consumer market was a skunkworks project that turned out a gamer-crown guarantee, while V-Cache EPYCs helped pay for the R&D and production scale.

    And that may be missing again from Intel: the ability to scale their variant of V-Cache far and wide for economy. Otherwise they risk doing another Lunar Lake: a great performer with the help of niche technology, but not a money maker across the board because it's too expensive to make.

    What most people do not appreciate is that AMD won, and is winning, the x86 battle not just on a performance lead, but on price/performance at the production level. And without similar or even lower production cost for better performance, Intel doesn't stand a chance of catching up.
    Reply
  • TerryLaze
    Notton said:
    Is it just me?
    I don't see the benefit of E-cores or LPE-cores on a desktop. It's plugged into a wall, so why would you let a high-power CPU idle? I'd rather have it complete a job faster with more P-cores, especially if it has an extra-large L3 cache.

    I can see the benefit in a battery-powered device, or a lower-power PC that idles a lot, like an everyday PC for browsing, email, and YouTube, but a high-performance desktop? You have to be kidding me.

    Now, if it's high-power and efficient when going full throttle, that I can see being worthwhile.

    My i7-13700K gaming PC does not idle any lower than my former gaming PC with a 5800X3D. In my arsenal, the efficiency crown goes to a mini-PC using an i5-12450H, if I don't count the SD7 Gen3+ in my tablet.
    Intel's main customers for client PCs are big corporations that have thousands of PCs running all day long, and most of the time there's just somebody bored out of their mind looking at the screen or doing very simple things.
    abufrejoval said:
    After a few years of having systems with and without V-Cache side by side, my personal experience is that it's overrated, most of the time.

    Of course, I prefer playing at 4K and ultra settings using an RTX 4090, and at that point everybody seems to agree that the main bottleneck is the GPU.

    But I wonder how many mainstream gamers will actually care about 400 vs. 200 FPS?

    The main attraction of buying a V-Cache chip was resting assured that you'd get the best no matter what, while it was certainly good enough for a bit of browsing and office work.

    I know, because that's why I bought one, too: a 5800X3D, to replace a 5800X, with a 5950X side by side for some time.

    It's also why I kept buying the 5800X3D for the kids even after it had officially become last-gen tech, because it was still in the lead pack and by far good enough, as the main bottleneck remained the GPU.

    And when I do a full Linux kernel compile or an Android build, I get a cup of coffee anyway; a few seconds more or less don't really matter, while eight extra cores mean I won't drink two. As it turned out, the 5950X really wasn't bad enough at gaming to notice, but those extra cores were really cheap (at one point in time), and sometimes as useful as V-Cache could be. So of course I went with a 7950X3D next to get both :)

    So while Intel knew their CPUs were really good enough for gaming even without V-Cache, AMD was able to use those top-performer laurels relentlessly against them, just as Intel had used its #1 spot to push AMD into second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.

    Yet I just can't see Intel clawing back that #1 slot even with that cache, because they won't be able to sustain it, given their cost structure. AMD didn't just get where they are today because they managed to beat Intel once: they showed that they could consistently beat Intel generation after generation, even while using the same socket for the longest time.

    And they did that at a price that didn't break the bank; I've seen estimates of $20 extra production cost for a V-Cache CCD.

    V-Cache did as much as double the performance on some specific HPC workloads, and I've also heard EDA mentioned. And that's where it originated: the consumer market was a skunkworks project that turned out a gamer-crown guarantee, while V-Cache EPYCs helped pay for the R&D and production scale.

    And that may be missing again from Intel: the ability to scale their variant of V-Cache far and wide for economy. Otherwise they risk doing another Lunar Lake: a great performer with the help of niche technology, but not a money maker across the board because it's too expensive to make.

    What most people do not appreciate is that AMD won, and is winning, the x86 battle not just on a performance lead, but on price/performance at the production level. And without similar or even lower production cost for better performance, Intel doesn't stand a chance of catching up.
    AMD is not making their CPUs for cheap: last year they had a ~12% margin from their desktop department, and this year TSMC US is going to be anywhere from 5 to 20% more expensive. AMD is one step away from having to pay on top of every desktop CPU they sell.
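
    A rough, illustrative calculation of that squeeze; the 12% margin is from the comment above, while the 50/50 cost split is purely an assumption:

    ```python
    # Illustrative only: the cost split is assumed, not public.
    revenue = 100.0                 # normalize revenue to 100
    margin = 0.12                   # ~12% desktop margin cited above
    cost = revenue * (1 - margin)   # = 88
    wafer_share = 0.5               # assume half of cost is TSMC silicon

    for hike in (0.05, 0.10, 0.20):             # rumored US-fab price premium
        new_cost = cost * (1 + wafer_share * hike)
        new_margin = (revenue - new_cost) / revenue
        print(f"{hike:.0%} wafer hike -> margin {new_margin:.1%}")
    ```

    Even under these made-up assumptions, a 20% silicon price hike cuts the cited 12% margin to roughly 3%, the "one step away" described above.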
    Reply
  • EzzyB
    abufrejoval said:
    But I wonder how many mainstream gamers will actually care about 400 vs. 200 FPS?
    I've wondered about this for some time. The community has largely judged the "best" gaming CPU on benchmarks with a 4090 or 5090 running games at 1080p. The FPS in almost every game is so far off the scale that, even though real games are used, the benchmark is essentially synthetic. No one is going to play CS at 1080p and 600 FPS.

    There's a point somewhere where the human eye can't tell the difference; I'm not sure where that is, but for me it seems to be MUCH lower, around 120 FPS. That said, I talked to a guy on a forum yesterday who was threatening to throw his system in the trash because it wouldn't break 130. 😜 Mind you, I'm really old and remember when 30 FPS was the holy grail of 3D gaming.
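
    The frame-time arithmetic (frame time in ms = 1000 / FPS) backs that up: going from 200 to 400 FPS only saves 2.5 ms per frame, while going from 30 to 120 FPS saved 25 ms:

    ```python
    # Milliseconds per frame at various frame rates: 1000 / FPS
    for fps in (30, 120, 200, 400, 600):
        print(f"{fps:>3} FPS -> {1000 / fps:5.2f} ms per frame")
    ```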
    abufrejoval said:
    So while Intel knew their CPUs were really good enough for gaming even without V-Cache, AMD was able to use those top-performer laurels relentlessly against them, just as Intel had used its #1 spot to push AMD into second fiddle. And #2 simply isn't a good place to be, as AMD knows full well from long suffering. Running hot and burning down didn't help Intel, either.
    I'd take it one step further: the AMD X3D chips were designed specifically to win those benchmarks. And as much as everyone seems to pan the latest Intel chips, they were actually quite efficient and better performers in most things other than gaming. They were marginally better chips, just without the cache.

    So while I understand why Tom's tests CPUs the way it does (under other conditions there just isn't enough separation in performance), it's really not as useful as it could be. I'd like to see them, as you say, also run tests at 1440p/4K ultra, with perhaps 1440p becoming the new "standard". The results would be more useful.
    Reply
  • abufrejoval
    TerryLaze said:
    AMD is not making their CPUs for cheap: last year they had a ~12% margin from their desktop department, and this year TSMC US is going to be anywhere from 5 to 20% more expensive. AMD is one step away from having to pay on top of every desktop CPU they sell.
    Perhaps AMD is selling desktop parts at a low margin, but that scale still helps them make EPYCs much more cheaply than they sell them for. It's the 50% server margin that has Lisa Su smiling so brightly.

    Intel loses, or makes much less, on both, and TSMC can't ultimately price every CPU maker out of the market; even AI chips need a surviving CPU, and Nvidia is working on making it Grace.

    Of course, TSMC isn't just greedy: they need the money for the next gen, and that's the Moore's Law discussion, where not everything technically possible is economically viable, ultimately even for the sole surviving #1.
    Reply
  • TerryLaze
    abufrejoval said:
    Perhaps AMD is selling desktop parts at a low margin, but that scale still helps them make EPYCs much more cheaply than they sell them for. It's the 50% server margin that has Lisa Su smiling so brightly.
    Yeah, it's going to be a negative margin with the TSMC US prices. How does negative scale?!
    TSMC US is going to be more expensive, not just for desktop CPUs... AMD servers made in the US could have 20% less margin next round.
    Reply