Intel Plays Defense: Inside Its EPYC Slide Deck

Intel, like many other vendors, holds press workshops before key product releases. The company invited several publications and analysts, Tom's Hardware among them, to its Jones Farm campus in Hillsboro, Oregon for two days of marathon briefings. These included 15 sessions, 10 slide decks, and 365 slides outlining nearly every detail of its new Xeon Scalable Processor family.

It was an almost overwhelming amount of information to sift through, but much-needed as we worked to pack as much relevant detail as possible into our coverage.

One presentation stuck out more than the rest. Intel presented a deck that outlined what it considers to be its advantages against AMD’s EPYC CPUs. The slides generated a lot of controversy over the last week, but they haven't been presented in context. We’re going to fix that. But first, some background:

Competition Heats Up

AMD was last competitive in the server space around five years ago, which allowed Intel to gobble up ~99.6% of the market. EPYC has the potential to change this by virtue of its strong performance, scalability, aggressive pricing, and segmentation that is less confusing than Intel's Xeon line-up.

Most analysts surmise that AMD’s latest and greatest poses little short-term threat to Intel’s data center dominance. The conservative enterprise is notoriously slow to adopt unproven designs, and that means the safe money is still on Xeon. It will take time for AMD to reclaim more than a single-digit share of the server space. The company knows this.

Aside from market share, AMD poses a larger threat to Intel’s margins, which can exceed 60%. By strategically snipping features from various models in the Xeon portfolio, Intel is able to maximize the profit it earns across its product stack. Core count, clock rates, memory handling, compute functionality, threading, scalability, and manageability are all used to create unique SKUs with price points to match the features that get turned on.

Intel’s MSRPs are largely irrelevant to its largest customers, some of which are commonly referred to as the Super Seven+1: Google, Facebook, Amazon, Microsoft, Baidu, Alibaba, Tencent, and AT&T. These companies purchase CPUs in high volume and often have access to new processors months in advance of the official launches. They also don’t pay Intel’s official prices. The same goes for other large customers, such as Dell/EMC and other OEMs.

Truth be told, it’s hard to negotiate with a company that essentially controls the world's data centers. Companies commonly hammer out press releases claiming they're rolling out alternative platforms, such as those powered by ARM processors. But many of these are ultimately regarded as a tactic to remind Intel there are other options. After all, while it is possible to switch to ARM, that architecture doesn't support x86 without some sort of emulation. This presents significant technical challenges.

EPYC changes the game. During AMD's launch event, representatives from several major companies took to the stage and expressed support for the platform. Baidu, Microsoft, Supermicro, Dell, Xilinx, HPE, Dropbox, Samsung, and Mellanox were all there. Notice the Super Seven+1 members? Surely there are other high-profile names being courted behind the scenes, so we expect more partner announcements in the future. We can't overstate the importance of OEMs like Dell and HPE, but Sugon also clears the path to the burgeoning ODM market. Xilinx and Mellanox are key partners that might help offset Intel's goals with Purley's integrated networking and FPGA features, and the Azure tie-up portends penetration into cloud-based deployments.

We also see that AMD specifically calls out "harnessing the power of the x86 ecosystem." To that effect, the company lined up a strong roster of hypervisor/operating system and developer tools partners. VMware, Microsoft's server division, and Red Hat also took to the stage at AMD's event.

EPYC also does away with some of Intel's segmentation practices. AMD only manipulates core count, clock rates, and multi-socket support to break up its portfolio. That means customers still get simultaneous multi-threading, along with all of the architecture's PCIe lanes and unaltered memory capacity/speed support, even from the least-expensive models. In short, EPYC offers more connectivity across the board and simpler (purportedly cheaper) motherboards. Instead of "buying up" with Intel for one crucial feature, customers now have less expensive alternatives, some of which revolve around AMD's single-socket server strategy.

These CPUs are a threat to Intel's margins because they give Xeon customers another option. Consequently, Intel might have to get more price-competitive in key portions of its product stack, especially with high-volume customers. That means EPYC could affect Intel's bottom line, even if it doesn't gain significant market share.

There is little doubt that AMD's EPYC will find some measure of success in the data center, and Intel wants to get ahead of any potential adoption. Like most companies, Intel does its own research to gauge the positioning of competitors. Typically, though, the press isn't privy to such defensive documentation. But one of the slide decks we saw at Intel's recent press workshop outlined what Intel feels are the strengths of Xeon compared to the weaknesses of AMD's EPYC. This presentation is generating quite a bit of criticism online. So let's see what Intel had to say...

  • Comments
  • Evil Google
    TL;DR Intel crapping their panties.
    19
  • Aspiring techie
    This is something I'd expect from some run-of-the-mill company, not Chipzilla. Shame on you Intel.
    12
  • deepblue08
    All I see is cheap-shots, kind of low for Intel.
    27
  • bloodroses
    Just like every political race, here comes the mudslinging. It very well could be true that Intel's Data Center is better than AMD's Naples, but there's no fact from what this article shows. Instead of trying to use buzzwords only like shown in the image, back it up. Until then, it sounds like AMD actually is onto something and Intel actually is scared. If AMD is onto something, then try to innovate to compete instead of just slamming.
    22
  • redgarl
    LOL... seriously... track record...? Track record of what? Track record of ripping off your customers, Intel?

    Phhh, your platform is getting trashed in floating-point calculation... by 50%. And thanks for the thermal paste on your high-end chips... no thermal problems involved.
    12
  • InvalidError
    Anonymous said:
    All I see is cheap-shots, kind of low for Intel.

    To be fair, many of those "cheap shots" were fired before AMD announced or clarified the features Intel pointed fingers at.

    That said, the number of features EPYC mysteriously gained over Ryzen and ThreadRipper shows how much extra stuff got packed into the Zeppelin die. That explains why the CCXs only account for ~2/3 of the die size.
    2
  • redgarl
    To Intel, PCIe lanes are important in today's technology push... why?... because of discrete GPUs... something you don't do. AMD knows it; they know that multi-GPU is the goal for AI, crypto, and neural networks. This is what happens when you don't expand your horizons.

    It's taking us back to the old A64.
    4
  • Yuka
    It's funny...

    - They quote WTFBBQTech.
    - They use the term "desktop die" all over the place without batting an eye at their own "extreme" platform being handicapped Xeons.
    - No word on security features. I guess omission is also a "pass" in this case.

    This reads more like a scare tactic aimed at all their customers out there than an attempt to sell a product. Ms. Lisa Su is doing a good job, it seems.

    Cheers!
    18
  • InvalidError
    Anonymous said:
    - They use the term "desktop die" all over the place without batting an eye at their own "extreme" platform being handicapped Xeons.

    The extra server-centric stuff (crypto supervisor, the ability for PCIe lanes to also handle SATA and die-to-die interconnect, the 16 extra PCIe lanes per die, etc.) in Zeppelin didn't magically appear when AMD put EPYC together... so technically, Ryzen chips are crippled EPYC/ThreadRipper dies.
    5
  • Yuka
    Anonymous said:
    Anonymous said:
    - They use the term "desktop die" all over the place without batting an eye at their own "extreme" platform being handicapped Xeons.

    The extra server-centric stuff (crypto supervisor, the ability for PCIe lanes to also handle SATA and die-to-die interconnect, the 16 extra PCIe lanes per die, etc.) in Zeppelin didn't magically appear when AMD put EPYC together... so technically, Ryzen chips are crippled EPYC/ThreadRipper dies.


    I don't know if you're agreeing or not... LOL.

    Cheers!
    0
  • InvalidError
    Anonymous said:
    I don't know if you're agreeing or not... LOL.

    Just pointing out that the "cut down chip" argument works both ways.
    -2
  • the nerd 389
    One reason that Intel didn't mention TDPs is that Intel CPUs can drastically overshoot their TDP if subjected to, say, an AVX-512 workload. AMD CPUs are more consistent under load, albeit with a higher nominal power usage.

    Imagine for a moment: Intel attempting to pitch an unpredictable TCO to a datacenter that encounters a worst-case situation only when using one of the most attractive features of the product. That won't end well for Intel, and I suspect they know it. Especially in IaaS applications.

    It's also worth mentioning that there's no standard for determining TDP; the methodology varies from manufacturer to manufacturer.
    8
  • Steve_104
    Great article, Mr. Alcorn.

    I would point you to the power-related portion of AnandTech's review; I would place even odds that the real-world results on those numbers are the reason Intel didn't bring up TDP. Those numbers didn't look that rosy for Intel, to me, in AnandTech's very rapidly written and published early comparison review.

    It would appear to me, from the presentation of many of their slides, that Intel's PR department is not very confident in their product.

    Can't wait to get my mitts on these new chips (of both flavors) myself.
    4
  • Rookie_MIB
    One thing that I find interesting is that they didn't really address thermals very well, perhaps because the four-die design has a distinct advantage over the monolithic design.

    With some space between the dies, as in the EPYC CPU, you've spread out the thermal load (not a lot, but somewhat), which, combined with the soldered IHS, should help EPYC run cooler.

    Meanwhile, you have 18 or so cores concentrated in Intel's monolithic design, and while that might help with inter-core latency, it doesn't do much good if the CPU has to run slower because of thermal limits.

    Also, according to the AnandTech tests comparing dual EPYC 7601 and dual Xeon 8176 systems: idle power usage was 151W for the EPYC system vs. 190W for the Xeon system. Under MySQL loads, 321W for EPYC vs. 300W for Xeon. Under POV-Ray, 327W for EPYC vs. 453W (!!!) for Xeon.

    All in all, being within +/- 20W isn't too bad, but that 126W margin for Xeon in the POV-Ray testing was rather surprising.
    9
  • none12345
    You forgot to mention the embarrassment on the ecosystem slide, where they duplicated multiple vendors to make it look like they have more than they do. Granted, this is rather meaningless, but it's quite embarrassing that they did it.

    On the L3 near vs. far metric, it's only fair to mention that there are near and far L3 caches on the Xeon as well, so it will also be a concern on Intel chips. It's going to take 9 link hops to get from the core in the upper left down to the lower right. That's going to add a lot of latency.

    The other concern with the mesh network is routing between cores. I'm just going to assume Intel has done its homework here and is routing intelligently, but if it hasn't, there will be bottlenecks around the memory controllers and in the center of the chip.

    Again, it's another consistency issue that will have to be watched.

    Neither Intel's nor AMD's design is bad; they both have trade-offs, which is to be expected with this level of scaling.
    4
  • none12345
    Forgot to mention: on the virtualization segmentation, you could do the exact same thing on Intel chips. What happens with a 30/32/34/36-core VM? It spans two sockets, which would destroy performance. All of those would fit in a single socket on AMD.

    Not trying to dismiss the issue, though; just saying it's an issue on both platforms.

    You gotta tune your workload to your hardware.
    5
  • Knowbody42
    I suspect Nvidia would prefer to use EPYC over Xeon, due to the larger number of PCI-E lanes. And Nvidia has been making a big push to get their GPUs into data centres.
    2
  • PaulAlcorn
    Anonymous said:
    You forgot to mention the embarrassment on the ecosystem slide, where they duplicated multiple vendors to make it look like they have more than they do. Granted, this is rather meaningless, but it's quite embarrassing that they did it.

    On the L3 near vs. far metric, it's only fair to mention that there are near and far L3 caches on the Xeon as well, so it will also be a concern on Intel chips. It's going to take 9 link hops to get from the core in the upper left down to the lower right. That's going to add a lot of latency.

    The other concern with the mesh network is routing between cores. I'm just going to assume Intel has done its homework here and is routing intelligently, but if it hasn't, there will be bottlenecks around the memory controllers and in the center of the chip.

    Again, it's another consistency issue that will have to be watched.

    Neither Intel's nor AMD's design is bad; they both have trade-offs, which is to be expected with this level of scaling.


    We mentioned the duplicated vendors at the top of the last page.
    1
  • mapesdhs
    Anonymous said:
    We mentioned the duplicated vendors at the top of the last page.


    Equally stupid was the implication that AMD hasn't been talking to these companies. It's by far the worst PR nonsense from Intel I've ever seen. The whole thing reads like some kid in a schoolyard yelling, "Yeah, well, my dad could beat up your dad! Nuuuhhh!" Unbelievable. Linus Tech Tips covered it here:

    https://youtu.be/f8sXQ6JsNu8?t=20m23s

    Ian.
    1
  • Yuka
    Anonymous said:
    Anonymous said:
    I don't know if you're agreeing or not... LOL.

    Just pointing out that the "cut down chip" argument works both ways.


    Indeed it does. Thing is, Intel is the one pointing it out here, not AMD. At least, so far, I haven't seen any AMD presentation with this level of snark.

    I mean, I have to admit I did have a chuckle at "glued dies," because, ironically enough, we all said that about the Pentium Ds and C2Qs back then. Intel was just venting all that accumulated anger from those years when AMD had the "real" dual- and quad-core variants.

    Cheers!
    2