Report claims Arm chips will power 90% of custom-processor AI servers by 2029 — x86 and RISC-V on the outside looking in
Should AMD and Intel get more flexible?
Virtually all hyperscale cloud service providers (CSPs), as well as some of the leading developers of AI accelerators, now have their own custom-silicon programs, focused not only on AI accelerators but also on custom general-purpose CPUs, usually based on the Arm instruction set architecture (ISA). Over the next several years, the share of custom Arm-based CPUs inside AI servers will increase to 90%, leaving x86 and RISC-V with around 10%, according to Counterpoint Research.
x86 processors from AMD and Intel have long dominated general-purpose servers, which is why most AI servers initially relied on Opteron and Xeon processors. However, Arm-based custom CPUs tailored for specific data-intensive AI workloads are more cost- and power-efficient. Furthermore, because AI workloads are still emerging, backward compatibility with x86 is not vital. To that end, AWS, Google, and Microsoft have developed their own proprietary Arm-based processors for their own workloads, whereas Meta is the alpha customer for Arm's own AGI processor.
As a result, adoption is unfolding across multiple hyperscalers in parallel. AWS is expanding the role of its Graviton processors across Trainium-based systems, while still retaining x86 in some configurations for compatibility reasons; Google's next-generation TPU infrastructure relies on its Axion Arm CPU; while Microsoft has paired its Azure Cobalt Arm CPU with its Maia accelerators from the beginning to build a vertically integrated AI infrastructure. Meta is also set to begin deploying Arm's own AGI CPUs shortly.
"The transition from x86 to Arm in AI servers is not a single switch," said Neil Shah, vice president of research at Counterpoint Research. "It has played out generation by generation, configuration by configuration. Hyperscalers are making deliberate choices based on their specific deployment needs, writing compatible and interoperable software, and the economics are very encouraging. The transition is expected to accelerate meaningfully in the second half of 2026, driven by the broad deployment of in-house Arm CPUs alongside next-generation ASIC platforms across major hyperscalers."
Nowadays, the majority of CPUs powering AI servers are still x86, but this is set to change shortly: by 2029, 90% of AI servers that use custom processors will rely on Arm, leaving only 10% for x86 and RISC-V. It should be noted that plenty of AI servers will continue to rely on off-the-shelf EPYC and Xeon processors from traditional suppliers, though broad adoption of Arm by hyperscalers for their custom-silicon programs should be a signal for AMD and Intel to make their custom CPU programs more appealing to customers.
"Our analysis projects Arm-based CPUs will account for at least 90% of host CPU deployments in custom AI ASIC servers by 2029, up from around 25% in 2025, a structural shift driven by the accelerating rollout of in-house Arm CPU programs across major hyperscalers," Shah added.
AMD builds its own vertically integrated AI platforms featuring x86 EPYC processors, Instinct MI-series AI accelerators, and Pensando DPUs and NICs, so it is reasonable to assume that these CPUs are tailored for AI workloads. Meanwhile, Intel is developing custom Xeon processors for Nvidia's next-generation AI platforms, which suggests that those processors will also be optimized primarily for AI workloads. All in all, while Arm will get significantly bigger in the AI server realm over the next four to five years, x86 will continue to command a sizeable share of this market.

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers and from modern process technologies and latest fab tools to high-tech industry trends.
-
hotaru251:
AMD's never been against making Arm chips (in fact, they have done so in the past). They just haven't focused on it atm because x86 is still king. -
usertests:
hotaru251 said: "AMD's never been against making arm chips. (in fact they had done so in past) They just havent focused on it atm because x86 is still king."
I've wondered if they'll ever try designing an ARM-x86 hybrid chip for some purpose. Now IBM is trying an ARM-Power hybrid. -
hotaru251:
usertests said: "I've wondered if they'll ever try designing an ARM-x86 hybrid chip for some purpose. Now IBM is trying an ARM-Power hybrid."
Hybrid? Prolly not; if Arm gets adopted enough, they'd likely just go all in on Arm. Apple's shown you can use translation layers well enough to run x86 on Arm. -
Notton:
https://www.notebookcheck.net/Zen-architecture-pioneer-Jim-Keller-feels-AMD-was-stupid-to-cancel-the-K12-Core-ARM-processor.629843.0.html
"Jim's plan with the K12 was to work on a new decode unit since the cache and execution unit design for ARM and x86 were almost similar, but AMD had other plans after he left."
The impression I get from that is that an ARM/x86 hybrid doesn't make much sense if every translation can be done through software. At least when it comes to the Zen architecture. -
thestryker:
This really doesn't seem like an x86 vs. Arm thing so much as general-purpose versus semi-custom. Most of the designs we've seen have been modified Arm cores rather than ground-up designs. This probably explains the shift towards Arm rather than RISC-V. It seems unlikely that Nvidia would dump money into Intel and license NVLink for use in Xeons if they thought that market wouldn't be relevant a couple of years after first productization. -
bit_user:
usertests said: "Now IBM is trying an ARM-Power hybrid."
No, those aren't POWER. They're Telum mainframe CPUs. Totally different ISA. Yes, IBM has two proprietary ISAs.
hotaru251 said: "hybrid? prolly not as if arm gets adopted enough they'd likely just go full in arm."
You don't know the mainframe world. These folks care about legacy in ways that put x86 to shame. -
bit_user:
Notton said: "The impression I get from that is ARM/x86 hybrid doesn't make much sense if every translation can be done through software. At least when it comes to Zen architecture."
Then why does Zen 5 still use 4-wide decoders, while the latest ARM P-cores are 10+ wide? Even Intel's ginormous P-cores are only 8-wide, and not all of those 8 are fully general. In fact, the overhead of decoding ARM64 instructions is so low that the ARM cores which dropped 32-bit support no longer even have mOP caches, which are functionally redundant with L1i caches. Instead, they could spend that silicon and power budget on wider decoders and more pipelines. -
bit_user:
thestryker said: "This really doesn't seem like an x86 v Arm thing so much as general purpose versus semi-custom. Most of the designs we've seen have been modified Arm cores rather than ground up."
That's not true. Amazon, Google, Microsoft, and Nvidia (prior to Vera) all used off-the-shelf cores for their server CPUs. Yes, they packaged them up on their own, but they had no viable alternative. Only Nvidia did any real value-add by integrating NVLink into their own silicon. Going forward, with ARM providing its own silicon, you'll be able to read much more into companies' decisions either to use it or to continue doing their own chip-making. However, once Ampere's Altra fell into obsolescence, you could no longer read anything into anyone's decision not to use it.
thestryker said: "It seems unlikely that nvidia would dump money into Intel and license nvlink for use in Xeons if they thought that market wouldn't be relevant a couple of years after first productization."
I think the two are separate. Nvidia invested in Intel for their fabs, alone. The NVLink partnership was probably done for two reasons:
To give Intel a vital lifeline, complementing the monetary investment.
To deny AMD ownership of the x86 segment in the AI hardware stack. -
thestryker:
bit_user said: "The NVLink partnership was probably done for two reasons: to give Intel a vital lifeline, complementing the monetary investment, and to deny AMD ownership of the x86 segment in the AI hardware stack."
All but one generation of Nvidia's x86 racks were Intel, so it being a lifeline doesn't really make sense. Enough Nvidia customers still want x86 racks that integrating NVLink is likely to compete directly with AMD's forthcoming x86 racks. It very much does not make sense to push for NVLink in Xeons if the market won't be there, though.
bit_user said: "I think the two are separate. Nvidia invested in Intel for their fabs, alone."
I don't really think the two are all that separate. It was a smart time to dump money into Intel (financially speaking) and a great way to get NVLink into Xeons to compete with AMD. In fact, I'd bet it was more political theater than any practicality with regards to the fabs. Intel's board nuked what would have been required for them to be a real alternative any time soon, and there's no world in which Jensen didn't know that.
bit_user said: "Amazon, Google, Microsoft, and Nvidia (prior to Vera) all used off-the-shelf cores for their server CPUs. Only Nvidia did any real value-add by integrating NVLink into their own silicon."
Every company other than Meta named in this article is using Arm cores with their own proprietary features (just nowhere near as major as the Vera modifications) for their latest CPUs. Historically, neither AMD nor Intel has had the flexibility to run semi-custom lines just for specific customers. I do agree completely that the direction will be borne out over the next year or so, as we see whether companies keep with their own or shift towards Arm (or even Nvidia, since they've said they want this to be a business unit). -
bit_user:
thestryker said: "Every company other than Meta named in this article is using Arm cores with their own proprietary features."
Nope. ARM flatly forbids that. If you want to modify the ISA in any way, you'd have to go with RISC-V.
thestryker said: "Historically neither AMD or Intel have had the flexibility to run semi custom lines just for specific customers."
That's not true, either. Intel has been making customer-specific Xeon variants since Broadwell. AMD did a version of EPYC with HBM for Microsoft, although it seems that was a hybrid CPU+GPU where they just disabled the GPU chiplets.