Samsung joins Nvidia's NVLink Fusion program to produce custom AI chips — Spectrum-X networking platform also to be deployed by Meta and Oracle in their data centers
NVLink’s reach goes beyond TSMC, deepening Nvidia’s control over AI system design.

On October 13 at the OCP Global Summit, Nvidia made a series of announcements that demonstrate just how seriously it’s trying to push into the physical supply chain that underpins AI compute. Chief among them was the announcement that Samsung Foundry has joined NVLink Fusion, the program that allows partners to build semi-custom processors wired directly into Nvidia’s proprietary interconnect fabric.
Under the deal, Samsung will offer “design-to-manufacturing” support for companies designing custom CPUs and accelerators that integrate Nvidia’s NVLink-C2C interface. That means a customer working with Samsung can now license Nvidia’s IP, tape out a chip on a Samsung process, and plug it straight into an Nvidia-powered rack. For a company that has spent years relying on TSMC’s silicon, this new partnership illustrates that Nvidia wants multiple foundry footholds for the next phase of AI buildout.
Samsung’s foothold in Nvidia’s AI ecosystem
Samsung’s entry into NVLink Fusion gives Nvidia a second major manufacturing partner after TSMC, broadening the program’s appeal to companies that prefer to work outside Taiwan or want access to Samsung’s advanced 3nm and 2nm nodes. It also marks a subtle shift in power dynamics. By licensing NVLink-C2C and NVLink chiplets to external foundries, Nvidia is effectively turning its interconnect into a platform.
In practice, Samsung will act as a one-stop shop for NVLink-ready custom chips — from design and verification through to packaging and integration. That could include domain-specific accelerators, AI inference ASICs, or regionally tailored CPUs for hyperscale data centers. The goal isn’t to outsource Nvidia’s own GPUs, but to expand the company’s architecture outward; even chips it doesn’t design could still be part of the NVLink universe.
This matters because NVLink has become the connective tissue of Nvidia’s AI dominance. Its cache-coherent chip-to-chip interface delivers vastly higher bandwidth and lower latency than PCIe — up to 14 times faster in its latest generation. Grace Hopper, the superchip that anchors Nvidia’s AI servers, was among the first to use NVLink-C2C. NVLink Fusion takes that same idea and opens it to the wider ecosystem, allowing others to fuse their silicon directly with Nvidia’s.
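The “14 times” figure lines up with Nvidia’s published specs for fifth-generation NVLink (1.8 TB/s of total bandwidth per GPU) against a PCIe Gen5 x16 link (roughly 128 GB/s bidirectional). A quick back-of-the-envelope check, assuming those published figures:

```python
# Back-of-the-envelope check of the "up to 14x over PCIe" claim.
# Figures are Nvidia's published specs, not measurements:
#   - fifth-gen NVLink: 1.8 TB/s total bandwidth per GPU
#   - PCIe Gen5 x16:    ~128 GB/s bidirectional
nvlink_gbps = 1800      # GB/s, fifth-generation NVLink per GPU
pcie5_x16_gbps = 128    # GB/s, PCIe Gen5 x16 bidirectional

speedup = nvlink_gbps / pcie5_x16_gbps
print(f"NVLink vs PCIe Gen5 x16: ~{speedup:.1f}x")
```

Dividing 1,800 GB/s by 128 GB/s gives roughly 14, consistent with the latest-generation claim above.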
By doing so, Nvidia gains the ability to set the terms of integration across an expanding hardware space. Custom chips, alternative CPUs, and even rival accelerators can all coexist, provided that they plug into Nvidia’s proprietary fabric. It’s vertical integration by proxy, ensuring that every new entrant ultimately strengthens Nvidia.
Spectrum-X
The Samsung announcement was only one part of Nvidia’s first day at OCP. The company also confirmed that Meta and Oracle will deploy its Spectrum-X Ethernet technology, a networking platform built specifically for AI data centers. Meta will integrate Nvidia’s Spectrum-4 switches with its open-source FBOSS stack inside next-generation Minipack3N hardware, while Oracle plans to use Spectrum-X to interconnect “giga-scale” AI clusters based on Nvidia’s upcoming Vera Rubin architecture.
Together, these deals underline Nvidia’s ambition to take ownership of not just compute but also the entire AI network that binds servers together. Wells Fargo analysts estimate that Spectrum-X revenue alone has surpassed a $10 billion annualized run rate, growing faster than Nvidia’s GPU business on a percentage basis. The firm notes that Spectrum-X now accounts for the majority of Nvidia’s networking revenue, which hit $7.25 billion last quarter — up 98% year-on-year.
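The two Wells Fargo figures are mutually consistent. A quick sanity check, taking the reported $7.25 billion quarterly networking revenue as given and treating the 50% floor implied by “majority” as an assumption:

```python
# Sanity-check Wells Fargo's Spectrum-X run-rate estimate against the
# reported quarterly networking revenue (all figures in billions of USD).
networking_quarterly = 7.25   # reported last-quarter networking revenue
spectrum_x_share = 0.5        # assumed lower bound for a "majority" share

spectrum_x_quarterly = networking_quarterly * spectrum_x_share
annualized_run_rate = spectrum_x_quarterly * 4  # run rate = quarterly x 4
print(f"Implied Spectrum-X run rate: >${annualized_run_rate:.2f}B/yr")
```

Even at the 50% floor, the implied annualized run rate is above $14B, comfortably past the $10 billion mark the analysts cite.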
Those numbers explain why Nvidia is doubling down on Ethernet. Spectrum-X is AI-optimized, designed to deliver 95% utilization across vast GPU clusters where generic Ethernet might only hit 60%. Wells Fargo describes it as “AI-optimized Ethernet over generic off-the-shelf Ethernet,” citing a 1.9 times performance advantage in cross-datacenter communication.
Even so, analysts caution against assuming Spectrum-X will crush the competition. Wells Fargo’s note argues that concerns over Arista Networks are overblown, pointing out that Arista remains embedded in Meta’s leaf-spine architecture and is already co-developing the “Scale-Up Ethernet” framework with Broadcom at OCP.
Vertical power and foundry diversification
Looked at together, Samsung’s arrival and Spectrum-X’s expansion point to the same endgame: Nvidia is looking to build a self-reinforcing ecosystem where every layer revolves around its technology. It’s the same strategy that has defined its GPU dominance, now extended into the structural DNA of its data center business.
This isn’t without precedent. As we reported on October 8, Nvidia has been hard at work blurring the line between chipmaker and financier, allegedly funding customers’ GPU purchases through equity and leaseback schemes. Its new foundry partnerships take that logic deeper into the supply chain, ensuring that even if manufacturing shifts geographically or politically, the intellectual property — the NVLink fabric itself — remains Nvidia’s to control.
Samsung, for its part, gains validation and diversification. Its most advanced nodes have trailed TSMC’s in yield and adoption, but being tapped as an NVLink partner gives it a way into the most lucrative segment of semiconductor manufacturing: custom AI silicon. It also positions Samsung as a potential counterweight in the foundry race, especially if US-China tensions or Taiwan supply constraints make a second source more valuable to hyperscalers.

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.