Marvell’s $5.5B Celestial AI acquisition expands its role in AI data center hardware — firm now positioned to deliver next-gen optical interconnects

A 'Marvell' sign sits in front of its offices.
(Image credit: Getty Images)

Marvell has confirmed plans to acquire Celestial AI in a deal worth up to $5.5 billion. The figure immediately places the deal among the most aggressive acquisitions any mid-tier silicon vendor has made in the current AI cycle, and it marks a decisive shift in how the company intends to compete against Nvidia, AMD, and Intel for the rest of the decade.

Celestial AI spent much of the last four years building a photonic interconnect platform intended to deliver high-bandwidth communication among accelerators without the electrical-signal penalties that limit today’s GPU-dense racks.

Higher bandwidth and larger memory pools

Training and inference clusters built around accelerators such as Nvidia Blackwell are fundamentally constrained by bandwidth. Even with advanced SerDes and high-speed electrical links, the growth of model sizes is pressing against the limits of conventional interconnect design. Celestial’s approach replaces long-reach electrical paths with photonic waveguides that can sustain high throughput at lower power, while maintaining signal integrity across the many parallel channels now required by large-model workloads.
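To see why bandwidth rather than raw compute becomes the binding constraint, a rough back-of-envelope calculation helps. The sketch below estimates how long an idealized ring all-reduce would take to synchronize gradients for a hypothetical trillion-parameter model at different per-GPU link speeds. Every figure in it is an illustrative assumption, not a vendor specification.

```python
# Back-of-envelope sketch: ideal ring all-reduce time at a given
# per-GPU link bandwidth. All numbers below are illustrative
# assumptions, not measured or vendor-published figures.

def ring_allreduce_seconds(params, bytes_per_param, link_gbps, num_gpus):
    """Ideal ring all-reduce: each GPU sends and receives
    2 * (n - 1) / n of the full gradient buffer."""
    payload_bytes = params * bytes_per_param * 2 * (num_gpus - 1) / num_gpus
    link_bytes_per_s = link_gbps * 1e9 / 8  # Gbit/s -> bytes/s
    return payload_bytes / link_bytes_per_s

params = 1e12          # hypothetical 1T-parameter model
bytes_per_param = 2    # FP16 gradients
gpus = 1024

# Compare an electrical NIC-class link with a hypothetical optical fabric.
for gbps in (400, 800, 3200):
    t = ring_allreduce_seconds(params, bytes_per_param, gbps, gpus)
    print(f"{gbps:>5} Gbit/s per GPU -> {t:.1f} s per all-reduce")
```

Because the communication volume scales with model size while link speeds improve far more slowly, each synchronization step takes proportionally longer as models grow; that is the squeeze photonic fabrics are meant to relieve.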

Marvell already supplies cloud providers with optical DSPs, PAM4 transceivers, and network-interface controllers. Bringing Celestial’s optical compute-fabric architecture into that portfolio creates an opportunity to span the hierarchy from server-rack switching down to chip-adjacent links. If the company can commercialize Celestial’s technology at scale, it could offer an alternative path to deploy large accelerator clusters without relying solely on GPU-centric fabrics. That shifts the company from a supplier of supporting components into a core enabler of system architecture.

Celestial’s technology also intersects directly with the memory debate in AI. Today’s accelerators hinge on ever-larger pools of HBM, a trend that pushes cost, thermals, and packaging complexity upward. Photonic interconnects promise to extend memory coherence over greater distances, allowing external memory and disaggregated resources to behave more like local pools. It is a best-case scenario that depends on manufacturing maturity, but it hints at why Marvell is investing so heavily. If coherent optical fabrics reach production scale, they could change the balance between compute and memory in future data center designs.
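As a toy illustration of what disaggregation could mean in practice, the naive capacity model below adds an even share of a rack-level memory pool to each accelerator's fixed local HBM. The sizes and the pool-sharing scheme are assumptions chosen purely for illustration, not Celestial or Marvell figures.

```python
# Illustrative sketch (all figures assumed): how a coherent optical
# fabric could change the memory reachable by one accelerator.
# Local HBM is fixed per package; a disaggregated rack-level pool
# is shared evenly across accelerators when the fabric can reach it.

def memory_per_accelerator_gb(hbm_gb, pool_gb, accelerators, pool_reachable):
    """Naive capacity model: local HBM plus an even share of a
    rack-level pool, if a coherent fabric makes the pool reachable."""
    share = pool_gb / accelerators if pool_reachable else 0
    return hbm_gb + share

local_only = memory_per_accelerator_gb(192, 0, 8, False)    # HBM-only node
with_pool = memory_per_accelerator_gb(192, 8192, 8, True)   # plus an 8 TB rack pool

print(f"local HBM only: {local_only:.0f} GB")
print(f"with shared pool: {with_pool:.0f} GB")
```

The model ignores latency and bandwidth differences between local and pooled memory, which is precisely where manufacturing maturity and fabric performance will decide whether the best-case scenario holds.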

Positioning against Nvidia, AMD, and Intel

For all its ambition, Marvell faces a steep integration challenge. Nvidia’s dominance comes from tight control of hardware, software, networking, packaging, and system-level deployments. Every new GPU generation extends that advantage by increasing the pull of CUDA and NVLink.

AMD’s MI300 relies on unified HBM to give CPU and GPU cores shared access to the same memory pool. In theory, this simplifies data movement and avoids explicit copies between host and device. Meanwhile, Intel’s strategy around Gaudi depends on creating price-efficient training nodes and leveraging its packaging scale, even as its photonics roadmap remains in flux. Falcon Shores, a GPU for AI and HPC, was canceled in June in favor of a new rack-level design, Jaguar Shores.

Marvell cannot match those companies with accelerators of its own, and the deal does not suggest the company intends to build one. Instead, the acquisition strengthens its position as an independent connectivity vendor in a market where accelerators need increasingly specialized fabrics to operate at scale. The idea seems to be that hyperscalers will continue designing custom silicon for training and inference, and that those systems will require neutral suppliers of the optical and electrical pathways that tie them together.

Nvidia’s own push into optics underscores the same trend. Its research into optical NVLink and PCIe successors points toward a future where bandwidth constraints limit cluster scale long before compute does. Marvell’s purchase of Celestial can therefore be read as a bet that the necessary solutions will not come solely from GPU or CPU vendors, and that the interconnect layer is becoming a central focal point of modern data center architecture.

Optical fabrics could reshape next-gen AI systems

The practical impact of this deal depends on how quickly Marvell can move Celestial’s technology from prototype to production. Fabric maturity, packaging workflows, reliability under data center thermals, and integration with existing networking stacks will all determine whether optical compute fabrics can be adopted at scale.

If Marvell delivers, hyperscalers could deploy racks in which memory expansion, compute scaling, and multi-node communication rely more heavily on photonics. That would reduce the power lost to electrical I/O and ease congestion in high-density GPU configurations. It could also feasibly allow disaggregated clusters to operate more like monolithic systems.
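The power argument can be made concrete with rough arithmetic. The sketch below converts energy-per-bit figures into rack-level watts for a hypothetical aggregate fabric bandwidth; the 5 pJ/bit and 1 pJ/bit values are ballpark assumptions of the kind often cited for long-reach electrical SerDes and chip-adjacent photonic links, used here purely for illustration.

```python
# Hedged sketch: rack-level I/O power at assumed energy-per-bit
# figures. The pJ/bit values and fabric bandwidth are illustrative
# assumptions, not measured specifications from any vendor.

def io_power_watts(aggregate_tbps, picojoules_per_bit):
    """Power drawn by interconnect I/O for a given aggregate
    bandwidth (Tbit/s) at a given energy cost per bit (pJ/bit)."""
    bits_per_s = aggregate_tbps * 1e12
    return bits_per_s * picojoules_per_bit * 1e-12  # pJ -> J per second

fabric_tbps = 100  # hypothetical aggregate rack fabric bandwidth

electrical = io_power_watts(fabric_tbps, 5.0)  # assumed electrical SerDes cost
optical = io_power_watts(fabric_tbps, 1.0)     # assumed photonic link cost

print(f"electrical I/O: {electrical:.0f} W")
print(f"optical I/O:    {optical:.0f} W")
```

Scaled across hundreds of racks, a several-fold reduction in energy per bit translates into a meaningful share of a data center's power budget, which is why the efficiency claim matters as much as the bandwidth claim.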

Multi-billion-dollar acquisitions in the semiconductor sector usually track emerging choke points. Nvidia’s Mellanox acquisition, for example, addressed network performance when distributed training began to break existing fabrics. AMD’s Pensando deal addressed per-node networking and security. Celestial AI fits a pattern in which the bottlenecks have shifted to chip-to-chip communication and the energy cost of electrical bandwidth.

If those bottlenecks dominate the next five years, Marvell has just bought one of the few companies attempting to solve them with dedicated hardware rather than iterative tweaks to existing designs.

Luke James
Contributor

Luke James is a freelance writer and journalist. Although his background is in law, he has a personal interest in all things tech, especially hardware and microelectronics, and anything regulatory.