Asus brings Nvidia’s GB300 Blackwell Ultra “desktop superchip” to workstations — plain-looking desktop more powerful than most server racks, features up to 784GB of coherent memory, 20 PFLOPS AI performance
My gaming PC looks better than that.

Asus has quietly launched the ExpertCenter Pro ET900N G3, one of the first desktop systems powered by Nvidia’s GB300 Grace Blackwell Ultra “desktop superchip.” This marks a notable shift for Nvidia, which has historically reserved its highest-end AI hardware for its own DGX Station lineup. With Blackwell Ultra, the company is partnering with OEMs like Asus, Lambda, and others to bring AI-focused workstations to a broader market.
The GB300 Desktop Superchip combines an Arm-based Grace CPU with Nvidia’s new Blackwell Ultra GPU, the same pairing that powers the DGX Station announced earlier this year at GTC 2025. The platform delivers up to 784 GB of unified LPDDR5X and HBM3E memory alongside next-gen Tensor Cores with enhanced FP4 precision, making it well suited to AI training, inference, and large-scale model development. Asus’s ExpertCenter mirrors these design principles with 20 PFLOPS of AI performance, ConnectX-8 SuperNIC networking (800 Gb/s) for high-speed scaling, and support for Nvidia’s DGX OS.
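For a rough sense of what those headline numbers mean in practice, here is a minimal back-of-the-envelope sketch in Python. It assumes the 784 GB splits into roughly 288 GB of HBM3E on the GPU and 496 GB of LPDDR5X attached to Grace, as Nvidia quoted for the DGX Station; Asus has not confirmed the ExpertCenter’s exact split, and the model-size estimate ignores KV cache, activations, and runtime overhead.

```python
# Back-of-the-envelope sizing for a GB300-class workstation.
# Assumption: the 784 GB total splits into ~288 GB HBM3E (GPU-attached)
# and ~496 GB LPDDR5X (Grace-attached), per Nvidia's DGX Station figures.
HBM3E_GB = 288
LPDDR5X_GB = 496
coherent_total_gb = HBM3E_GB + LPDDR5X_GB  # 784 GB, matching the spec sheet

# Rough upper bound on model size that fits in coherent memory at FP4
# (4 bits = 0.5 bytes per weight), ignoring KV cache and other overheads.
bytes_per_fp4_weight = 0.5
max_params = coherent_total_gb * 1e9 / bytes_per_fp4_weight

print(f"Coherent memory: {coherent_total_gb} GB")
print(f"~{max_params / 1e12:.1f} trillion FP4 weights fit, before overheads")
```

Run as written, the sketch lands at roughly 1.6 trillion FP4 parameters as a theoretical ceiling, which is the kind of headroom that separates this class of workstation from a conventional GPU desktop.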
Moreover, this unassuming desktop offers serious expandability: three PCIe x16 slots for additional GPUs or accelerators, three M.2 slots for SSD storage, and power delivery via standard ATX and EPS12V connectors, plus three 12V-2×6 connectors capable of delivering up to 1,800W of GPU power alone. In theory, users could stack additional RTX Blackwell cards to further increase compute power. Nvidia has not revealed complete GB300 specifications or pricing, but its SXM-based compute GPUs alone cost tens of thousands of dollars, which puts both the DGX Station and the ExpertCenter firmly in five-figure territory.
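The 1,800W figure follows directly from the connector count, assuming each 12V-2×6 connector carries its standard 600W maximum; how Asus actually apportions that budget between the GB300 and add-in cards has not been detailed. A quick sketch:

```python
# Connector math behind the quoted 1,800 W GPU power budget.
# Assumption: each 12V-2x6 connector is run at its 600 W spec maximum;
# the split between the GB300's GPU and add-in cards is not confirmed.
CONNECTOR_RATING_W = 600
NUM_CONNECTORS = 3

gpu_power_budget_w = CONNECTOR_RATING_W * NUM_CONNECTORS
print(f"Total 12V-2x6 power budget: {gpu_power_budget_w} W")  # 1800 W
```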
The ExpertCenter G3 arrives at a time when Nvidia’s GB300 Blackwell Ultra is scaling across the AI industry. Dell recently deployed the first GB300 NVL72 clusters at CoreWeave, featuring 72 Blackwell Ultra GPUs and 36 Grace CPUs per rack and delivering 1.1 exaFLOPS of FP4 inference performance, 50% more than the GB200 NVL72. Volume shipments of GB300 servers are reportedly expected to ramp up by September 2025, and rumors indicate that Nvidia’s decision to reuse the Bianca motherboard design from GB200 has eased supply-chain bottlenecks and accelerated production.
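For perspective, here is a short sketch of what those rack-level numbers imply per GPU, assuming the 1.1 exaFLOPS is spread evenly across the 72 Blackwell Ultra GPUs and taking the quoted 50% uplift at face value; Nvidia does not publish the breakdown this way, so treat these as rough estimates.

```python
# Sanity-checking the rack-level numbers cited above.
# Assumptions: 1.1 exaFLOPS of FP4 inference per GB300 NVL72 rack is
# spread evenly across 72 GPUs, and the GB200 NVL72 baseline is simply
# derived from the stated 50% uplift.
RACK_FP4_EXAFLOPS = 1.1
GPUS_PER_RACK = 72
UPLIFT_OVER_GB200 = 1.5  # "50% more"

per_gpu_pflops = RACK_FP4_EXAFLOPS * 1000 / GPUS_PER_RACK
gb200_rack_exaflops = RACK_FP4_EXAFLOPS / UPLIFT_OVER_GB200

print(f"Implied per-GPU FP4 throughput: ~{per_gpu_pflops:.1f} PFLOPS")      # ~15.3
print(f"Implied GB200 NVL72 baseline:   ~{gb200_rack_exaflops:.2f} exaFLOPS")  # ~0.73
```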
Interestingly, Asus also introduced the Ascent GX10, a compact desktop based on the smaller GB10 Grace Blackwell platform. While not as powerful as the GB300-equipped ExpertCenter, the GX10 hints at Nvidia’s intent to bring its AI hardware into more accessible form factors. This launch aligns with Nvidia’s strategy of collaborating with OEMs like Asus, Dell, and Lambda to move beyond the exclusive DGX lineup, making high-performance AI systems available to a broader range of professionals. With the rapid evolution of AI hardware, these partnerships could pave the way for consumer-facing AI workstations, especially as future platforms like Rubin push GPU density, power efficiency, and thermal design to new levels.
With Asus joining forces with Nvidia’s Blackwell Ultra push, the ExpertCenter Pro ET900N G3 is more than just another AI workstation—it signals Nvidia’s growing ambitions beyond GPUs. While AMD and Intel have long dominated the CPU landscape, Nvidia is edging into their territory with the ARM-based Grace processor, built specifically for AI and HPC workloads rather than as a direct x86 competitor. Grace isn’t designed to replace general-purpose CPUs; instead, it complements Nvidia’s GPU dominance by enabling unified, high-bandwidth memory and compute performance that traditional setups can’t easily match. This doesn’t put Nvidia in a head-to-head battle with AMD and Intel—at least not yet—but it does carve out a segment of the market that could gradually erode their share in AI-focused servers and workstations.

Hassam Nasir is a die-hard hardware enthusiast with years of experience as a tech editor and writer, focusing on detailed CPU comparisons and general hardware news. When he’s not working, you’ll find him bending tubes for his ever-evolving custom water-loop gaming rig or benchmarking the latest CPUs and GPUs just for fun.
usertests: My gaming PC looks better than that.
I prefer plain looking, metallic or black plastic cases with no glass or RGB.
This is a Strix Halo AI crusher but obviously in a different price class. I'm wondering how well they are doing on the software side compared to AMD. I'll probably take a look at a review.
In 15 years, normal users could have this much memory for $100 if 3D DRAM takes off. Although we will see an incursion of soldered RAM into "desktops" during that time.
Pierce2623: To call this chip “more powerful than most server racks,” you’d have to be comparing its GPU against CPU-only racks in some GPU-optimized AI workloads. Considering nobody uses CPU-only racks for AI, it would be a dumb comparison.
jessica67: That’s an insane leap. 784GB of coherent memory and 20 PFLOPS of AI power in a workstation is next-level. Perfect for high-end AI workflows, 3D modeling, or even advanced packaging design like rendering custom 3D card boxes. Curious to see how creative industries leverage this kind of power for both speed and detail in visualization.