UALink has Nvidia's NVLink in the crosshairs — final specs support up to 1,024 GPUs with 200 GT/s bandwidth

(Image credit: Google)

One of the key aims of UALink is to create a competitive connectivity ecosystem for AI accelerators that can rival Nvidia's established NVLink, the proprietary interconnect that lets the green company build rack-scale, AI-optimized systems such as the Blackwell NVL72. With the arrival of UALink 1.0, companies like AMD, Broadcom, Google, and Intel will also be able to build similar solutions using industry-standard technology rather than Nvidia's proprietary links, which should translate into lower costs.

The Ultra Accelerator Link Consortium on Tuesday officially published the final UALink 1.0 specification, which means that members of the group can now proceed with tape-outs of actual chips supporting the new technology. The new interconnect targets AI and HPC accelerators and is backed by a broad set of industry players — including AMD, Apple, Broadcom, and Intel — and promises to become the de facto standard for connecting such hardware.

The UALink 1.0 specification defines a high-speed, low-latency interconnect for accelerators, supporting a maximum bidirectional data rate of 200 GT/s per lane, with signaling at 212.5 GT/s to accommodate forward error correction and encoding overhead. Links can be configured as x1, x2, or x4, with a four-lane link achieving up to 800 GT/s in each of the transmit and receive directions.
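As a back-of-the-envelope check of the figures above, the per-link rates and the share of the raw signaling rate lost to error correction and encoding can be sketched as follows (the numbers come from the article; the constant names are illustrative, not taken from the spec):

```python
# Per-lane rates quoted in the article (names are illustrative, not from the spec).
EFFECTIVE_GT_S_PER_LANE = 200.0   # usable data rate per lane, per direction
SIGNALING_GT_S_PER_LANE = 212.5   # raw rate, incl. FEC and encoding overhead

# Supported link widths: x1, x2, x4.
for lanes in (1, 2, 4):
    per_direction = lanes * EFFECTIVE_GT_S_PER_LANE
    print(f"x{lanes}: {per_direction:.0f} GT/s per direction")

# Share of the raw signaling rate consumed by FEC/encoding overhead:
overhead = 1 - EFFECTIVE_GT_S_PER_LANE / SIGNALING_GT_S_PER_LANE
print(f"overhead: {overhead:.1%}")   # roughly 5.9%
```

The x4 result (800 GT/s per direction) matches the maximum quoted in the spec summary.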

One UALink system supports up to 1,024 accelerators (GPUs or other processors) connected through UALink switches, which assign one port and a 10-bit unique identifier to each accelerator for precise routing. UALink cable lengths are optimized for under 4 meters, enabling sub-1 µs round-trip latency with 64B/640B payloads, and the links deliver deterministic performance across one to four racks.
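It is worth noting that the 10-bit identifier and the 1,024-accelerator ceiling line up exactly, since 2^10 = 1,024. A one-line sanity check (variable names are illustrative):

```python
# The article says each accelerator gets a 10-bit unique identifier.
# Check that 10 bits exactly cover the 1,024-accelerator maximum.
ID_BITS = 10
max_addressable_accelerators = 2 ** ID_BITS
print(max_addressable_accelerators)  # 1024
```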

(Image credit: UALink)
Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    It'll be interesting to see just how long Nvidia keeps its lead in AI. I think others don't need to beat them on performance, but just come close, offer similar scalability, and beat them on perf/$. With UALink, that's one less point on the board in Nvidia's favor.
    Reply
  • atomicWAR
    It'd be nice to see someone, anyone, humble Nvidia a little bit, as their greed knows no bounds. Hopefully, with more competition, Nvidia will need to re-evaluate its pricing in both AI and gaming workloads. The AI bubble won't last forever, and Nvidia has to know that.

    bit_user said:
    It'll be interesting to see just how long Nvidia keeps its lead in AI. I think others don't need to beat them on performance, but just come close, offer similar scalability, and beat them on perf/$. With UALink, that's one less point on the board in Nvidia's favor.

    This has been on my mind a lot as of late. Nvidia has had so much success over the years, I feel like they got overly confident. Their prices seem to indicate as much. I suspect someone will challenge them soon in AI and cost them market share. But I could be wrong. Intel led in the CPU server/consumer space for decades before their foundry issues tripped them up, right as AMD really began to challenge them again. After AMD's x64 coup d'état moment, which helped sink Itanium with the introduction of Opteron CPUs (Athlon 64 CPUs for consumers), the Bulldozer-through-Excavator era did a lot of damage to their marketshare/mindshare. Point being, there's no telling how long Nvidia might hold their king-of-the-hill position. It could be a couple years or a couple decades. Either way, they need to be humbled in a big way soon for the good of consumers. Their pricing is getting a *little* <cough cough> out of control.
    Reply
  • valthuer
    atomicWAR said:
    Either way they need to be humbled in a big way soon for the good of consumers. Their pricing is getting a *little* <cough cough> out of control.

    Yeah, I wouldn’t mind Nvidia having another DeepSeek moment. Consumers would definitely benefit from that.
    Reply
  • bit_user
    atomicWAR said:
    Point being no telling how long Nvidia might hold their king of the hill position. It could be a couple years or a couple decades. Either way they need to be humbled in a big way soon for the good of consumers. Their pricing is getting a *little* <cough cough> out of control.
    Nvidia needs to move beyond the general-purpose compute architectures they've been using for AI. How quickly and successfully they do that will help determine their ability to stay on top. They have their NVDLA NPUs for doing inference workloads in their embedded SoCs. So, they do "get it".

    They're also becoming heavily power/cooling-limited. That's adding to the cost of their solutions and could add more delays (there's some suggestion it was part of Blackwell's holdups). So, we might be nearing a point where they stumble in the face of a more efficient-dataflow architecture, like most of the other NPUs out there.

    One thing I can say with a fair degree of certainty: it doesn't seem like AMD will be the one to usurp Nvidia's dominance in AI. AMD is making its usual mistake of trying to beat Nvidia at its own game and they're fumbling the ball quite badly.
    https://semianalysis.com/2024/12/22/mi300x-vs-h100-vs-h200-benchmark-part-1-training/
    Maybe UDNA will be a game changer. I wouldn't bet on it, but we'll see.
    Reply