PCIe 6.0 over optical cables demonstrated in custom data center solution
Nubis and Alphawave demo PCIe 6.0 x16 link without retimers.
Nubis Communications and Alphawave Semi have teamed up to showcase a PCIe 6.0 interconnect running over an optical link. The demonstration pairs Alphawave's PCIe 6.0 controller with a Nubis linear optical engine, and is primarily meant to prove the companies' ability to enable next-generation data center connectivity at 64 GT/s.
The demonstration features an Alphawave Semi PCIe subsystem, based on the PiCORE Controller IP and PipeCORE PHY, that drives and receives PCIe 6.0 traffic through a Nubis XT1600 linear optical engine. This setup achieves a PCIe 6.0 x8 optical link at 64 GT/s per fiber, without retimers. The demo underscores the technical viability and high-speed capability of this custom PCIe-over-optics solution.
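As a rough sanity check on those figures, here is a back-of-the-envelope sketch of the raw bandwidth such links carry. It deliberately ignores FLIT framing and FEC overhead, which trim a few percent off the usable rate; the lane counts and 64 GT/s rate come from the article, while the names are just illustrative:

```python
# Back-of-the-envelope raw PCIe 6.0 bandwidth, ignoring FLIT/FEC overhead.
# PCIe 6.0 runs each lane at 64 GT/s; one transfer carries one bit, and
# PAM4 signaling packs 2 bits per symbol (so the line runs at 32 GBaud).

LANE_RATE_GBPS = 64  # gigabits per second per lane at PCIe 6.0 speeds

def raw_bandwidth_gbytes(lanes: int) -> float:
    """Raw unidirectional bandwidth in GB/s for a given lane count."""
    return lanes * LANE_RATE_GBPS / 8  # divide by 8 bits per byte

print(raw_bandwidth_gbytes(8))   # x8  demo link  -> 64.0 GB/s per direction
print(raw_bandwidth_gbytes(16))  # full x16 link  -> 128.0 GB/s per direction
```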
"Our high level of integration with 16 lanes full-duplex in a single low-power, low-latency optical engine is a great match to the maximum bandwidth of PCIe x16 for next-generation compute and storage deployments," said Scott Schube, VP of Marketing at Nubis Communications. "Our demonstration of the Nubis XT1600 linear optical engine and Alphawave Semi’s PCIe 6.0 Controller and PHY IP showcases the viability of a PCIe 6.0 x8 link over optical fiber at 64 GT/s."
Compared to traditional copper cables, optical PCIe links can significantly extend reach without sacrificing bandwidth. This capability is crucial for supporting larger AI/ML server clusters distributed across multiple nodes, and it paves the way for new disaggregated network architectures.
Sampling of the Nubis XT1600 linear optical engine has started, and interested parties can contact the company. It should be noted, however, that the Nubis and Alphawave offering is a custom solution unrelated to PCI-SIG's optical PCIe initiative.
"AI applications are reshaping data center networks, with hyperscalers deploying increasingly large clusters of disaggregated servers distributed over longer distances," said Tony Chan Carusone, CTO at Alphawave Semi. "This shift has generated heightened interest in PCIe over Optics among several of our customers. Through our collaboration with Nubis, we’re pleased to demonstrate how we’re leveraging Alphawave Semi’s leadership in connectivity IP and silicon to enable PCIe optical connectivity solutions that accelerate high-performance AI computing and data infrastructure."
Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.
KraakBal: The title says x16 but the article says x8. Does the reporter even know what PCIe is?
bit_user:
KraakBal said: "The title says x16 but the article says x8."
The article says "x8 per fiber", which implies two fibers are needed for a x16 link. It goes on to say: "Our high level of integration with 16 lanes full-duplex in a single low-power, low-latency optical engine ..." So, it sounds like they indeed do have a full x16 lane solution.
KraakBal said: "Does the reporter even know what PCIe is?"
Very much.
bit_user: What I wonder is whether we can anticipate such high-speed optical being routed through PCBs, or will it always require optical cables?
strobolt:
bit_user said: "What I wonder is whether we can anticipate such high-speed optical being routed through PCBs, or will it always require optical cables?"
I would believe so (based on no real information). My reasoning is that it would just require thicker copper on the PCB to enable that. In the article they were also saying that optical enables greater lengths *without* sacrificing bandwidth. I'm reading that as the copper wouldn't yet be at its maximum bandwidth (in theoretical physical terms, if such exist), and the problem is mostly about controllers, use cases, and costs that prohibit implementation in consumer-grade products.
bit_user:
strobolt said: "I would believe so (based on no real information). My reasoning is that it would just require thicker copper on the PCB to enable that."
So, that's not at all what I meant. I wanted to know if you could put glass traces in a PCB and route such high-speed optical links through it.
strobolt said: "In the article they were also saying that optical enables greater lengths *without* sacrificing bandwidth. I'm reading that as the copper wouldn't yet be at its maximum bandwidth (in theoretical physical terms, if such exist), and the problem is mostly about controllers, use cases, and costs that prohibit implementation in consumer-grade products."
I'm not an electrical engineer, but I've been following the development of PCIe pretty closely. They apparently had such issues with bandwidth beyond PCIe 5.0 that, instead of doubling the frequency for 6.0, they used PAM4. That's not a free lunch, either, because it requires a better signal-to-noise ratio. That, in turn, will tend to require more expensive motherboards with more layers, in order to reduce interference. And this is already presuming you have retimers, I'm pretty sure (since they seem to be somewhat ubiquitous).
We haven't even talked about power, but that's another issue with trying to push ever higher frequencies over copper. AMD has stated that embracing optical interconnects will become a necessity for continuing to scale performance over the next decade. Otherwise, the poor efficiency of scaling copper links would eat up too much of your power budget.
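To make the PAM4 trade-off described above concrete, here is a small illustrative sketch, assuming idealized, noise-free eyes: PAM4 carries 2 bits per symbol, so PCIe 6.0 doubles the bit rate of PCIe 5.0 at the same 32 GBaud line rate, but each eye shrinks to a third of the full swing, which costs roughly 9.5 dB of ideal SNR.

```python
import math

# PCIe 5.0 uses NRZ (1 bit/symbol); PCIe 6.0 uses PAM4 (2 bits/symbol).
def symbol_rate_gbaud(bit_rate_gtps: float, bits_per_symbol: int) -> float:
    """Symbol rate needed to carry a given per-lane bit rate."""
    return bit_rate_gtps / bits_per_symbol

print(symbol_rate_gbaud(32, 1))  # PCIe 5.0, NRZ  -> 32.0 GBaud
print(symbol_rate_gbaud(64, 2))  # PCIe 6.0, PAM4 -> 32.0 GBaud (same clock)

# With four levels instead of two, each PAM4 eye spans only 1/3 of the
# full voltage swing, so the ideal SNR penalty vs. NRZ is 20*log10(3).
print(round(20 * math.log10(3), 2))  # -> 9.54 dB
```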
TJ Hooker:
bit_user said: "The article says 'x8 per fiber', which implies two fibers are needed for a x16 link."
Neither this article nor the source press release contains the quote "x8 per fiber"; they refer to a "PCIe 6.0 x8 link". The only per-fiber metric they mention is "64 GT/s per fiber", which is just the per-lane bit rate of PCIe 6.0.
bit_user said: "It goes on to say: 'Our high level of integration with 16 lanes full-duplex in a single low-power, low-latency optical engine ...' So, it sounds like they indeed do have a full x16 lane solution."
Yeah, this explanation does seem the most likely, i.e. they're touting a solution beyond what was presented here. I do think the way they presented it is sort of confusing, though. Like 'by demonstrating an x8 PCIe link, we show we can support a x16 PCIe link!' If it's easy to scale their technology up to a x16 link, why not use that for the demo?
bit_user:
TJ Hooker said: "Neither this article nor the source press release contains the quote 'x8 per fiber'; they refer to a 'PCIe 6.0 x8 link'. The only per-fiber metric they mention is '64 GT/s per fiber', which is just the per-lane bit rate of PCIe 6.0."
I was paraphrasing, in order to simply and clearly relate it to what the poster said.
Okay, so the entire quote is:
"This setup achieves a PCIe 6.0 x8 optical link at 64 GT/s per fiber, without retimers."
Upon rereading, I see that the lack of commas makes it somewhat ambiguous exactly what's being said.
The original press release says:
“Our high level of integration with 16 lanes full-duplex in a single low-power, low-latency optical engine is a great match to the maximum bandwidth of PCIe x16 for next-generation compute and storage deployments,” said Scott Schube, VP of Marketing at Nubis Communications. “Our demonstration of the Nubis XT1600 linear optical engine and Alphawave Semi’s PCIe 6.0 Controller and PHY IP showcases the viability of a PCIe® 6.0 x8 link over optical fiber at 64 GT/s.”
Okay, so I read that as saying:
- Their solution is designed to handle PCIe 6.0 x16 with a single optical engine.
- They demonstrated it running at x8.
TJ Hooker said: "Yeah, this explanation does seem the most likely, i.e. they're touting a solution beyond what was presented here. I do think the way they presented it is sort of confusing, though. If it's easy to scale their technology up to a x16 link, why not use that for the demo?"
I agree with you on all points. My guess about why they only demo'd it at x8 is either due to the complexity of prototyping at x16 or maybe due to some bug or issue they expect to be resolved in the final product. Or, perhaps they lacked some infrastructure needed to demo it at full x16?
In any case, I think it's not a bad snapshot of the current tech. For folks like us, that's enough.