PCIe at the Speed of Light: PCI-SIG Forms Optical Workgroup


As the data transfer rates supported by the PCIe protocol increase, it is becoming increasingly hard to sustain them over copper wires across longer distances. On Wednesday, PCI-SIG announced the formation of the PCI-SIG Optical Workgroup, which will prepare PCIe for use over optical interconnects. Over time, PCIe optical interconnects will be particularly beneficial for artificial intelligence, datacenter, and high-performance computing applications.

Implementing extended PCIe 5.0 (32 GT/s) connections in servers requires PCIe 5.0 retimers because signal degradation over distance generally limits the trace lengths of PCIe 5.0 interconnects compared with older specifications. The achievable distance varies with the choice of materials and environmental conditions, but in general, PCIe 5.0 retimers are common in modern AI and HPC machines. The challenge of maintaining signal integrity over copper interconnects becomes even more pronounced with the upcoming PCIe 6.0 at 64 GT/s and PCIe 7.0 at 128 GT/s, which is why PCI-SIG is exploring alternatives in the form of optical interconnects.
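
For a rough sense of why each generation puts more strain on copper traces, the back-of-envelope sketch below estimates usable per-lane and x16 throughput from the raw signaling rate and encoding efficiency of each generation. The efficiency figures (128b/130b for PCIe 5.0 and roughly 242/256 FLIT efficiency for PCIe 6.0 and 7.0) are assumptions drawn from the published specifications; real links lose additional bandwidth to packet and link-management overhead.

```python
# Back-of-envelope PCIe throughput per direction. Uses published raw
# signaling rates and assumed encoding efficiencies only; packet headers,
# flow control, and other link overheads are ignored.
GENERATIONS = {
    "PCIe 5.0": (32, 128 / 130),   # 32 GT/s, NRZ, 128b/130b encoding
    "PCIe 6.0": (64, 242 / 256),   # 64 GT/s, PAM4, FLIT mode
    "PCIe 7.0": (128, 242 / 256),  # 128 GT/s, PAM4, FLIT mode
}

def lane_throughput_gbs(raw_gt_s: float, efficiency: float) -> float:
    """Approximate usable GB/s per lane, per direction."""
    return raw_gt_s * efficiency / 8  # GT/s -> GB/s (8 bits per byte)

for name, (rate, eff) in GENERATIONS.items():
    per_lane = lane_throughput_gbs(rate, eff)
    print(f"{name}: ~{per_lane:.2f} GB/s per lane, ~{16 * per_lane:.0f} GB/s for x16")
```

The doubling of the per-lane signaling rate at each step is what shortens the distance a signal can travel cleanly over a PCB before a retimer, or potentially an optical link, becomes necessary.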

The workgroup aims to remain optical technology-agnostic, accommodating various optical technologies and potentially creating technology-specific form factors. Optical interconnects could bring numerous benefits to PCIe and CXL connections, including improved performance, lower power consumption (PCIe 5.0 retimers consume well over 10W), extended reach, and reduced latency.

"Optical connections will be an important advancement for PCIe architecture as they will allow for higher performance, lower power consumption, extended reach and reduced latency," said Nathan Brookwood, a Research Fellow at Insight 64. "Many data-demanding markets and applications such as Cloud and Quantum Computing, Hyperscale Data Centers and High-Performance Computing will benefit from PCIe architecture leveraging optical connections."

While existing PCI-SIG workgroups continue to work towards a 128 GT/s data rate for the PCIe 7.0 specification, the newly formed Optical Workgroup will focus on making the PCIe architecture more compatible with optical technologies. At this point, PCI-SIG has not disclosed which version of PCIe will take advantage of optical interconnects.

"We have seen strong interest from the industry to broaden the reach of the established, multi-generational and power-efficient PCIe technology standard by enabling optical connections between applications," said PCI-SIG President and Chairperson Al Yanes. "PCI-SIG welcomes input from the industry and invites all PCI-SIG members to join the Optical Workgroup, share their expertise and help set specific workgroup goals and requirements."

Anton Shilov
Contributing Writer

Anton Shilov is a contributing writer at Tom’s Hardware. Over the past couple of decades, he has covered everything from CPUs and GPUs to supercomputers, and from modern process technologies and the latest fab tools to high-tech industry trends.

  • bit_user
    This part bears repeating:
    "PCI-SIG welcomes input from the industry and invites all PCI-SIG members to join the Optical Workgroup, share their expertise and help set specific workgroup goals and requirements."
    No goals, requirements, or timeline. Whatever they come up with most likely won't land before PCIe 7.0. It's even more uncertain if or when it could reach desktop or laptop PCs.
  • Kamen Rider Blade
    We still don't have "External OCuLink" yet.
  • InvalidError
    bit_user said:
    No goals, requirements, or timeline. Whatever they come up with most likely won't land before PCIe 7.0. It's even more uncertain if or when it could reach desktop or laptop PCs.
    We've reached the point where pushing more bandwidth over PCBs may no longer be economically viable. I doubt laptops will bother with faster PCIe or optical since everything is so tightly integrated and power-conscious for the most part.

    For desktops though, I could imagine optical connections being handy to break systems into 2-4 major components: 1) a fully self-contained CPU module, 2) a front IO module, 3) a rear IO module, and 4) a motherboard (optional) for desktop-style internal expansion. If you don't need internal expansion, you get the CPU, front/back IO and stop there. Want to upgrade the CPU or switch from x86 to RISC-V? Only need to swap out the CPU module.

    Kamen Rider Blade said:
    We still don't have "External OCuLink" yet.
    If copper is no longer deemed good enough for internal connections, you can stick a fork in anything external.
  • Eximo
    400Gbps Ethernet is out there. You got $300 for five meters of fiber and $1800 for two transceivers? Couldn't find any 400Gbps NICs for sale, but 200Gbps ones are only $4,000!

    Still, it's possible that optical setups will allow for PCIe x1 at 7.0 speeds at some point that isn't cost-prohibitive. Got to have that 16K VR experience after all.
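
    For a rough sense of scale on those numbers, the sketch below compares raw line rates only; it assumes the ~242/256 FLIT efficiency of PCIe 7.0 and ignores Ethernet framing and PCIe packet overheads (both simplifications for illustration, not figures from the thread).

    ```python
    # Raw line-rate comparison only; Ethernet framing and PCIe packet
    # overheads are ignored, and 242/256 FLIT efficiency is assumed.
    ETH_400G_GB_S = 400 / 8                  # 400GbE: ~50 GB/s raw
    PCIE7_LANE_GB_S = 128 * (242 / 256) / 8  # PCIe 7.0: ~15.1 GB/s per lane

    for lanes in (1, 2, 4):
        print(f"PCIe 7.0 x{lanes}: ~{lanes * PCIE7_LANE_GB_S:.1f} GB/s "
              f"vs 400GbE ~{ETH_400G_GB_S:.0f} GB/s")
    ```

    By that crude measure, a single PCIe 7.0 lane lands between today's 100GbE and 200GbE links, and an x4 link would already exceed the raw rate of 400GbE.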
  • bit_user
    InvalidError said:
    We've reached the point where pushing more bandwidth over PCBs may no longer be economically viable. I doubt laptops will bother with faster PCIe or optical since everything is so tightly integrated and power-conscious for the most part.
    I was going to say that the only way I see laptops using it would be for power-savings. That would depend on silicon photonics being markedly more power-efficient, though I don't know if that's feasible.
  • InvalidError
    Eximo said:
    400Gbps ethernet is out there. You got $300 for five meters of fiber and $1800 for two transceivers? Couldn't find any 400Gbps NICs for sale, but 200GBps ones are only $4,000!
    The main reason specialty fiber cables cost so much is low volume. The cable itself would likely come down to $15-20 if everyone needed some for everything. For Tbps-scale optical PCIe, we'd likely be talking on-package photonics to eliminate in-between interfaces. No NIC, no PCIe between the CPU and NIC, no transceiver to plug into the NIC, the SoC talks whatever the on-package bus might be straight to the optical MAC-PHY stack. How cheap the interface may get would be entirely dependent on how expensive the PHY might be.

    bit_user said:
    I was going to say that the only way I see laptops using it would be for power-savings. That would depend on silicon photonics being markedly more power-efficient, though I don't know if that's feasible.
    How much stuff in a laptop or even a desktop genuinely requires more bandwidth than 4.0x16? Not really anything at the moment for a remotely normal person; the GPUs that might gain the most from it only have an x8 or worse interface.

    As an SI, would you shoulder all of the risks and additional manufacturing considerations that would come with handling CPU, GPU and IO chips that have glass fiber pigtails attached while assembling a laptop, and having to splice those together afterwards, to save maybe 2W vs 5.0x4 on copper? I know I'd rather have a plug-in SSD which I can easily move elsewhere for data recovery, backup or replacement if necessary, and no amount of power savings would make me give up on that. You could slap connectors on those pigtails, though I'd be worried about optical connectors slim enough to go in slim form factors that may require special tools to disconnect without breaking something.

    On desktop, at least you have plenty of room to package major components in such a way as to give them SC/LC/whatever connectors with loops of slack inside for strain relief or repair.