PCI Express 3.0: On Motherboards By This Time Next Year?

Motherboard-Based Interconnects

AMD and Intel have never been particularly chatty when it comes to detailing the interfaces they use to communicate between chipset components, or even between logic blocks within a northbridge/southbridge. We know the data rates at which those connections run, and we know that they're generally designed to be as bottleneck-free as possible. Sometimes we even know where a certain piece of logic came from, such as the Silicon Image-based SATA controller AMD used in its SB600. But we're often kept in the dark as to the technology used to build the bridge between components. PCI Express 3.0 certainly presents itself as an attractive candidate, much as PCI Express already underpins the A-Link interface AMD employs.
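For context on why each PCI Express generation is attractive as an interconnect, the per-lane numbers are easy to work out from the signaling rate and the line encoding (8b/10b for generations 1.x and 2.0, 128b/130b for 3.0). This snippet is illustrative arithmetic, not from the article; the figures come from the PCI-SIG specifications:

```python
# Effective per-lane, per-direction PCI Express bandwidth by generation.
GENERATIONS = {
    # name: (signaling rate in GT/s, encoding payload bits, encoding total bits)
    "PCIe 1.x": (2.5, 8, 10),     # 8b/10b encoding
    "PCIe 2.0": (5.0, 8, 10),     # 8b/10b encoding
    "PCIe 3.0": (8.0, 128, 130),  # 128b/130b encoding
}

def lane_bandwidth_mbps(gt_per_s, payload_bits, total_bits):
    """Effective bandwidth of one lane in MB/s (1 MB = 1e6 bytes)."""
    usable_bits_per_s = gt_per_s * 1e9 * payload_bits / total_bits
    return usable_bits_per_s / 8 / 1e6

for name, params in GENERATIONS.items():
    print(f"{name}: {lane_bandwidth_mbps(*params):.0f} MB/s per lane")
```

Note how PCIe 3.0 nearly doubles the per-lane rate of 2.0 (roughly 985 MB/s versus 500 MB/s) despite the signaling rate rising only from 5 to 8 GT/s; the leaner 128b/130b encoding makes up the rest.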

The recent emergence of USB 3.0 and SATA 6Gb/s controllers on a number of third-party motherboards may provide a glimpse into this process. Because Intel's X58 chipset does not provide native support for either technology, companies like Gigabyte had to integrate discrete controllers onto their boards using available connectivity.

Gigabyte’s EX58-UD5 motherboard did not have USB 3.0 or SATA 6Gb/s. However, it did include a x4 PCI Express slot.

Gigabyte replaced the EX58-UD5 with the X58A-UD5, which supports two USB 3.0 and two SATA 6Gb/s ports. Where did Gigabyte find the bandwidth to support the new technologies? By dedicating one lane of PCI Express 2.0 connectivity to each controller, trading away some expansion connectivity in exchange for added on-board functionality.
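A quick back-of-the-envelope check shows why one PCIe 2.0 lane per controller is a reasonable (if imperfect) fit. This sketch is our own arithmetic, not from Gigabyte; all three links use 8b/10b encoding, so effective throughput is 80% of the raw signaling rate:

```python
# One PCIe 2.0 lane vs. the controllers hung off it.
# All figures in MB/s after 8b/10b line encoding (1 MB = 1e6 bytes).
PCIE2_LANE = 5.0e9 * 8 / 10 / 8 / 1e6   # 500 MB/s per direction
USB3       = 5.0e9 * 8 / 10 / 8 / 1e6   # 500 MB/s
SATA_6G    = 6.0e9 * 8 / 10 / 8 / 1e6   # 600 MB/s

for name, need in [("USB 3.0", USB3), ("SATA 6Gb/s", SATA_6G)]:
    verdict = "fits" if need <= PCIE2_LANE else "exceeds the lane"
    print(f"{name}: needs {need:.0f} MB/s, lane offers {PCIE2_LANE:.0f} -> {verdict}")
```

In other words, a single PCIe 2.0 lane covers USB 3.0 exactly, but leaves SATA 6Gb/s about 100 MB/s short of its theoretical peak, which only the fastest SSDs would ever notice. A PCIe 3.0 lane would comfortably cover either.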

Besides the addition of support for USB 3.0 and SATA 6Gb/s, the only other real difference between the two motherboards is that the newer offering drops the x4 slot.

Will PCI Express 3.0, like the standards that preceded it, wind up serving as an enabler of future technologies and controllers that won't make it into the next generation of chipsets as integrated features? Almost certainly.

  • Comments
  • cmcghee358
    Good article with some nice teases. Seems us regular users of high-end machines won't see the worth until 2012. Just in time for my next build!
    8
  • tony singh
    What the..... pcie3 already developed & most games graphics are still of geforce 7 level thnk u consoles..
    23
  • darthvidor
    just got pci-e 2.0 last 2008 with my x58 ... time's flying
    5
  • iqvl
    Good news to people like me who haven't spent any money on a PCIE 2.0 DX11 card due to nVidia's delay in shipping the GTX460.

    Can't wait to see PCIE 3.0, native USB3/SATA3, DDR4, quad channel and faster&cheaper SSD next year.

    In addition, I hate unreasonably priced buggy HDMI and would also like to see Ethernet-cable-based (cheap, fast, and exceptional) monitors as soon as possible.
    http://www.tomshardware.com/news/ethernet-cable-hdmi-displayport-hdbaset,10784.html

    One more tech that I can't wait to see: http://www.tomshardware.com/news/silicon-photonics-laser-light-beams,10961.html

    WOW, so many new techs to be expected next year!
    9
  • ytoledano
    Processor speed *is* increasing exponentially! Even a 5% year-on-year increase is exponential.
    -12
  • Casper42
    I haven't read this entire article but on a related note I was told that within the Sandy Bridge family, at least on the server side, the higher end products will get PCIe 3.0.

    And if you think the Core i3/5/7 desktop naming is confusing now, wait till Intel starts releasing all their Sandy Bridge server chips. It's going to be even worse I think.

    And while we're talking about futures, 32GB DIMMs will be out for the server market most likely before the end of this year. If 3D Stacking and Load Reducing DIMMs remain on track, we could see 128GB on a single DIMM around 2013, which is when DDR4 is slated to come out as well.
    -6
  • JonnyDough
    Quote:
    After an unfortunate series of untimely delays, the folks behind PCI Express 3.0 believe they've worked out the kinks that have kept next-generation connectivity from achieving backwards compatibility with PCIe 2.0. We take a look at the tech to come.


    It's nice to see backwards compatibility and cost being key factors in the decision making. Especially considering that devices won't be able to saturate it for many years to come.
    3
  • rohitbaran
    Quote:
    Nothing in the world of graphics is getting smaller. Displays are getting larger, high definition is replacing standard definition, the textures used in games are becoming even more detailed and intricate.

    Even the graphics cards are getting bigger! :lol:
    9
  • iqvl
    rohitbaran: Even the graphics cards are getting bigger!

    I believe that he meant gfx size per performance. :)
    -3
  • Tamz_msc
    Quote:
    We do not feel that the need exists today for the latest and greatest graphics cards to sport 16-lane PCI Express 3.0 interfaces.
    Glad you said today, since when Crysis 3 comes along it's all back to the drawing board, again.
    -3
  • rohitbaran
    iqvl: I believe that he meant gfx size per performance.

    Still, the largest cards today are a bit too large! Aren't they?
    0
  • qhoa1385
    NO!
    I HATE YOU TECHNOLOGY!

    lol
    -8
  • descendency
    rohitbaran: Even the graphics cards are getting bigger!

    And thanks to NVidia, hotter.
    7
  • LordConrad
    "After an unfortunate series of untimely delays..."

    A series of unfortunate events? That sounds familiar...
    8
  • shortbus25
    NVidia=Global Warming?
    14
  • Anonymous
    Very pleased with all this, looks like 2012 Q1/2 will be my new PC build, should all come together nicely then!
    2
  • ta152h
    This article could have been written in a sentence. PCI-E 3.0 will be out in 2011 and will be faster.

    Perhaps you could have explained why CUDA would benefit from this, or what type of apps that use it could. Fusion makes no sense to me, since the GPU and CPU will not be connected using PCI Express; they'll be on the same die. Maybe you could explain why these things are going to benefit.

    Also, according to the visual, latency will be lowered. Bandwidth is essentially irrelevant in many situations, since it's only rarely fully used, but latency could make itself felt in virtually anything.

    You also could have included the extra power use this extra speed will take. It almost certainly will, all other things being equal. That's a huge consideration. If I have to add, say 15 watts to my motherboard, is it worth it for a technology that might not be relevant for many situations, in the relative near term? If it's one or two watts, it's a no brainer, but, if it's a lot higher (which I suspect it might be), people need to really ask if they need this technology, or if it's better to wait until the next purchase, when it might have more value.
    1
  • Mousemonkey
    descendency said:
    And thanks to NVidia, hotter.

    With a bit of help from ATi of course. [:mousemonkey]
    -6
  • cmartin011
    they should integrate Intel's new optic technology into it and give it twice the bandwidth on top of that: 64 GB/s or more
    1
  • hardcore_gamer
    finally.........
    -4