
PCIe 2.0 @ x16 vs. PCIe 3.0 @ x8

January 14, 2012 5:02:19 PM

So I recently built my first computer. When I was looking at mobos, I did tons and tons of research and settled on the ASUS P8Z68-V/GEN3. I noticed there was some funny thing about how it has two PCIe 2.0 slots @ x16 (which are 3.0-ready), but that they run at x16 with a single card and at x8/x8 with two. It was late and I decided to look into that tidbit in the morning.

Of course, I forgot to look into it. I bought the motherboard and built the rest of the computer, and now I'm thinking about doing an Eyefinity setup with three monitors and two of those brand spanking new HD 7970 graphics cards from AMD in CrossFire mode. Once I started looking into this, I realized my slight error - that "dual @ x8/x8" thing means that when you have graphics cards in both PCIe slots, they will run at x8 each instead of x16.

At normal resolutions, this wouldn't be a problem because the graphics cards wouldn't even begin to approach a bottleneck @ x8. I also did some hunting around and found that on an Eyefinity setup, I could see anywhere from a 3-10% loss in performance due to this oversight (grrrr).

I know that is not very much, but it bugs me that I could have just purchased a different motherboard and avoided this decrease in performance. There is, however, a theoretical question I'd like to ask. These new AMD graphics cards utilize PCIe 3.0, something for which my motherboard is prepared. Is there a chance I won't notice ANY sort of performance decrease because the dual x8 will be made up for by the fact that it is PCIe 3.0 and not PCIe 2.0? That is, theoretically speaking (since I couldn't find any benchmarks), is PCIe 3.0 @ x8 as fast as (or faster than) PCIe 2.0 @ x16?

Here are a couple of sites I used to gather some info:
http://www.overclock.net/t/1188376/hardwarecanucks-hd-7970-pci-e-3-0-vs-pci-e-2-0-comparison - this one is very relevant, as it compares a 7970 on PCIe 3.0 vs. a 7970 on PCIe 2.0
http://forums.overclockers.com.au/showthread.php?t=930170 - this demonstrates the difference between x8/x8 and x16/x16 with two high-end AMD graphics cards in CrossFire mode on an Eyefinity setup (from about a year ago; it was the most recent I could find)

Anyway, it's clear a single HD 7970 on PCIe 3.0 does not give any sort of performance gain compared to an HD 7970 on PCIe 2.0 (with both at x16). Additionally, there is little to no difference between single cards at x8 and x16 on PCIe 2.0, but there is a measurable difference between x8 and x16 in Eyefinity-resolution CrossFire setups. Is there any reason to believe that PCIe 3.0 could make up this difference? I'd feel a lot better if it did.

Sorry for the tome!


January 14, 2012 10:09:03 PM

Bump...if you don't want to read my huge post, just answer this question: theoretically, which is faster, PCIe 2.0 @ x16 or PCIe 3.0 @ x8 (or are they the same)?
January 14, 2012 10:24:25 PM

PCIe 3.0 handles about twice as much bandwidth per lane as PCIe 2.0 can, so in theory, PCIe 2.0 @ x16 is equal to PCIe 3.0 @ x8.

You should not have any bandwidth restrictions with PCIe 3.0.
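
For anyone who wants to check the arithmetic: PCIe 2.0 signals at 5 GT/s per lane with 8b/10b encoding, while PCIe 3.0 signals at 8 GT/s with the leaner 128b/130b encoding. A quick back-of-the-envelope sketch (real-world throughput is somewhat lower due to protocol overhead):

```python
# Effective one-way bandwidth per lane, from the published signaling
# rates and line-encoding overheads of each PCIe generation.

def lane_gbytes_per_s(gen):
    if gen == 2:
        return 5.0 * (8 / 10) / 8      # 5 GT/s, 8b/10b  -> 0.5 GB/s per lane
    if gen == 3:
        return 8.0 * (128 / 130) / 8   # 8 GT/s, 128b/130b -> ~0.985 GB/s per lane
    raise ValueError("unsupported generation")

print(f"PCIe 2.0 x16: {lane_gbytes_per_s(2) * 16:.2f} GB/s")  # 8.00 GB/s
print(f"PCIe 3.0 x8 : {lane_gbytes_per_s(3) * 8:.2f} GB/s")   # ~7.88 GB/s
```

So PCIe 3.0 @ x8 comes out a hair under PCIe 2.0 @ x16 - close enough to call them equal in practice.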
January 14, 2012 11:11:42 PM

With the PCIe 3.0 slots on the motherboard actually being utilized by PCIe 3.0 cards, you should notice little or no difference from the fact that both occupied slots run at x8. The one thing that you have to keep in mind is that these benchmarks are being run and measured in Windows with software, and in real-life usage you may not be able to actually notice any difference above a certain frame rate. Can you notice the difference in the display if you have 100fps in one instance and then 150fps in another?
January 14, 2012 11:25:35 PM

One caveat is that PCIe 3.0 isn't supported by Sandy Bridge CPUs. You are going to have to wait until Ivy Bridge is released to take advantage of the increased bandwidth.
January 14, 2012 11:26:08 PM

PCIe 3.0 is not implemented on Sandy Bridge - you have to wait for Ivy. The boards are PCIe 3.0 capable.
-Bruce
January 15, 2012 10:44:42 AM

I find these last couple of replies stunning - mobo companies developed, misleadingly advertised, and released motherboards with PCIe 3.0 knowing there would be no performance benefit? I mean, the fact that Ivy Bridge CPUs fit the 1155 socket is (relatively) recent news, right? More recent than mobos with PCIe 3.0 support. I could understand if they released mobos with PCIe 3.0 capability knowing full well that no present graphics cards had enough processing power to take advantage of it (that would be a "standard" advertising gimmick, imo), but the fact that they made PCIe 3.0-capable mobos with the 1155 socket knowing full well that it wasn't supported by those CPUs is staggering.

I suppose I don't know enough about PCIe (generally speaking). It made sense to me when it was just a motherboard interface connection (the way a motherboard communicates with certain devices), but the whole "CPU support" thing seems conceptually confusing. This might sound stupid, but why does the CPU care what the bandwidth of the PCIe lanes is? It seems to me that different PCIe versions are just a measure of how fast the mobo can present information to the CPU. This would (obviously) not depend in any way on the CPU's capabilities (so long as we were not exceeding some theoretical maximum amount of data that the CPU can process).

Is that it? Does PCIe 3.0 transfer data at such a rate that Sandy Bridge CPUs cannot keep up? If so, that makes the mobo companies' "PCIe 3.0 rush" even more ludicrous.
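
The short answer to the "why does the CPU care" question is that on LGA1155 the x16 graphics lanes come straight off the CPU's own integrated PCIe controller rather than the chipset, and a PCIe link trains at the highest generation that every component in the path supports. A conceptual sketch (the function is hypothetical, purely to illustrate the negotiation):

```python
# Conceptual sketch, not a real API: a PCIe link runs at the highest
# generation common to the CPU's controller, the board wiring, and the card.

def negotiated_gen(cpu_gen, board_gen, card_gen):
    return min(cpu_gen, board_gen, card_gen)

# Sandy Bridge (gen 2 controller) + Gen3-ready board + HD 7970 (gen 3 card):
print(negotiated_gen(cpu_gen=2, board_gen=3, card_gen=3))  # -> 2

# Swap in an Ivy Bridge CPU (gen 3 controller) and the same slot runs at gen 3:
print(negotiated_gen(cpu_gen=3, board_gen=3, card_gen=3))  # -> 3
```

That is also why the "Gen3-ready" marketing isn't entirely empty: the board's wiring and switches are the one part you can't upgrade later, while the CPU is a drop-in swap.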
January 15, 2012 10:53:34 AM

inzone said:
With the PCIe 3.0 slots on the motherboard actually being utilized by PCIe 3.0 cards, you should notice little or no difference from the fact that both occupied slots run at x8. The one thing that you have to keep in mind is that these benchmarks are being run and measured in Windows with software, and in real-life usage you may not be able to actually notice any difference above a certain frame rate. Can you notice the difference in the display if you have 100fps in one instance and then 150fps in another?


This is an awesome reply, but I think there is one problem - if you check out the links in my original post, I'm talking about an Eyefinity setup. On a normal monitor, even at high resolution, I'd grant that the difference would probably be something like "100fps in one instance and...150fps in another", which would clearly be negligible. However, in the setup I mentioned, we're not talking about those kinds of framerates. An Eyefinity setup at maximum resolution really tests the capabilities of the card(s) - in the benchmark tests provided, we're talking about the 20-50 fps range for modern games (once again, in an Eyefinity setup @ maximum resolution with anti-aliasing, etc.). Once we're talking about those numbers, a theoretical difference of the sort we're discussing becomes more significant.
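
To put rough numbers on why Eyefinity is so much more demanding (panel resolutions assumed for illustration; bezel compensation ignored):

```python
# Pixels per frame: one 1080p panel vs. a three-wide 1080p Eyefinity wall.
single    = 1920 * 1080   # ~2.07 million pixels
eyefinity = 5760 * 1080   # ~6.22 million pixels

print(f"Eyefinity renders {eyefinity / single:.0f}x the pixels per frame")
# -> 3x, which is how 100+ fps on one screen turns into 20-50 fps across three
```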

Best solution

January 15, 2012 11:48:15 AM

zabuzaxsta said:
This is an awesome reply, but I think there is one problem - if you check out the links in my original post, I'm talking about an Eyefinity setup. On a normal monitor, even at high resolution, I'd grant that the difference would probably be something like "100fps in one instance and...150fps in another", which would clearly be negligible. However, in the setup I mentioned, we're not talking about those kinds of framerates. An Eyefinity setup at maximum resolution really tests the capabilities of the card(s) - in the benchmark tests provided, we're talking about the 20-50 fps range for modern games (once again, in an Eyefinity setup @ maximum resolution with anti-aliasing, etc.). Once we're talking about those numbers, a theoretical difference of the sort we're discussing becomes more significant.



I think the results are greatly flawed in regards to the x8 vs x16 slots. He is using a P55 platform with a DMI of 2.5 GT/s and 4GB of dual-channel RAM at x8 and comparing it to an X58 platform that has a QPI of 4.8 GT/s and 6GB of triple-channel RAM at x16. All this really shows is that upgrading from the P55 platform to the X58 platform is not worth it for gaming. DMI and QPI are essentially the bus speeds, in gigatransfers per second. I have read that even the fastest graphics cards in the world cannot saturate an x16 slot. This guy did a good comparison of x4/x8/x16:

http://youtu.be/rSfifE2Domo
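
For context on those interconnect figures, here's a rough conversion to bandwidth (assuming P55's DMI behaves like a PCIe 1.x-style x4 link and that QPI moves 2 bytes per transfer in each direction; both are simplifications):

```python
# Rough effective bandwidth of the two platforms' CPU-to-chipset/IOH links.
dmi_gb_s = 2.5 * (8 / 10) / 8 * 4   # 4 lanes @ 2.5 GT/s, 8b/10b -> 1.0 GB/s each way
qpi_gb_s = 4.8 * 2                  # 2 bytes/transfer/direction -> 9.6 GB/s each way

print(f"P55 DMI: {dmi_gb_s:.1f} GB/s per direction")
print(f"X58 QPI: {qpi_gb_s:.1f} GB/s per direction")
```

The two platforms differ in far more than slot width, which is why that comparison can't isolate x8 vs. x16.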

I also did my own comparison, since I had the same concerns with my motherboard, which runs either x16/x0/x16/x0 or x8/x8/x16/x0. Keep in mind that in both instances I have my 6990 in slot one, which is x16 in the first test and x8 in the second. Also, a single 6990 shows up as 2 cards in 3DMark 11, so when you see 3 cards it is reporting a trifire 6990 + 6970 setup. What I did at first was leave the 6990 in slot one and one 6970 in slot 3 so that both cards were at x16, and I ran 3DMark 11.

[3DMark 11 result screenshot - image not preserved]

I then installed a second 6970 in slot 2, which caused slot one to revert to x8, and ran the CrossFire cable from slot one to slot 3, leaving slot 2 out of the CrossFire setup. I also did the test with all 3 cards connected with CrossFire cables but with the second 6970 disabled in CCC, and got the same results. The motherboard does not care whether or not the card is working in CrossFire; it only cares that there is a card plugged into the slot, which causes the number of available lanes to drop.

[3DMark 11 result screenshot - image not preserved]

Theoretically, PCIe 2.0 x16 should have the same bandwidth as PCIe 3.0 x8, but I really don't think x8 vs x16 matters with current tech. Maybe the next-gen 8xxx series will bottleneck, but who really knows at this point.
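
A toy model of the lane-dropping behavior described above (slot layout assumed from the post; a real board does this with hardware switches):

```python
# Sketch: the board only checks which slots are populated, not whether
# the cards are actually in CrossFire, when it allocates lanes.

def lane_config(occupied):
    """occupied: set of populated slot numbers (1-4)."""
    if 2 in occupied:
        return (8, 8, 16, 0)    # populating slot 2 splits slot 1's lanes
    return (16, 0, 16, 0)

print(lane_config({1, 3}))      # -> (16, 0, 16, 0): both cards at x16
print(lane_config({1, 2, 3}))   # -> (8, 8, 16, 0): slot 1 drops to x8
```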
January 15, 2012 5:34:39 PM

Alright, well it's good to know I won't be approaching a bottleneck at x8, even on an Eyefinity setup.

Is it really the case, though, that in the distant future when we approach the bottleneck, PCIe 3.0 won't provide a performance boost w/o an Ivy Bridge CPU?
January 15, 2012 5:38:41 PM

zabuzaxsta said:
Bump...if you don't want to read my huge post, just answer this question: theoretically, which is faster, PCIe 2.0 @ x16 or PCIe 3.0 @ x8 (or are they the same)?


Don't bump your thread as it could end in closure or deletion.

Don't bump posts:
http://www.tomshardware.com/forum/283384-33-read-first
January 15, 2012 7:27:34 PM

Whoops! Thanks for the heads up, mousemonkey. Sorry I didn't go over the forum rules first.
January 15, 2012 7:29:48 PM

Best answer selected by zabuzaxsta.