How to calculate PCI-express usage

IRUser

Reputable
Feb 16, 2014
So I'm trying to find out how to calculate PCI-Express usage for any given GPU, but everywhere I look or search turns up nothing. All I can find is the same old "history" of PCI-Express and bandwidth.

I am looking for the formula that will let me take any GPU I desire and find out whether it can fit in a PCIe 2.0 or 3.0 slot, etc.

Any ideas where I can find this mysterious... mythical piece of math wonder?

[strike]For example, I am looking at this GTX 770 and it says:
* "Memory Bandwidth (GB/sec) = 224.3" (as stated here: http://www.geforce.com/hardware/desktop-gpus/geforce-gtx-770/specifications)
What the heck does that even mean? Does it run at 100% in a PCIe 2.0 slot? Even though the card is PCIe 3.0!?

* "PCI-e v3.0: 15.75 GB/s (128 GT/s)" (as stated here: http://en.wikipedia.org/wiki/PCI_Express)
What in Einstein's name is going on here?[/strike]

I just want to find out this formula:
<Insert GPU here> : <insert Formula here> = <insert PCI-e here>

EDIT 1: Sorry for the misunderstanding, gentlemen; after reading some replies I am reformulating my question.
How can I calculate how much PCIe bandwidth any given GPU consumes? Is there a formula for that, or is it grunt work where I stick the card in and measure with software?
 

InvalidError

Titan
Moderator
The GPU's memory bandwidth is between the GPU and the GDDR5 DRAM directly on the graphics card. It never crosses the PCIE bus except for the initial texture, geometry and shader program loads - assuming the GPU has enough local RAM to store everything locally.

Once everything has been transferred from system memory to the GPU's local memory, there is almost no more traffic over PCIE other than draw calls. That's why there is less than a 10% difference going from PCIE 1.0 x16 to PCIE 3.0 x16 in most games.
 
This is a complicated question, and not one you really need to answer. Your main concern is whether the given graphics card is supported or not. Remember a simple rule: all PCIe 3.0 cards are backward compatible. So start asking the right question, which is: "Is a given GPU, like the 770, compatible with the given mobo?" That is it.
 

IRUser

Reputable
Feb 16, 2014
I see I have created more confusion with my question, and learned in the process that the two things I linked are not the same thing. Let me reformulate my question then.

How can I calculate how much PCIe bandwidth any given GPU can consume?

Sorry for the misunderstanding
 

InvalidError

Titan
Moderator

PCIE cards can consume up to whatever their interface width and speed will allow.

For PCIE 1.0 which does 2.5Gbps per lane, that would be up to 40Gbps each way. PCIE 2.0 doubles bandwidth to 5Gbps per lane, which makes it up to 80Gbps each way. PCIE 3.0 bumps that to 8Gbps, which makes it 128Gbps each way.
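
If you want to sanity-check those numbers yourself, here's a rough Python sketch of the arithmetic above. It is only a back-of-the-envelope illustration: it uses the raw line rates and ignores the 8b/10b and 128b/130b encoding overhead, so real usable throughput is somewhat lower.

[code]
# Back-of-the-envelope PCIe bandwidth per direction, using raw line rates
# (ignores 8b/10b and 128b/130b encoding overhead).
RAW_GBPS_PER_LANE = {"1.0": 2.5, "2.0": 5.0, "3.0": 8.0}

def pcie_raw_gbps(gen, lanes=16):
    """Raw bandwidth in Gbps, each way, for a given generation and lane count."""
    return RAW_GBPS_PER_LANE[gen] * lanes

for gen in ("1.0", "2.0", "3.0"):
    print(f"PCIe {gen} x16: {pcie_raw_gbps(gen):.0f} Gbps each way")
# Prints 40, 80 and 128 Gbps - the same figures as above.
[/code]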

How much actual traffic GPUs generate during real-world use is a completely different story.
 
Still a strange question. Read this for a better understanding:

You are talking about the bandwidth available to a PCIe slot. This is described by the PCIe slot width (x1, x2, x4, x8, x16), with x16 being the widest used by current-gen GPUs. The other factor is the PCIe generation, written as PCIe X.0. PCIe 3.0 is twice as fast as PCIe 2.0, which is twice as fast as PCIe 1.0. Therefore, an x4 PCIe 3.0 slot offers bandwidth equivalent to an x8 PCIe 2.0 slot or an x16 PCIe 1.0 slot. The effect this has on actual performance can be far less significant than those figures suggest, though.
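
If it helps, here's a rough sketch of that equivalence in Python. The per-lane figures are approximate effective rates after encoding overhead (which is also where the 15.75 GB/s number the OP quoted comes from), so treat the output as ballpark only.

[code]
# Approximate usable bandwidth per lane in GB/s after encoding overhead
# (8b/10b for PCIe 1.0/2.0, 128b/130b for PCIe 3.0).
EFFECTIVE_GBS_PER_LANE = {"1.0": 0.25, "2.0": 0.5, "3.0": 0.985}

def slot_bandwidth_gbs(gen, lanes):
    """Approximate usable slot bandwidth in GB/s, each way."""
    return EFFECTIVE_GBS_PER_LANE[gen] * lanes

print(slot_bandwidth_gbs("3.0", 4))    # ~3.9 GB/s
print(slot_bandwidth_gbs("2.0", 8))    # 4.0 GB/s - roughly the same as x4 3.0
print(slot_bandwidth_gbs("1.0", 16))   # 4.0 GB/s - and the same as x16 1.0
print(slot_bandwidth_gbs("3.0", 16))   # ~15.8 GB/s, close to the 15.75 GB/s Wikipedia figure quoted above
[/code]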

In terms of power, a PCIe slot provides up to 75W to the GPU, which may not be enough for high-end GPUs like the GTX 760 or the AMD R9 280.

And if you are asking how many slots a card physically covers, that info is mentioned in the card's specs, listed as something like dual-slot or single-slot. Most of today's GPUs physically cover two slots while using only one PCIe connector.

If none of the above is what you asked for, then I'm sorry, I could not understand the question. :)
 


2.5Gbps per lane with 40Gbps each way means that there are actually 16 lanes for communication between the CPU and the PCIe sub-system, so 2.5 x 16 = 40Gbps. Just making that clear to the OP. Hopefully you don't mind. :)
 

IRUser

Reputable
Feb 16, 2014


Most of those things I already know: lanes, x4/x8, backward compatibility.
The reason I want to find out how much of the PCIe bus GPUs consume is to know when a card will exceed it and get "bottlenecked".

For example, whether I can jam a GTX 650 Ti into a PCIe x16 gen 1.0 slot and have the GPU work at 100%, or less (i.e. get bottlenecked).

EDIT: and many more examples, like whether a future GTX 1000 series would fit in a PCIe 2.0 slot and work at maximum capacity.
 

InvalidError

Titan
Moderator

I almost went back to edit and add that just to be painfully explicit :)

In any case, the OP wants to know how much PCIE bandwidth GPUs use, and the answer is: there is no universal answer, so quit asking for one. There are traffic spikes when resources get uploaded to the GPU's memory, and those may consume up to whatever the interface allows regardless of the GPU model. Aside from that, there is relatively little traffic as long as the GPU has enough local RAM to keep everything it needs local, and this, again, is regardless of the GPU model.
 

InvalidError

Titan
Moderator

You can plug a 780Ti or an R9-295 into a PCIE 1.0 x16 slot and still get 90-95% of their performance in most games.

GPUs use almost no PCIE bandwidth aside from large data dumps to load textures and other resources. That's why there is almost no PCIE bandwidth scaling with most GPUs in most real-world applications.
 
Solution

IRUser

Reputable
Feb 16, 2014
Although not exactly what I was looking for, it seems that I have nothing to worry about anytime soon... more like never.
I would like to thank everyone that answered in this thread. Thank you for your time and replies.