What are the max bandwidths for PCIe 1.0 x16, 2.0 x16, and 3.0 x16?

gloege · Nov 13, 2012
Does anyone know the actual total bandwidth of an x16 link on PCIe 1.0, 2.0, and 3.0 in GB/s, or at least a picture I can post to my Facebook? I'm sure they all top 100 GB, they have to. Don't just say 8 GB, 16 GB, or 32 GB, that is not true, lies lies lies! Otherwise every card on the market would bottleneck on PCIe 3.0, right? My Asus 9500 GT had more bandwidth than a PCIe 1.0 x16 link and I overclocked it to the max, so how did that work?! This is so frustrating, please help. The dumbed-out gamer.
 

gloege · Nov 13, 2012
And if I'm running a Galaxy GTX 670 with 4 GB of VRAM on PCIe 1.0 x16, how much of a performance hit would I take? Every game that's come out, Metro, Mafia II, Far Cry 3, Just Cause 2, everything I throw at it runs smooth. Why?
 

PCI-e provides roughly 4/8/16 GB/s over an x16 link for 1.0/2.0/3.0 respectively.
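If you want to see where those figures come from, here's a quick back-of-the-envelope sketch using the published per-lane transfer rates and line-code overheads (illustrative math only, not a benchmark):

```python
# Per-lane transfer rate (GT/s) and line-code efficiency for each generation:
# 1.0/2.0 use 8b/10b encoding, 3.0 uses 128b/130b.
gens = {
    "1.0": (2.5, 8 / 10),
    "2.0": (5.0, 8 / 10),
    "3.0": (8.0, 128 / 130),
}

lanes = 16
for gen, (gt_per_s, efficiency) in gens.items():
    # one transfer carries one bit per lane; divide by 8 for bytes
    gb_per_s = gt_per_s * efficiency * lanes / 8
    print(f"PCIe {gen} x{lanes}: ~{gb_per_s:.2f} GB/s per direction")
# prints ~4.00, ~8.00 and ~15.75 GB/s
```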

What makes you think a graphics card needs more than 16 GB/s? That is an obscene amount of data flowing very quickly. The TechPowerUp article linked above shows pretty clearly that most graphics cards don't take a significant hit from a PCI-e 1.0 x16 link; they simply don't need to push that much data through it. A fully rendered and rasterized 1080p stream at 60 fps is only about 373 MB/s, 4 GB/s is more than ten times that, and the GPU expands what it receives over the bus into far more data on the card.
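To put that 373 MB/s figure in context, here is the arithmetic behind it (assuming 1080p at 24-bit color and 60 fps as the baseline for the example):

```python
# How much data is a fully rendered, rasterized 1080p stream?
width, height = 1920, 1080
bytes_per_pixel = 3   # 24-bit color, no alpha
fps = 60

frame_mb = width * height * bytes_per_pixel / 1e6
stream_mb_s = frame_mb * fps
print(f"one frame: {frame_mb:.2f} MB, at {fps} fps: {stream_mb_s:.0f} MB/s")
# ~6.22 MB per frame, ~373 MB/s -- roughly a tenth of PCIe 1.0 x16's 4 GB/s
```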
 
1. Lose the sass.

2. Your GPU memory bandwidth and video link bandwidth will both be significantly higher than the PCI-e bandwidth. The GPU gets sent the models and the textures and then generates the image itself; it isn't sent entirely new data every frame, since much of it is shared data that it keeps pulling from its own memory. GPU memory bandwidth has to be ridiculously high because AA works by rendering the image at 2/4/8/16x the original resolution and then downsampling it to the intended resolution. To do that you have to write a much larger image into memory, read it back out, downsample it, and write it back in, which takes a huge amount of bandwidth (see the sketch below). The graphics output links are designed to support very high resolutions and frame rates for the future, so they have large bandwidths too, not because the GPU can saturate them, but because that's just what they're specced for.
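Here's a very rough sketch of just the color-buffer traffic under a naive supersampling model (illustrative numbers only; real AA implementations and total GPU memory traffic are far more involved):

```python
# Naive supersampling: render at N times the pixel count, then downsample.
width, height = 1920, 1080
bytes_per_pixel = 4   # RGBA8 color buffer
fps = 60

for factor in (2, 4, 8, 16):
    super_pixels = width * height * factor
    # write the oversized buffer once, then read it back once to downsample
    traffic_gb_s = super_pixels * bytes_per_pixel * 2 * fps / 1e9
    print(f"{factor}x: ~{traffic_gb_s:.1f} GB/s just for the color buffer")
# 16x already needs ~16 GB/s before textures, geometry or depth are counted
```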