Hi All,
I was researching how the GT 240 GDDR3 differs from its GDDR5 counterpart, and whether going from PCIe 1.0 to 3.0 would matter as a bottleneck.
Here are the bandwidth figures I got from Wikipedia:
PCI Express 1.0 (x32 link) = 64 Gbit/s (8 GB/s)
PCI Express 2.0 (x16 link) = 64 Gbit/s (8 GB/s)
PCI Express 3.0 (x16 link) = 102.4 Gbit/s (12.8 GB/s)
PCI Express 2.0 (x32 link) = 128 Gbit/s (16 GB/s)
What I want to understand is how the computation actually works, since the rated memory bandwidth of, say, the GT 240 GDDR5 is 54.4 GB/s, which is much higher than any of the bus bandwidth capacities listed above.
http://en.wikipedia.org/wiki/GeForce_200_Series
Can anyone help me interpret these numbers?
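From what I can tell, the two numbers measure different buses: the 54.4 GB/s figure is the bandwidth between the GPU and its own onboard VRAM, while the PCIe figures are the bandwidth between the card and the rest of the system, so the first can legitimately be far larger than the second. A sketch of the arithmetic, assuming the GT 240 GDDR5 specs from the linked Wikipedia page (128-bit memory bus, 850 MHz memory clock, quad-pumped GDDR5 giving 3400 MT/s effective):

```python
def memory_bandwidth_gb_s(effective_mt_s, bus_width_bits):
    """Peak VRAM bandwidth: effective transfer rate x bus width in bytes."""
    return effective_mt_s * (bus_width_bits / 8) / 1000  # MB/s -> GB/s

# GT 240 GDDR5: 850 MHz base clock, quad-pumped -> 3400 MT/s, 128-bit bus
print(memory_bandwidth_gb_s(3400, 128))  # 54.4 GB/s

def pcie2_bandwidth_gb_s(lanes):
    """PCIe 2.0: 5 GT/s per lane, 8b/10b encoding (80% efficiency)."""
    return lanes * 5 * (8 / 10) / 8  # Gbit/s -> GB/s

print(pcie2_bandwidth_gb_s(16))  # 8.0 GB/s, matching the table above
```

So if this interpretation is right, the PCIe link only bottlenecks traffic crossing between system RAM and the card (texture uploads, etc.), not the GPU's access to its own memory.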