I will be the first to say that my knowledge of these subjects is limited; however, from what I've read, SLI on the current chipsets appears to be largely wasted.
Gigabyte uses the NF200 chip to control all of the CPU PCIe lanes, while other vendors use it only for the augmented lanes.
For example, the ASUS Maximus IV runs 8/8 or 8/16/16. It does not use the NF200 in a dual-card config, while Gigabyte does.
It's worth noting, though, that even though the NF200 may give you 16/16 physical lanes, it still runs off of the CPU, and the CPU only has 16 total physical lanes on current 2000-series CPUs. The remaining true lanes run off the P67 or Z68 PCH.
So say you were running 16/16 SLI, if that's even possible. Those 32 lanes are still only communicating with the CPU over a 16-lane connection.
Basically, that means you aren't necessarily going to see an improvement running 16/16 vs. 8/8 on the Sandy Bridge platform, so the benefit of the second GPU is diminished.
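To put rough numbers on that reasoning, here's a back-of-envelope sketch. It assumes PCIe 2.0 figures (Sandy Bridge CPU lanes are PCIe 2.0, roughly 500 MB/s per lane per direction after encoding overhead) and assumes the NF200's two x16 slots share the single 16-lane CPU uplink; real throughput depends on workload and the NF200's peer-to-peer features.

```python
# Illustrative PCIe 2.0 bandwidth comparison: native 8/8 vs. NF200 16/16.
# ~500 MB/s per lane per direction is the usual PCIe 2.0 effective figure.
PER_LANE_MB_S = 500

# The CPU exposes 16 lanes total, so the uplink tops out at:
cpu_uplink = 16 * PER_LANE_MB_S  # 8000 MB/s

# Native 8/8 split: each card gets a dedicated x8 link to the CPU.
per_card_native_8_8 = 8 * PER_LANE_MB_S  # 4000 MB/s per card

# NF200 16/16: each card sees an x16 slot, but both slots funnel
# through the same 16-lane uplink, so sustained per-card throughput
# when both cards transfer at once is again half the uplink.
per_card_nf200_16_16 = cpu_uplink // 2  # 4000 MB/s per card

print(cpu_uplink, per_card_native_8_8, per_card_nf200_16_16)
```

Under these assumptions the sustained per-card bandwidth comes out the same either way, which is why 16/16 behind an NF200 isn't expected to beat a native 8/8 split when both GPUs are busy.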
Is my reasoning sound here?