Ugh, that's a kinda long question to answer XD
To put it simply, graphics cards can be "held back" by 2 major factors that need to be balanced out: memory bandwidth and GPU computing power (or "number crunching" ability, so to speak).
Inside the "memory bandwidth" part there's the "memory interface" you mentioned, which allows the GPU to address and move more memory per cycle, but the memory itself also works at a certain speed (the MHz, brotha, lol). So, to get the actual memory bandwidth you have to multiply both: speed and interface width. That's the GB/s figure a card has for internal memory movement, which is the bottom line regarding memory bandwidth.
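That "multiply both" bit can be sketched in a few lines. The numbers below are hypothetical, just to illustrate the arithmetic: bytes moved per transfer (interface width / 8) times effective transfers per second gives you the peak GB/s.

```python
def mem_bandwidth_gbs(bus_width_bits: int, effective_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s: bytes per transfer x transfers per second."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_rate_gbps

# e.g. a 256-bit bus with VRAM running at 7 Gbps effective:
print(mem_bandwidth_gbs(256, 7.0))  # 224.0 GB/s
```

Note this is the *peak* theoretical figure; real-world throughput is always somewhat lower.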
GPU processing is the other side of the equation, but it's a way, way more complex scenario to explain than memory... There's a rule of thumb that applies, though: the more "stream processors"/"CUDA cores" (ATI/nVidia resp.), ROPs, MHz, memory interface width, etc., the better.
So, in the end, you're actually looking for a video card that balances all of those variables out, giving or taking from each.
TL;DR: 128 bits + fast VRAM ~= 256 bits + slow VRAM, memory-wise, for a given GPU spec/type. 256 bits (or more) + fast VRAM is king; rule of thumb.
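You can check that TL;DR equivalence with the same width × rate arithmetic (made-up example numbers, just to show the trade-off):

```python
def mem_bandwidth_gbs(bus_width_bits: int, effective_rate_gbps: float) -> float:
    """Peak memory bandwidth in GB/s."""
    return (bus_width_bits / 8) * effective_rate_gbps

narrow_fast = mem_bandwidth_gbs(128, 7.0)   # narrow bus, fast VRAM -> 112 GB/s
wide_slow   = mem_bandwidth_gbs(256, 3.5)   # wide bus, slow VRAM   -> 112 GB/s
wide_fast   = mem_bandwidth_gbs(256, 7.0)   # wide bus, fast VRAM   -> 224 GB/s

print(narrow_fast == wide_slow)  # True: same bandwidth, different recipe
```

Which is exactly why a 128-bit card with fast VRAM can keep up with a 256-bit one using slower chips, and why wide + fast is king.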