Memory bandwidth is one of the most important factors in graphics card performance.
Core Clock: The frequency at which the GPU runs, loosely comparable to a CPU's operating frequency. Real-world "speed" depends on numerous factors, architecture being one of them, so comparing the core clock of an older GPU against a newer one (or across brands) is not an apples-to-apples comparison. For the sake of ease, though, greater core frequencies equate to faster computing. Be careful with that assumption: gaming performance never scales quite so linearly.
Memory Clock: Quite simply, this is the speed of the video card's onboard memory. As above, the memory clock is one input used to calculate memory bandwidth; higher memory bandwidth equates to better performance in anti-aliasing and other memory-intensive tasks.
Memory Interface: This is the memory's actual bus width, typically 128-bit, 256-bit, or 384-bit. The memory interface is one of the terms used to calculate total bandwidth: a wider interface means a bigger pipe. A narrower interface can be compensated for by faster memory clocks or a memory type that moves more data per clock.
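As a rough illustration of that trade-off, here is a sketch with made-up example numbers (not any specific card) showing how a narrow bus paired with memory that transfers more data per clock can match a wider, slower configuration:

```python
# Hypothetical comparison: a 128-bit bus with GDDR5-style memory
# (4 transfers per clock) versus a 256-bit bus with GDDR3-style memory
# (2 transfers per clock), both running a 1000 MHz base memory clock.

def bandwidth_gbps(clock_mhz, transfers_per_clock, bus_width_bits):
    """Peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8               # bus width in bytes
    transfers_per_sec = clock_mhz * 1e6 * transfers_per_clock
    return transfers_per_sec * bytes_per_transfer / 1e9

narrow_fast = bandwidth_gbps(1000, 4, 128)  # 128-bit, 4 transfers/clock
wide_slow = bandwidth_gbps(1000, 2, 256)    # 256-bit, 2 transfers/clock
print(narrow_fast, wide_slow)               # both come out to 64.0 GB/s
```

The narrow-but-fast configuration and the wide-but-slow one land on the same peak bandwidth, which is why bus width alone doesn't tell the whole story.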
Memory Bandwidth: This is one of the single most important aspects of graphics processors. Memory bandwidth determines your card's ability to utilize its onboard video RAM efficiently when under stress. Think of it like the lanes on a highway: if you have a perpetually congested 3-lane highway and add 3 more lanes to it over the weekend, you'll see a significant decrease in congestion (if not its outright elimination). The same is true for GPUs: having tons of fast GDDR5 memory won't do any good if the pipe is too small to feed it in time.
Memory bandwidth is determined by the memory type (i.e., GDDR5, GDDR4, etc.), the memory clock, and the memory bus width. Calculate the maximum memory bandwidth by multiplying the memory clock by the transfers-per-clock of the memory type and by the bus width in bytes.
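To make the formula concrete, here is a minimal sketch. The numbers are illustrative only (a 1502 MHz base clock, GDDR5's 4 transfers per clock, and a 256-bit bus, resembling a high-end card of the GDDR5 era):

```python
# Peak bandwidth = memory clock x transfers per clock x bus width (in bytes).

def peak_bandwidth_gbs(base_clock_mhz, transfers_per_clock, bus_width_bits):
    """Theoretical peak memory bandwidth in GB/s."""
    transfers_per_second = base_clock_mhz * 1e6 * transfers_per_clock
    return transfers_per_second * (bus_width_bits / 8) / 1e9

# Illustrative example: 1502 MHz GDDR5 (4 transfers/clock) on a 256-bit bus.
print(peak_bandwidth_gbs(1502, 4, 256))  # -> 192.256 GB/s
```

Note that marketing materials often quote the "effective" memory clock (base clock multiplied by transfers per clock), in which case the transfers-per-clock factor is already baked in.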