OK, let me throw a crude analogy out there to see if it's helpful.
Imagine you are working from lots of pieces of paper, using information from them to produce a finished product. The finished product is also stored on a number of pieces of paper. Let's imagine too (for the sake of the analogy) that you can't stack pieces of paper on top of one another, so each piece needs its own space on the desk.
In that case, the size of your desk determines how many pieces of paper you can work from simultaneously, and how large your finished product can be.
Think of VRAM as the size of your desk. As long as everything you need fits on the desk, having extra space (or extra VRAM) won't make things any faster at all; it's just unused space. However, as soon as you run out of space, you run into big problems. If a graphics card runs out of VRAM it will start shifting things into system RAM, which (going back to the dodgy analogy) is a little like using a filing cabinet. The information is still accessible if you need it, but it takes time (comparatively, a massive amount of time) to retrieve it... and remember that before you can look at it, you have to file another piece of paper away in the cabinet to clear up desk space for the information you're trying to retrieve.
A faster GPU can process information more quickly (like having a more efficient worker), but it is VRAM that determines how much data a card can work with at any one time (how big the desk is).
In other words, I would expect VRAM usage to be more or less identical for different cards, provided you're using the same game at the same resolution and settings.
What needs more VRAM? Things like:
- High resolutions (like your 3x1200p setup): the "finished product" (the rendered frame) is much larger in this case, and there are far more pixels to process and generate
- Post-processing such as anti-aliasing (AA)
- High resolution textures
- etc etc.
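To put rough numbers on the resolution point, here's a back-of-envelope sketch in Python. The byte counts are assumptions for illustration (4 bytes per pixel, double buffering); real drivers allocate considerably more once you add depth buffers, textures, and alignment overhead.

```python
# Rough sketch of how resolution and AA inflate framebuffer VRAM use.
# Assumptions: 4 bytes per pixel (32-bit colour), double buffering.
# Real-world usage is higher (depth/stencil buffers, textures, etc.).

def framebuffer_bytes(width, height, bytes_per_pixel=4, msaa=1, buffers=2):
    """Approximate VRAM for colour buffers at a given resolution.

    msaa multiplies the samples stored per pixel; buffers counts
    front/back buffers (double buffering = 2).
    """
    return width * height * bytes_per_pixel * msaa * buffers

single = framebuffer_bytes(1920, 1200)        # one 1200p screen
triple = framebuffer_bytes(1920 * 3, 1200)    # 3x1200p surround
triple_4x = framebuffer_bytes(1920 * 3, 1200, msaa=4)  # add 4x MSAA

for label, size in [("1x1200p", single),
                    ("3x1200p", triple),
                    ("3x1200p + 4x MSAA", triple_4x)]:
    print(f"{label}: {size / 2**20:.0f} MiB")
```

The point isn't the exact figures; it's that tripling the screen area triples the framebuffer cost, and multisampling multiplies it again, which is why a surround setup eats desk space so much faster than a single monitor.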
Hopefully that's helpful?
Or others may like to comment on this analogy?