You will want the 4GB model if you plan on using a high-resolution setup, like a 2560x1440 or higher screen, or a multi-monitor setup with a lot of total pixels like 3 1080p monitors. The higher the resolution, the more VRAM comes into play. So if you're just using 1080p, you will be fine with the 2GB version and can save some money. But if you plan on going high-res, I would get the 4GB model.
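To give a rough idea of why resolution drives VRAM use, here is a quick back-of-envelope sketch. The bytes-per-pixel figure and buffer count are illustrative assumptions (real games also store textures, geometry, and shadow maps, which usually dominate), so treat the numbers as a floor, not a measurement:

```python
# Rough estimate of VRAM consumed by the render targets alone at different
# resolutions. Assumptions (not measurements): 32-bit color, and three
# screen-sized buffers (double-buffered color plus a depth buffer).
BYTES_PER_PIXEL = 4
BUFFERS = 3

def framebuffer_mb(width, height, msaa=1):
    """Approximate render-target memory in MB at a given MSAA level."""
    return width * height * BYTES_PER_PIXEL * BUFFERS * msaa / (1024 ** 2)

for w, h in [(1920, 1080), (2560, 1440), (3 * 1920, 1080), (3840, 2160)]:
    print(f"{w}x{h}: {framebuffer_mb(w, h):6.1f} MB, "
          f"with 4x MSAA: {framebuffer_mb(w, h, msaa=4):7.1f} MB")
```

The point the numbers make: tripling the pixel count (triple 1080p) roughly triples the render-target footprint, and turning on MSAA multiplies it again, which is why high-res and multi-monitor setups eat into a 2GB card much faster than a single 1080p screen does.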
And you are correct, VRAM does not stack in SLI, and as far as I know there aren't any updates coming to change this.
I found this in Hardware Canucks' review of the Galaxy GTX 770 GC 4GB card:
"If I can veer a bit off course for a moment, the realities of today’s games and tomorrow’s applications need to be discussed before going too far into the GC 4GB’s successes and failures. With the optimizations in DX11, even the most demanding games are requiring less frame buffer capacity than ever. Next generation DX11.1 equipped console development will bring the focus towards a further streamlining of game engines, so highly detailed environments won’t require memory hogging, inefficient high resolution texture maps. As many game developers have already stated on and off the record, this will lead to an increase in the amount of raw processing power required to render a scene and a significant drop in the local memory requirements. What does a situation like this mean to cards like the GC 4GB? Now and in the future, its core processing performance will likely become a bottleneck long before more than 2GB of 7Gbps memory is required to provide a smooth gaming experience.
Naturally, the GC’s primary selling point is that 4GB of GDDR5 which panders to an odd theory some have that more memory is always better. The additional allotment may arguably be beneficial to framerates at even higher multi monitor and 4K resolutions but we’d beg to differ. As we’ve seen again and again, increased memory size will hardly ever allow a card to return completely playable framerates where the reference version could not. The reason for this is simple: the architecture itself becomes a bottleneck long before framebuffer limitations are reached.
In the grand scheme of things, at ultra high single monitor resolutions, the 4GB of memory really doesn’t make all that much of a difference in average framerates. However, in some rare instances like Crysis 3, it prevents framerates from plunging down into unplayable territory every now and then and that makes a huge difference in perceptual onscreen performance. That’s actually quite important since a sense of fluidity can be maintained without resorting to higher clock speeds." http://www.hardwarecanucks.com/forum/hardware-canucks-r...
And the point about DirectX 11 and 11.1 games being more efficient and actually using less VRAM is interesting. It goes against the common assumption that newer games will just use more and more VRAM.
^That's kinda what I was thinking. What's the point of 4 GB of VRAM if it can't be used quickly enough?
On rare occasions at high resolutions, you will run out of VRAM and performance takes a dump. Most of the time, simply lowering AA or some other setting will fix it. It can also happen in cases like Skyrim, where add-on mods can use more VRAM than normal due to ultra-large texture packs.