Still musing about a build for triple-monitor gaming at high resolution, and concerned about the long-term viability of any such build with limited VRAM (especially given the amounts in Kepler's leaked specs).
Then I ran across this post on another forum while researching via Google:
"Doesn't matter what's on the card. If the video card doesn't physically have it, it'll use your ram"
"Of course. But you do realise that this is a new function that Kepler and Maxwell offer, yes? Because up until now, Nvidia cards that run out of VRAM cache from the paging file on your hard drive."
Is anyone familiar with this supposed feature on Kepler cards? Theoretically, would running out of VRAM and being forced to page from system memory still act as a bottleneck, or would it be relatively seamless? I'm really hoping for the latter, but somehow I don't think it's likely...
"I believe I can offer an explanation, or at least one that accounts for some of the exhibited issues...
The framebuffer reserves an amount of VRAM for each frame as it is being drawn/sent to the monitor, plus room for however many frames you have set to render ahead. Because it stores this as raw pixel information, it's not exactly small: you have 8-bit data for each of R, G, and B, and on modern cards an alpha channel as well. Once you include overhead, such as lookup tables and other 'housekeeping' info for the card, each frame starts to get a bit on the large side. That cuts into how much VRAM is left for everything else - textures, shader code, etc.
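To put rough numbers on that, here's a quick sketch. The resolution, pixel format, and buffer counts are my own assumptions for a typical triple-monitor surround setup, not figures from the post:

```python
# Rough framebuffer-size arithmetic for a triple-monitor setup.
# Assumptions (mine): 3x 1920x1080 in surround (5760x1080),
# 32-bit RGBA pixels, double buffering plus 3 render-ahead frames.

width, height = 5760, 1080
bytes_per_pixel = 4                 # 8 bits each for R, G, B, A

one_buffer = width * height * bytes_per_pixel
buffers = 2 + 3                     # front + back + render-ahead queue
total = one_buffer * buffers

print(f"one buffer: {one_buffer / 2**20:.1f} MiB")   # ~23.7 MiB
print(f"{buffers} buffers: {total / 2**20:.1f} MiB") # ~118.7 MiB
```

That's over 100 MiB gone before a single texture is loaded - and this ignores the 'housekeeping' overhead the poster mentions, so the real figure would be higher still.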
This means that you'll start seeing what AMD/ATi termed "HyperMemory" and nVidia terms "TurboCache" swapping to system RAM when the GPU runs out of texture room. And compared to VRAM, which can transfer at rates easily topping 150GB/s on higher-end cards, system RAM is slow. Almost glacially so. You won't even get the raw speed of system RAM, either, as requests have to pass through a DirectX layer, a driver layer, and a kernel layer before reaching what the card needs. Then add the latency inherent in pulling data for the GPU out of system RAM (PCI-E bus, PCI-E controller, memory controller, RAM, then all the way back again), and the card has to still need that data by the time it arrives - because it takes a long, long time.
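For a sense of scale, a back-of-envelope comparison of how long it takes to move 1 GiB of texture data over each link in the fallback chain. The bandwidth figures are my ballpark assumptions for hardware of that era, not measurements:

```python
# Transfer time for 1 GiB over each link in the fallback chain.
# Bandwidths are rough, assumed figures for the period, ignoring
# the API/driver/kernel overhead described above.

GiB = 2**30
links = {
    "VRAM (GDDR5, ~150 GB/s)":        150e9,
    "DDR3 dual-channel (~20 GB/s)":    20e9,
    "PCIe 2.0 x16 (~8 GB/s)":           8e9,
    "SATA HDD (~100 MB/s)":           100e6,
}

for name, bw in links.items():
    print(f"{name}: {GiB / bw * 1000:.1f} ms per GiB")
```

VRAM moves that gigabyte in a handful of milliseconds; the HDD takes over ten seconds - which is why the fallback chain is so punishing even before latency enters the picture.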
Now, if your situation is anything like mine when I investigated this a while ago (over a year now!), then monitoring system RAM usage would probably show it absolutely full to bursting when the huge framerate dives happen.
This is when the caching to system RAM fails, and it falls back to a pagefile on your HDD/SSD. Which, regardless of whether you're running the latest Sandforce SSD or a 4200RPM HDD, is so epically slow, it's like watching Grand Prix Continental Drift occurring in real time. Whenever I saw huge fps drops when testing, it was always when I'd pushed my GPU too far in terms of how much I was asking it to load, and it was paging first to system RAM, then to a pagefile. I confirmed this by stuffing another 6GB of RAM into my system and seeing if it still did it at the same points. It did not. Don't get me wrong - framerates were still appalling, but because it wasn't hitting up the HDD for texture memory, they didn't drop into the low-single-digits, and instead remained in the teens.
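The teens-versus-single-digits observation is consistent with simple arithmetic. Here's a sketch under assumptions I'm inventing for illustration (a 100 MB texture miss per frame, a ~16.7 ms base frame time, ~1.5 GB/s effective through the system-RAM path after the layers described above, ~100 MB/s for the HDD pagefile):

```python
# Why RAM paging lands in the teens while HDD paging falls to
# around 1 fps: add the fetch time for a per-frame texture miss
# to the base frame time. All numbers are illustrative assumptions.

miss_bytes = 100 * 10**6     # assumed texture data faulted in per frame
base_frame = 1 / 60          # assumed render time when not starved

paths = {
    "system RAM (effective ~1.5 GB/s)": 1.5e9,
    "HDD pagefile (~100 MB/s)": 100e6,
}

fps = {}
for name, bw in paths.items():
    frame_time = base_frame + miss_bytes / bw
    fps[name] = 1 / frame_time
    print(f"{name}: ~{fps[name]:.1f} fps")
```

The absolute numbers depend entirely on the assumed miss size and bandwidths, but the order-of-magnitude gap between the two paths matches the behaviour described.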
You'll possibly also see some odd behaviour at times - where, if you're asking the GPU to load too many textures, it doesn't even try to fit them into VRAM and just goes straight to system RAM. In these scenarios you can see what amounts to a random VRAM usage number in monitoring software, high system RAM usage, and terrible framerates all at the same time.
At least, this is the hypothesis I've formed after significant testing. I have no way of actually confirming whether, when system RAM is full, Windows is intelligent enough to push less access-speed-sensitive data into the pagefile first, but it seems to tally with all the evidence I've gathered."