Why don't graphics cards allow users to add VRAM?

Hello,
A while back, there was a card called the Matrox Mystique G-200 (also called the G200) that allowed a user to add VRAM. My questions are: Why isn't that a thing today? What part of modern technology prevents this (other than the inconvenience of having to take the cooler off)?

Here's a picture of the card which clearly has a slot for additional memory.
[Image: Matrox Mystique G200]


I can think of four possible reasons why this isn't a thing, but I think that with some good old-fashioned ingenuity manufacturers could have gotten around all four issues.

1) Power delivery to the memory chips. Couldn't this be solved by running dedicated power leads to the slots for additional memory?
2) Connecting the chips to the GPU. Couldn't this also be solved by running dedicated traces from the GPU to those slots?
3) Possible issues with the card's BIOS recognizing the memory. Couldn't this be solved by introducing standards for speed, latency, etc., like we have on motherboards, and then requiring that graphics card BIOSes support those standards? (See the sketch after this list.)
4) Possible issues with Windows and other OS drivers. I don't know much about how drivers are written, so I don't have a solution off the top of my head.
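For point 3, here's a minimal sketch of what such a standard could look like in spirit: the card's firmware reads a small descriptor from each module, the way a motherboard reads SPD data from a DIMM. The 16-byte field layout and the decode_module_descriptor helper below are invented for illustration; they don't match real SPD or any actual GPU standard.

```python
# Hypothetical "VRAM module descriptor" decoder. The 16-byte layout below is
# made up for illustration and does not match real SPD or any GPU standard.
import struct

def decode_module_descriptor(rom: bytes) -> dict:
    """Decode a 16-byte descriptor the card's firmware might read over I2C."""
    if len(rom) < 16:
        raise ValueError("descriptor too short")
    density_mb, bus_width_bits, clock_mhz, cas_latency = struct.unpack_from("<HHIB", rom, 0)
    return {
        "density_MB": density_mb,          # module capacity
        "bus_width_bits": bus_width_bits,  # data width the module presents
        "clock_MHz": clock_mhz,            # maximum supported memory clock
        "CAS_latency": cas_latency,        # timing the memory controller must honor
    }

# Example: a fictional 512 MB, 64-bit, 1500 MHz, CL9 module.
rom = struct.pack("<HHIB", 512, 64, 1500, 9) + bytes(7)
print(decode_module_descriptor(rom))
```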

Thanks in advance to all that offer their wisdom.

-Darren
 

DEADLY9996

It's like phones with an SD slot: it negates the need to get the 64 GB version that costs an extra £100 when you can just insert a microSD card that has 128 GB!


When they have the tech, it doesn't cost them a lot to add more, but that doesn't mean it can't cost you a whole lot. :)
 
The main reason I can think of is that a graphics card only has a certain memory bus width, which only allows for a certain number of VRAM chips. Building a card with part of its bus left unpopulated would add unnecessary cost and R&D to a product that 99% of people wouldn't even consider upgrading, and it would significantly reduce the effective memory speed for anyone who doesn't upgrade. It's a lot easier to just double the capacity of the VRAM chips and charge a premium for the higher-VRAM cards.
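To put rough numbers on that, here's a back-of-the-envelope sketch. The figures are assumptions for illustration (32-bit-wide GDDR5-style chips at an effective 8 Gb/s per pin), not the specs of any particular card:

```python
# Rough peak-bandwidth estimate for a partially populated memory bus.
# Assumptions (illustrative only): each GDDR5-style chip contributes 32 bits
# of bus width, and the effective data rate is 8 Gb/s per pin.

CHIP_WIDTH_BITS = 32        # bus width contributed by one memory chip
DATA_RATE_GBPS_PER_PIN = 8  # effective per-pin data rate, in Gb/s

def peak_bandwidth_gb_s(chips_populated: int) -> float:
    """Peak memory bandwidth in GB/s for a given number of populated chips."""
    bus_width_bits = chips_populated * CHIP_WIDTH_BITS
    return bus_width_bits * DATA_RATE_GBPS_PER_PIN / 8  # bits -> bytes

# A 256-bit design needs 8 chips. Shipping it with only 4 populated
# (to leave empty sockets for "upgrades") halves the usable bus width.
print(peak_bandwidth_gb_s(8))  # 256.0 GB/s with the full 256-bit bus
print(peak_bandwidth_gb_s(4))  # 128.0 GB/s with half the bus empty
```

That's the "significantly reduce the effective memory speed" part: empty sockets aren't just wasted board space, they're wasted bus width.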
 

gillhooley

1. Extra cost.
2. They want you to upgrade, so why would they spend money to make it easier for you to delay that?
3. Small market for RAM upgrades on GPUs; not many people would ever do this, so there's no upside, and GPU RAM add-ons would be crazy expensive, IMO.

PS I had that card!!!
 
I think there are a couple of reasons. The most obvious is economic: they want you to buy a new card instead of just upgrading the one you have.

The other likely has to do with the GPU itself. Nvidia and AMD probably figure that by the time the amount of VRAM becomes an issue, the GPU itself is likely out of date and too weak to actually use the additional VRAM. That's why the amount of VRAM on lower-end cards is considered a marketing gimmick: the GPUs in low-end cards would never be able to use all of it, but it gets people to buy the card. The Nvidia GT 740 4 GB, for example, isn't strong enough to run at a resolution that would need 4 GB of VRAM. The only place where adding VRAM would be useful is at the top end, like AMD's Vega or Nvidia's 1070 and 1080, but again, by the time you are maxing out the VRAM on a single card, the GPU itself is running out of power. (Rough framebuffer numbers are sketched below.)

Note: This is in reference to single cards and gaming, not CrossFire/SLI or productivity applications.
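As a rough, hedged illustration of the resolution point: the raw render targets themselves are small, and the settings that push total VRAM use toward 4 GB (high-resolution textures, heavy AA, 1440p/4K output) are exactly the ones a GT 740-class GPU can't drive at playable frame rates anyway. The multipliers below are assumptions, not measurements of any real game:

```python
# Back-of-the-envelope framebuffer math: VRAM taken by the bare render targets
# at a given output resolution. Real games also keep textures, geometry,
# and driver buffers resident, which is where most of a 4 GB budget goes,
# so treat these numbers as a lower bound, not a measurement.

def render_target_mb(width: int, height: int, bytes_per_pixel: int = 4,
                     buffers: int = 3) -> float:
    """Size in MB of `buffers` full-screen 32-bit render targets."""
    return width * height * bytes_per_pixel * buffers / (1024 ** 2)

for name, (w, h) in {"1080p": (1920, 1080),
                     "1440p": (2560, 1440),
                     "4K":    (3840, 2160)}.items():
    print(f"{name}: ~{render_target_mb(w, h):.0f} MB of raw render targets")
```

In other words, the raw pixels are cheap; the 4 GB mostly matters for the asset-heavy settings that accompany high resolutions, which a GT 740 can't sustain anyway.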
 
Solution

Eximo

Well, most memory is no longer in a socketed/pinned package. And then you would need a whole bunch of sockets on the card, which would add significant expense because the PCB would have to grow considerably. Just imagine having to take a card apart to get to the memory.

I think it is more that developers target Moore's law (effectively; it's not quite true any longer). By the time you would need to add memory to a GPU, you should be replacing it with one that has more processing power. Unless you want to go down the route of socketed GPUs?

Let's see:
1MB 2D GPU + Voodoo 2 8MB 3D Accelerator
Voodoo 3 16MB 2D/3D
Geforce 2 64MB
Geforce 4 (I can't recall which one I had)
Geforce FX5200 128MB
Geforce 6600 256MB
Geforce 8800 GTS 640MB
Geforce GTX285 1GB
Geforce GTX580 1.5GB
Geforce GTX980 4GB
Geforce GTX1080 8GB

Except for that 580, and I honestly didn't wait that long between it and the 285, I pretty much double my GPU memory every time I buy.
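Putting numbers on that from the list above (taking the Voodoo 2's 8 MB as the starting point and skipping the GeForce 4, whose size isn't listed), the step-to-step growth really is roughly a doubling:

```python
# VRAM sizes (MB) from the upgrade history above, in purchase order.
# The 1 MB 2D card and the GeForce 4 are skipped (sizes not comparable/listed).
sizes_mb = [8, 16, 64, 128, 256, 640, 1024, 1536, 4096, 8192]

# Ratio of each card's VRAM to the previous one's: most steps are a doubling
# or more; the GTX 285 and GTX 580 steps are the smaller jumps.
for prev, curr in zip(sizes_mb, sizes_mb[1:]):
    print(f"{prev} MB -> {curr} MB: {curr / prev:.1f}x")
```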

AMD does have a card with 'expandable' memory: the Radeon Pro SSG. They added a pair of NVMe SSDs to act as additional VRAM for large compute jobs. Not really a gaming need, though.

Intel is doing something similar with Optane, a large memory-expansion capability for CPUs, though once again it's not really targeted at consumers.
 
I think it's a business decision. Ask yourself how many people would really upgrade the memory on their video cards. Most people need an entirely new card when their old one no longer does the job. It's rare that someone will say, "You know what? This 750 Ti would be perfect if only it had 8 GB of VRAM."
 
Back in the day when VRAM could be expanded, the chips didn't require special cooling. Nowadays they do. Not only would manufacturers need to add extra sockets, they'd also need a way to add cooling without requiring you to dismantle the card. They'd have to put the VRAM slots outside the normal cooling solution, with their own cooling built in.
 


Well, I listened to you. Am I in trouble or something? I mean, your name says not to.

Anyways, if it were between a 750 Ti and a 760 and memory were easily affordable, I might just upgrade the memory on the 750 Ti.

However, you can find a good card called the 760 Ti from old OEM systems on sites like Craigslist for less than $40. The 760 Ti is an OEM card only found in prebuilt systems and is basically a rebranded 670, which is a more powerful card than both the 750 Ti and the 760.

Again, if I could just upgrade that 760 Ti to 3 GB, I'd be as happy as could be, except for my massive CPU bottleneck. The card is in one of my older machines with an i3-540 in it. :D
 


Let's ask the manufacturers!

Oh wait, Nvidia doesn't list a direct contact method, and Asus never answers theirs. Never mind, I guess...
 


ASUS does an AMA from time to time; you could try to get a question in there. Asking via email will just get you marketing gibberish as a response. If this were something they were working on, they wouldn't say so in an email, but they might if cornered in a live AMA.
 

tazmo8448



Planned obsolescence, just like in the old days when new cars would roll out... speaking of cars, can you tell the difference these days from one to another?

 


If you know your brands, the exterior designs give them away.

You're right that the engineering in the modern cars you see on the streets has become largely the same across manufacturers. Aside from supercars with genuinely unique engineering (and there are very few of those anymore), most seem to run on almost the exact same technologies.
 

Eximo

If you know your brands, you know which cars share chassis and swap engines and the like.

Rebadging and re-branding are even worse in the automotive industry.

Though I blame modern safety standards and car shows for the fairly common styling cues that all the major OEMs adopt.