Instead of buying a whole new graphics card, what about making the GPU upgradeable?

smooth_cannibal

Distinguished
Oct 14, 2013
26
0
18,530
I tried searching for this, and all that came up was how to upgrade a graphics card.

I don't know if this has been tried by a company, or if the idea has at least come up.

It would apply the same mobo/CPU concept to your graphics card, like going from an i3 to an i5.

You'd buy a GPU board with 1 or 2 GB of memory, etc., with an Nvidia or AMD socket, then buy the GPU and install it like a CPU on a motherboard. If you bought a lower-end one, you could switch it out for a higher-end one later.

Probably too many headaches, but I thought it was kind of a cool idea. If anything, it would cut down on waste from all the PCBs.

Of course, now that I think about this more, if APUs really take off it will kind of be the same thing.
 
Kari

Splendid
Wouldn't work well at all.
The wild variations in power usage, memory bandwidth, etc. going from the low-end to the high-end GPUs would force them to make a whole bunch of boards. Or, if they made just one type of board that could handle the top-of-the-line models, the cheap low end wouldn't be so cheap anymore, but it would still be slow...
So in the end you'd still most likely upgrade both parts anyway... (and the socket would be an extra bit of waste at that point)
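
A quick back-of-the-envelope sketch of that bandwidth spread (the bus widths and data rates below are illustrative GDDR5-era assumptions, not any particular card's specs):

```python
# Rough memory-bandwidth spread across GPU tiers.
# Peak bandwidth (GB/s) = (bus width in bits / 8) * data rate in Gbps.
# Bus widths and data rates are illustrative assumptions only.

tiers = [
    ("low-end",  128, 6.0),   # 128-bit bus, 6 Gbps GDDR5
    ("high-end", 384, 6.0),   # 384-bit bus, 6 Gbps GDDR5
]

for name, bus_bits, gbps in tiers:
    bandwidth = bus_bits / 8 * gbps
    print(f"{name}: {bus_bits}-bit bus at {gbps} Gbps -> {bandwidth:.0f} GB/s")

# low-end: 128-bit bus at 6.0 Gbps -> 96 GB/s
# high-end: 384-bit bus at 6.0 Gbps -> 288 GB/s
# The wider bus needs three times the memory channels, chips, and PCB
# traces, which a one-size-fits-all socketed board would have to carry
# even under the cheapest chip.
```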
 

smooth_cannibal

Distinguished
Oct 14, 2013
26
0
18,530
Wouldn't the power variables be easily governed by software/BIOS settings? As far as memory bandwidth goes, that could be standardized into different tiers, as it already is.

I don't know, maybe it is stupid. If anything, it might be a good concept at an enthusiast level.
 

Kari

Splendid

Well, you can't just make VRMs pump out 300W via software/BIOS settings if they are designed to deliver only 70W; you'd just blow them up...
Even if you stayed within the same TDP envelope, it would lead to the same kind of situation as with CPUs and mobos: every time there is a node shrink (lower and more finely controlled voltages) or a new type of memory is introduced, it would force you to change the board as well when you upgrade the chip.
It would only work within quite a small group of chips, so there wouldn't be many upgrade choices available... I can't speak for everyone out there, but I have never gotten a new CPU without getting a new mobo at the same time (the old ones were never compatible), and I've had like 6 or 7 rigs over the years, starting with a 386SX in the ancient past. :)
 
Solution
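
A back-of-the-envelope sketch of the VRM sizing point from the accepted answer (the ~1.0 V core voltage and the 70 W / 300 W figures are ballpark assumptions, not real card specs):

```python
# Current a GPU core rail must supply at a given voltage:
# P = V * I, so I = P / V.
# Voltage and wattage figures are ballpark assumptions, not card specs.

CORE_VOLTAGE = 1.0  # volts, a rough ballpark for a GPU core rail

for name, board_power_w in [("low-end GPU", 70), ("high-end GPU", 300)]:
    amps = board_power_w / CORE_VOLTAGE
    print(f"{name}: {board_power_w} W / {CORE_VOLTAGE} V = {amps:.0f} A")

# low-end GPU: 70 W / 1.0 V = 70 A
# high-end GPU: 300 W / 1.0 V = 300 A
# VRM phases, inductors, and traces sized for ~70 A can't be told via
# BIOS/software to carry ~300 A; the limit is physical, not a setting.
```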