Wondering why video cards have cheaped-out memory capacities

I'm at work and I overheard my coworkers talking about how they don't want to spend money on an old laptop. That got me thinking about how I didn't want to buy a new battery for my old laptop, and then I remembered why I thought my laptop was crappy: it was a gaming laptop from a few years ago, and the video card had a crummy 128MB of RAM. Basically it could run games at full settings with great fps at 800x600, or possibly even 1024x768, but died at any higher resolution, even with every setting lowered to nothing.

Anywho, the main question is: why do manufacturers come out with awesome video cards with low memory? Whenever people talk about making a computer better in general, they tell a person to get more RAM; it's like the foundation of solid performance.

My guess would be cost, but I'm wondering: is it possible that increasing the density of DDR memory also increases heat?

I can understand clock speeds, because of heat, and possibly the compactness of transistors costing money or whatever. But if the cost of capacity weren't extreme, adding more memory to any graphics card would greatly increase productivity by allowing smoother graphics at higher resolutions, which also improves the quality of games by a significant margin.

One example I can remember: a friend told me F.E.A.R. looked better at 1600x1200 with high settings than at 1280x1024 at max settings with AA.

Just trying to understand the logic behind cheaper graphics cards. Isn't more memory always better? ;).
  1. Not without the bandwidth. Video cards have limited memory bandwidth, which makes adding a lot of memory to a cheap card useless.
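For a rough sense of scale (these are illustrative numbers, not any particular card's spec sheet), peak memory bandwidth is just bus width times effective memory clock, and it caps how fast the GPU can read or write memory no matter how much of it is installed:

```python
# Peak memory bandwidth = bus width (bytes/transfer) x effective clock (transfers/s).
# Illustrative numbers only, not taken from any real card's spec sheet.

def peak_bandwidth_gbs(bus_width_bits: int, effective_clock_mhz: float) -> float:
    """Theoretical peak memory bandwidth in GB/s."""
    bytes_per_transfer = bus_width_bits / 8
    return bytes_per_transfer * effective_clock_mhz * 1e6 / 1e9

# Same memory clock, different bus widths: the cheap card's narrow bus is the wall,
# regardless of how many megabytes sit behind it.
print(peak_bandwidth_gbs(64, 1600))   # budget card, 64-bit bus   -> 12.8 GB/s
print(peak_bandwidth_gbs(256, 1600))  # high-end card, 256-bit bus -> 51.2 GB/s
```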
  2. The main reason was the cost of GDDR3 back then. Remember, the 7800 was one of the first cards I can recall with a 512MB model; the 8000 series made over 512MB of onboard RAM mainstream.

    As for laptops, anything which reduces heat or power is considered a good thing; so laptop parts will always be weaker than their desktop counterparts.

    Personally, I fail to see much difference between 1024x768 and higher resolutions, outside of the lack of a stable framerate... As far as I'm concerned, the best settings are the settings where everything is on and there is no gameplay lag... but that's just me.
  3. I'd say it's mostly a cost issue. Less RAM onboard = cheaper to make/sell. Cheaper to sell = more sales, most likely. Plus, a powerful video card, even with restricted memory, is good up to certain screen resolutions.

    I took advantage of that with the last video card I purchased (an X1900XT), which normally has 512MB of RAM and, at the time, cost close to $300. I got a 256MB version with the same memory/core clock speeds for a little less than $200. I ran that until it died about a month ago, and it did GREAT.
  4. Really?! I guess the resolution is a matter of opinion, and if that were no longer an issue, then I wouldn't have a problem, because clock speeds and technologies are plenty high enough to run any game well on a laptop.

    My question to the people running at 1024x768: have you ever played a game for an extended period of time at 1600x1200 or better? Also, from a productivity standpoint, the difference in workspace from 1024 to 1600 is pretty huge, so being able to run 3ds Max or Blender or Premiere or whatever at that resolution with smooth real-time rendering would be very convenient.

    As for what causes the lack of memory capacity, would you guys rule out heat, or is that possibly a factor? I know memory clock speeds can increase heat, but does increased capacity alone contribute to that heat gain? Maybe denser chips make heat dissipation worse?

    Also, does anyone know the general price difference between 512MB and 1GB of GDDR4 nowadays, or between 256MB and 512MB of GDDR3 three years ago?

    EDIT: I didn't catch timmeh's post, 'cause it was small =P:
    Not without the bandwidth. Video cards have limited memory bandwidth, which makes adding a lot of memory to a cheap card useless.

    First, just to get this out of the way: your wiki link leads to an article about human memory >.>. There are articles on RAM, DDR3, and DDR4 that would have been better suited. This doesn't change your point, though.

    As far as I know, bandwidth has nothing to do with the amount of memory. Memory capacity is used to store data and buffers. A card's memory bandwidth depends on its memory clock speed and bus width, though.

    So my point is more that if you have more memory, you can store more in a buffer, and it takes a lot to store HD frames in a buffer. Also, the more memory you have, the more frames can be pre-processed in case the game demands more than the GPU can process for a few moments. With insufficient memory you'll notice lag, whereas with more memory your buffer would run low but the GPU would catch up before the buffer emptied. Btw, is there a term for this? It sounds like something that would be defined =P.
  5. Well, in a desktop the problem's already been mentioned. More graphics memory isn't going to help you if some other part of the system is bottlenecking it (and that could be anything from the CPU to the PCI-E bus). Most of the time a game won't use more than 512MB anyway. It's only when you get into resolutions like 1920x1440 with 8x AA and 8x AF that you'll want more, and again only if the rest of the system can keep up.

    This problem is usually amplified in a laptop, where the rest of the system delivers even less performance. That, and the extra heat you have to deal with: denser memory modules mean you have to get more heat away from a smaller area in an already tight laptop case.

    Cost is a bit of an issue, but look at the 4870: a brand new card with GDDR5 memory, and it's cheap compared to the top-of-the-line cards of past generations. I wouldn't say cost is the limiting factor.
  6. Matching up a GPU to the appropriate amount/type of memory isn't always a given... there are laptops out there with ATI Radeon 3650s sporting 1GB of video RAM. That makes ABSOLUTELY no sense... about as much sense as equipping that GPU with 64MB of RAM. One way is just wasteful and the other cripples performance. It's really a matter of balance.
  7. So is it only theoretical that the more memory you have, the bigger a buffer you can create, therefore reducing sudden drops in fps?

    'Cause theoretically if you have 5gb of video memory you could store ~20 frames of information in memory, so that if your clock dropped your fps down to 20 for 2 seconds your actual output could be 30fps because of the buffer.
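As a quick sanity check on that arithmetic (assuming plain 32-bit color and no AA), the raw size of one finished frame is just width x height x 4 bytes:

```python
# Raw size of one finished frame at 32-bit color (4 bytes/pixel),
# ignoring AA samples and depth/stencil buffers.

def frame_bytes(width: int, height: int, bytes_per_pixel: int = 4) -> int:
    return width * height * bytes_per_pixel

size = frame_bytes(1600, 1200)
print(f"One 1600x1200 frame: {size / 2**20:.1f} MiB")  # ~7.3 MiB
```

So finished frames themselves are actually small; it's the textures and intermediate render targets that chew through video memory.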

    I'll go further into this when I get home, my supervisor wants to close up shop =x.
  8. Quote:
    'Cause theoretically if you have 5gb of video memory you could store ~20 frames of information in memory, so that if your clock dropped your fps down to 20 for 2 seconds your actual output could be 30fps because of the buffer.

    Yeah, but I don't see how that would work. Your GPU would have to render several frames ahead of what's going on, and in a game that's not practical, since you're constantly changing the rendering environment every time you move or shoot a weapon or what have you. That, and your GPU would need to work several times as hard to fill up that buffer, so unless you're running well over 60 fps anyway (say, 200 fps), it's not going to work. Here's the basics of what happens to render a scene:

    1) The render environment gets evaluated (the CPU decides what's visible on the screen)
    2) Relevant textures, models and lighting data get sent to the graphics card and stored in graphics memory
    3) The graphics card constructs the scene and renders it, storing the result in memory
    4) The GPU applies all the various pixel shaders and what have you, storing the new result in memory
    5) The GPU calculates all the anti-aliasing and anisotropic filtering for the scene, storing the result in memory
    6) The GPU places the finished scene in the buffer for the monitor to pick up
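The steps above can be sketched as a stub pipeline (the stage names are made up for illustration; this is not a real graphics API):

```python
# Runnable stub of the per-frame steps listed above; each stage just logs its
# name instead of doing real work. Stage names are illustrative, not a real API.

def render_frame() -> list[str]:
    stages = [
        "evaluate_environment",   # 1) CPU decides what's visible
        "upload_assets",          # 2) textures/models/lighting -> graphics memory
        "construct_and_render",   # 3) card builds and renders the scene
        "pixel_shaders",          # 4) shader passes over the rendered result
        "aa_and_af",              # 5) anti-aliasing / anisotropic filtering
        "present",                # 6) finished frame into the monitor's buffer
    ]
    return [stage for stage in stages]  # stand-in for executing each stage in order

print(render_frame())
```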

    A lot of this stuff happens in parallel but you get the idea. The finished frame itself takes up almost no room in graphics memory, almost all of it is reserved for textures and modifying them for things like AA and AF.

    Bigger resolutions mean there's more detail on the screen at once (and your computer has to feed more texture data from the hard drive or system RAM to your video card, which is where the bottlenecks can happen), so you need more memory space. Supersampling-style AA works by rendering the scene at several times its size and then shrinking it down again, so that takes a lot of memory at higher AA levels. AF works in a similar way on the textures.
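The memory cost of that blow-up-and-shrink (supersampling-style) AA is easy to estimate: rendering N samples per pixel needs roughly N times the buffer space before the image is averaged back down to screen resolution (illustrative arithmetic, not any card's actual allocation scheme):

```python
# Rough buffer cost of supersampling-style AA: N samples/pixel needs roughly
# N x the memory before averaging back down to screen resolution. Illustrative only.

def aa_buffer_mib(width: int, height: int, samples: int,
                  bytes_per_pixel: int = 4) -> float:
    return width * height * samples * bytes_per_pixel / 2**20

for samples in (1, 4, 8):
    print(f"{samples}x AA at 1280x1024: {aa_buffer_mib(1280, 1024, samples):.0f} MiB")
# 1x -> 5 MiB, 4x -> 20 MiB, 8x -> 40 MiB
```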