I would like to ask you something:
I am using 3ds Max, and I've noticed that when working on large scenes with more than 80 plant species, each with hundreds of instances, a video card with more memory performs better in the viewport than one with merely a faster GPU.
For example: a GTX 560 with 3 GB would be faster in the viewport than a GTX 580 with 1.5 GB, because with 3 GB I can load more of the scene's elements into video memory, relieving the GPU itself.
Can you confirm this for me?
If so, then the GTX 560 3 GB would also be faster than the Quadro 4000 2 GB, right?
I understand that the Quadro has superior processing and viewport quality as well, but what about speed in FPS?
People like me who work on large projects with several towers, several areas, and many trees, shrubs, and ground covers experience a certain sluggishness, even when working with proxies optimized down to a few polygons.
I am often forced to display the plants as bounding boxes so the system doesn't freeze, and it is horrible.
One would think more video RAM equals better viewport performance, but that is not always the case.
There are quite a number of factors -- hardware APIs and application accelerators being two -- any of which can easily have a greater impact than total RAM.
It also depends on your version of 3DSM and how it interacts with the hardware . . . and then you have to consider the vendor of your card.
Years ago ATI kicked nVidia's rear end, then nVidia kicked ATI's rear end, and now it appears ATI is again whipping nVidia.
Given the level of your work, a pro card is most likely in order, as you need to take advantage of the application accelerators available to you . . . but even that depends on the hardware API (and on how the current application accelerator matches up with your version of 3DSM).
23 fps vs 44 fps is quite a difference. In the second test the difference is still there, and remember the V4800 is a lower-end piece of hardware than the HD4870 shader-wise (the V4800 has 400 shaders, the HD4870 has 800); in this case both are the 1 GB versions.
It's a matter of the type of work you are doing and how the drivers and card handle the data. More memory is useful for larger data sets (large models, big textures, etc.), but it comes down to how the program and the graphics card use that data. Workstation cards' drivers, hardware, and firmware are optimized by Autodesk and ATI/Nvidia to work well with Autodesk's software. This likely means a workstation card can use less memory for a given scene than a similar consumer card, because the drivers know what needs to be in memory and what doesn't.
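The "more memory for larger data sets" point can be made concrete with a back-of-the-envelope estimate. The texture sizes, per-vertex layout, and mesh counts below are illustrative assumptions, not 3ds Max's actual memory model, but they show how quickly a planting-heavy scene can outgrow a 1.5 GB card:

```python
# Rough sketch of viewport memory use (illustrative assumptions only,
# not how 3ds Max or the driver actually allocates video memory).

def texture_bytes(width, height, bytes_per_pixel=4, mipmaps=True):
    """Uncompressed RGBA texture size; a full mipmap chain adds ~1/3."""
    base = width * height * bytes_per_pixel
    return int(base * 4 / 3) if mipmaps else base

def mesh_bytes(vertices, triangles, bytes_per_vertex=32, bytes_per_index=4):
    """Interleaved position/normal/UV vertex buffer plus a triangle index buffer."""
    return vertices * bytes_per_vertex + triangles * 3 * bytes_per_index

# Hypothetical scene: 80 plant species, one 2048x2048 diffuse map each,
# and 80 unique proxy meshes of ~10,000 vertices / 15,000 triangles.
# Instances share the mesh data, so only unique meshes are counted.
textures = 80 * texture_bytes(2048, 2048)
meshes = 80 * mesh_bytes(10_000, 15_000)
total_mb = (textures + meshes) / (1024 ** 2)
print(f"~{total_mb:.0f} MB of video memory")  # ~1745 MB
```

Under these assumed numbers, the textures alone already exceed the 1.5 GB of a GTX 580, which is exactly the situation where either a larger frame buffer or a driver that manages memory more intelligently starts to matter.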
Your best bet is to get the best workstation card you can afford, especially if you are working with larger models; consumer gaming cards are not going to cut it. Sure, a newer consumer card might be faster than what you have now, but a workstation card will be faster still. If I had to pick between the three cards you listed, I would say the Quadro 4000 2 GB would likely give you the best experience.