
Silly question

Last response in Graphics & Displays
December 29, 2011 6:04:22 AM

I have a very basic question: how do I know whether one video card is better than another?
I know that higher core and memory clock speeds are better, and that a 256-bit memory bus beats a 128-bit one.
And I know that what counts as "better" also depends on the intended usage.

I know I could just look up test results that have been run but that doesn’t teach me what to look for.

If anyone could clear this up for me it would be a big help.



December 29, 2011 6:07:29 AM

An easy way to check is to look at the 3DMark score they put up in a review. However, if two scores are close, you should look at reviews of the games you want to play and see how each card performs in them. Also make sure the 3DMark version is the same for both scores.

Best solution

December 29, 2011 1:44:26 PM

There are more things to look at, like the stream processors; the raw number of those has a direct impact on performance. AMD has often called these "Pixel Pipelines", while Nvidia refers to them as "CUDA Cores", and sometimes they are just called "Shaders".

Then you have the GPU core clock speed,
the memory quantity, and the memory type (GDDR3/GDDR5, etc.).

So the quantity of those stream processors/pixel pipelines/CUDA cores, the frequency they run at, the core GPU clock speed, plus the memory: that's the simple version, but there are more things at play well beyond me. Hope this helps.
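Not part of the original reply, but the shader-count point above can be made concrete with a rough sketch. The formula (two floating-point ops per shader per clock, a common rule of thumb for GPUs of this era) and the two example cards are illustrative additions; as the thread notes, this number is only comparable within one vendor's architecture:

```python
# Toy sketch: theoretical single-precision throughput from shader
# count and clock speed (2 FP32 ops per shader per cycle).
# Only meaningful WITHIN one architecture, as the example shows.
def peak_gflops(shaders: int, shader_clock_mhz: float) -> float:
    return 2 * shaders * shader_clock_mhz / 1000

# GTX 580: 512 CUDA cores at a 1544 MHz shader clock
print(peak_gflops(512, 1544))   # ~1581 GFLOPS
# HD 6970: 1536 stream processors at 880 MHz
print(peak_gflops(1536, 880))   # ~2703 GFLOPS
# Despite the lower raw number, the GTX 580 wins most game benchmarks
# of the time: shader math is NOT comparable across vendors.
```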
December 29, 2011 5:33:39 PM

So no one knows of a way other than looking up test results? Not the answer I was hoping for.

Thanks jgutz, you have been the most help thus far.
December 29, 2011 6:42:11 PM

k_blox said:
So no one knows of a way other than looking up test results? Not the answer I was hoping for.

Thanks jgutz, you have been the most help thus far.

All those things apply only when comparing products in the same family/generation. As an example, there are cards out there with GDDR3 that are much faster than some of the cards with GDDR5!
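Not from the thread itself, but the GDDR3-vs-GDDR5 point can be illustrated with a quick bandwidth calculation. Memory bandwidth depends on bus width, memory clock, and transfers per clock (GDDR3 is double data rate, GDDR5 transfers four times per clock); the specific clocks below are made-up illustrative numbers:

```python
# Bandwidth = (bus width in bytes) * memory clock * transfers per clock.
# GDDR3 does 2 transfers per clock, GDDR5 does 4.
def bandwidth_gbs(bus_bits: int, mem_clock_mhz: float, rate_multiplier: int) -> float:
    bytes_per_transfer = bus_bits / 8
    return bytes_per_transfer * mem_clock_mhz * rate_multiplier / 1000

# A 256-bit GDDR3 card at 1000 MHz:
print(bandwidth_gbs(256, 1000, 2))  # 64.0 GB/s
# A 128-bit GDDR5 card at 900 MHz:
print(bandwidth_gbs(128, 900, 4))   # 57.6 GB/s -- the GDDR3 card wins
```

So a wide bus with the older memory type can still out-deliver a narrow bus with the newer one, which is why "GDDR5" on the box is not enough on its own.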
December 29, 2011 6:58:22 PM

Best answer selected by k_blox.
December 29, 2011 7:23:35 PM

It seems in recent years both AMD and Nvidia agree on certain elements of the model number:
1st digit = generation (AMD vs. Nvidia: 4 vs. 2, 5 vs. 4, 6 vs. 5). AMD is about to release the 7xxx series, while I have seen no rumors about Nvidia's 6xx (if they even call it that).
2nd and following digits = relative performance and intended usage. The 2nd digit explained:
0-4 (Nvidia), 0-5 (AMD) = budget (light gaming + video, but silent). Not really for gaming.
5-6 (Nvidia), 6-7* (AMD) = mainstream (capable of native-resolution gaming with low-to-medium details; some need an additional power connector)
7-9 (Nvidia), 8-9* (AMD) = high-performance, most with 2-slot cooling

* AMD seems to use the mainstream (6-7) chip for the 68xx cards, but until now this is the only exception.
The difference between the three categories of cards is the chip, these being the three major chip architectures in each generation. For cards with the same chip, performance does not differ by more than 20-30%, due to frequencies and disabled units. But an overclocked slow chip will never exceed a faster stock chip. Frequencies are comparable only for the same chips (or at least for the same categories).
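The tier scheme described above can be sketched in a few lines of code. This is just the poster's circa-2011 heuristic turned into a function; the model numbers used are examples, not a complete list:

```python
# Decode generation and performance tier from a model number,
# per the heuristic described in this reply (circa 2011).
def classify(vendor: str, model: int) -> tuple:
    digits = str(model)
    generation = int(digits[0])     # 1st digit = generation
    second = int(digits[1])         # 2nd digit = performance tier
    if vendor == "nvidia":
        cutoffs = (4, 6)            # 0-4 budget, 5-6 mainstream, 7-9 high
    else:                           # amd
        cutoffs = (5, 7)            # 0-5 budget, 6-7 mainstream, 8-9 high
    if second <= cutoffs[0]:
        tier = "budget"
    elif second <= cutoffs[1]:
        tier = "mainstream"
    else:
        tier = "high-performance"
    return generation, tier

print(classify("nvidia", 560))  # (5, 'mainstream')
print(classify("amd", 6870))    # (6, 'high-performance') -- but note the
                                # 68xx exception: it uses the mainstream chip
```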

Oh, and about RAM: don't bother with the "more RAM" models. Currently there are 512 MB and 1 GB mainstream cards, but buying the 1 GB version is a waste of money, as the extra memory will go virtually unused. Or rather, you do need the extra memory for large textures, anti-aliasing, and high resolutions, but the latter two require performance first. As for large textures, they mostly come into play at high resolutions (so again, performance first) or perhaps in non-gaming usage (3D design). I don't even know why they bother; at those prices you're better off getting the high-performance cards.