Okay, so I'm trying to figure out the differences between ATI and NVIDIA, what makes them fast, what to look for, and MAINLY, how to tell how fast a card will be/where it will stand just by looking at its specifications.
So, to start, I must compare ATI to NVIDIA.
ATI/NVIDIA cards have the following specifications (USING (1) 5870 vs. (2) GTX 480):
Core clock: Something they both have (850 MHz / 700 MHz). ATI has a much higher core clock. What does this mean for performance?
Stream Processors: (1600 Stream Processing Units / 480 Processing Cores) Okay, so ATI seems to have more "Stream Processors," but they go by different names: Stream Processing Units (SPUs) and Processing Cores (PCs). What's the difference between the two? And why might fewer PCs outperform more SPUs?
Shader Clock: 1400 MHz - seems to be unique to Nvidia. Not sure what it's meant for, what advantage it offers, or whether ATI has a shader clock too. How does this affect performance?
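To see why unit counts and clocks can't be compared directly across vendors, here's a rough back-of-the-envelope peak-throughput sketch using the numbers above. It assumes 2 FLOPs per unit per clock (one fused multiply-add), which is a simplification; real ops-per-clock vary by architecture, and peak numbers say nothing about real-game performance:

```python
# Rough peak shader throughput: units x clock x ops-per-clock.
# Assumes 2 FLOPs per unit per clock (one multiply-add); this is a
# simplification and varies by architecture.
def peak_gflops(units, clock_mhz, flops_per_clock=2):
    return units * clock_mhz * flops_per_clock / 1000  # GFLOPS

hd5870 = peak_gflops(1600, 850)   # ATI SPUs run at the core clock
gtx480 = peak_gflops(480, 1400)   # Nvidia cores run at the shader clock
print(hd5870, gtx480)  # 2720.0 vs 1344.0
```

On paper the 5870 "wins" by a wide margin, yet the GTX 480 is the faster card in practice, which is exactly why raw SPU/PC counts mislead.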
Effective Memory clock: Also something both have (1200 MHz [4.8 Gbps] / ~3700 MHz). Nvidia seems to dominate here in MHz, but I'm not sure if the two vendors define this the same way, or what role it plays. Also, why does ATI list a Gbps figure after their clock?
Memory Size: (1GB/1536MB) All cards have this, but is it just used to store incoming graphical data so the card can process it/send it out quickly?
Memory Interface: (256-bit/384-bit) Not sure what the difference would be. All I've figured out is that it seems to be 1/4 of the memory size. Does a higher number mean better performance?
Memory Type: GDDR5. I know newer RAM is faster, but under what circumstances might GDDR3 cards be faster than GDDR5 ones?
Don't feel obligated to answer them all, if you know one thing, answer that and I'll work my way through it. I'm having a hard time finding a good article explaining it so I figured I'd try my luck here.
If anyone knows an article that will explain this to me, please link it.
Also, please explain why the GTX 480 is faster than the 5870 in terms of their specifications, so I can understand what makes a card faster.
You can't really compare clock speeds between different architectures.
For example, ATi's core clock may run at 850MHz, much higher than nVidia's core clock, but on ATi cards the shaders run at the core clock, which is a lot lower than the speed nVidia's shaders run at.

As for memory speed, I'm not sure, but a larger memory size allows more data to be stored, which is useful at higher resolutions and, IIRC, with anti-aliasing too. However, some cards are too weak to make use of all that memory, such as the 9500 GT, where some versions ship with 1GB of DDR2. As for memory types, GDDR3 cards can be faster than GDDR5 ones; it's all about the architecture. For example, the GTX 275 uses GDDR3 memory but is faster than the HD 4870, which uses GDDR5.
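The GTX 275 vs. HD 4870 point can be made concrete with the same bandwidth arithmetic. A sketch using their published memory specs (GTX 275: 1134 MHz GDDR3, double-pumped to ~2268 MT/s on a 448-bit bus; HD 4870: 900 MHz GDDR5, quad-pumped to ~3600 MT/s on a 256-bit bus):

```python
# Bandwidth = effective transfer rate per pin x bus width in bytes.
def bandwidth_gbs(effective_mhz, bus_bits):
    return effective_mhz * bus_bits / 8 / 1000  # GB/s

gtx275 = bandwidth_gbs(2268, 448)  # GDDR3, but a wide 448-bit bus
hd4870 = bandwidth_gbs(3600, 256)  # GDDR5, faster per pin, narrower bus
print(gtx275, hd4870)  # 127.008 vs 115.2
```

So the "older" GDDR3 card ends up with more total memory bandwidth thanks to its much wider bus, which is one way a GDDR3 card can beat a GDDR5 one.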