Why does ATI have such good specs compared to Nvidia?

November 26, 2008 3:14:58 AM

This is probably a common question, but I couldn't find an answer. Why are ATI's video card specs so much better than Nvidia's when their performance isn't any better? There are ATI cards with up to 800 stream processors, while Nvidia only goes up to 240 stream processors. ATI cards can have up to 3600 MHz GDDR5, but Nvidia cards only have up to 2400 MHz GDDR3. ATI also seems to usually put more VRAM on their cards. So why are ATI cards only roughly equal to Nvidia cards?
November 26, 2008 3:17:22 AM

Specs don't make the card. The technology each card uses makes more of a difference than the numbers you see.
November 26, 2008 3:30:18 AM

Same reason Pentium 4s weren't / aren't the fastest CPUs in the world.

Technical specifications are meaningless when you're comparing two entirely different architectures.
November 26, 2008 4:46:23 AM

Dougx1317 said:
This is probably a common question...


Yes it is, and it's easily researchable: put "ATi SPU architecture vs nVidia SPU" into Google and you get a bazillion such discussions.
Also, you're mistaken; both companies have the same maximum memory, 1GB per GPU.
November 26, 2008 5:01:35 AM

TheGreatGrapeApe said:
Yes it is, and it's easily researchable: put "ATi SPU architecture vs nVidia SPU" into Google and you get a bazillion such discussions.
Also, you're mistaken; both companies have the same maximum memory, 1GB per GPU.

Actually, there's one anomaly:
Powercolor HD 4850 2GB
Just when you thought the 8500GT 1GB was ridiculous...
November 26, 2008 5:02:31 AM

Dougx1317 said:
There are ATI cards with up to 800 stream processors, but Nvidia only has up to 240 stream processors.


I suppose they aren't the same thing internally, even though both are called "stream processors". Think of it this way: one ATI stream processor is not as powerful as one Nvidia stream processor.
They are just two different things.
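To put rough numbers on that, here's a minimal sketch of theoretical shader throughput. The per-SP FLOP counts and clocks are assumptions taken from commonly published specs (ATI counts each lane of its 5-wide VLIW units as a "stream processor", while Nvidia's SPs are scalar but clocked much higher), so treat it as an illustration, not a benchmark:

```python
def peak_gflops(stream_processors, flops_per_sp_per_clock, shader_clock_mhz):
    """Theoretical peak = SP count * FLOPs per SP per clock * clock (MHz -> GFLOPS)."""
    return stream_processors * flops_per_sp_per_clock * shader_clock_mhz / 1000

# HD 4870: 800 SPs, one MAD (2 FLOPs) each per clock, 750 MHz core clock
hd4870 = peak_gflops(800, 2, 750)    # 1200.0 GFLOPS
# GTX 280: 240 SPs, one MAD (2 FLOPs) each per clock, 1296 MHz shader clock
# (the often-quoted 933 GFLOPS figure also counts a dual-issued MUL)
gtx280 = peak_gflops(240, 2, 1296)   # 622.08 GFLOPS

print(f"HD 4870: {hd4870:.0f} GFLOPS, GTX 280: {gtx280:.0f} GFLOPS")
```

The headline SP count triples in ATI's favor, but each SP does far less work per clock and runs slower, which is why real-game performance lands much closer than the spec sheet suggests.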
November 26, 2008 5:21:46 AM

KyleSTL said:
Actually, there's one anomaly:
Powercolor HD 4850 2GB
Just when you thought the 8500GT 1GB was ridiculous...


Well, that would be like the GF9800GTS 2GB anomaly; it's not the standard card, in the same way the Quadro FX5800 4GB isn't.

Anyhow, for the sake of this discussion about the differences between the two top gaming cards, they have 1GB each. There are of course other silly outliers out there, but for the HD4870 and GTX280 the memory is equal in size, just very different in implementation.


November 26, 2008 4:03:17 PM

Quote:
Also you're mistaken, both companies have the same maximum memory


I meant that ATI cards generally have a higher memory to price ratio. I just bought an HD2600XT with 512MB of GDDR3 for only $25 AR. What Nvidia card can you get with 512MB of GDDR3 for anywhere near that price?

And why doesn't Nvidia use GDDR4 or GDDR5 memory?
November 26, 2008 4:09:02 PM

Due to the higher latencies of GDDR4, you'll find it's less than useless.
November 26, 2008 4:11:53 PM

Again, faster does not necessarily mean better. Look at older dual cores vs. Core 2 Duo: a dual core at 2 GHz per core is not as "fast" as an E6300 clocked at 1.8 GHz per core. It all comes down to architecture. The memory bandwidth may be higher in the Nvidia cards: 256MB on a 512-bit bus will beat the crap out of 256MB on a 256-bit bus, even if the 256-bit memory is clocked at nearly twice the speed of the 512-bit. In the computer world, raw speed is not everything.
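The bus-width arithmetic behind that claim is simple to sketch (the clocks below are hypothetical, chosen only to illustrate the "near 2x" case):

```python
def bus_bandwidth_gbps(bus_width_bits, effective_clock_mhz):
    """Bandwidth in GB/s = bytes moved per transfer * effective transfer rate (MHz)."""
    return (bus_width_bits / 8) * effective_clock_mhz / 1000

wide = bus_bandwidth_gbps(512, 1000)    # 512-bit bus at 1000 MHz -> 64.0 GB/s
narrow = bus_bandwidth_gbps(256, 1800)  # 256-bit bus at near-2x clock -> 57.6 GB/s
print(wide, narrow)
```

Even clocked almost twice as fast, the narrower bus still comes up short, because bandwidth is the product of width and clock, not clock alone.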

Hope this helps!
November 26, 2008 11:24:34 PM

Dougx1317 said:

And why doesn't Nvidia use GDDR4 or GDDR5 memory?


Well, first of all you need to take up die space to support it, and there's not enough room on a G200 die for the TMDS & RAMDACs, so GDDR4/5 support was likely not a high priority once they decided on the 512-bit route.

GDDR4/5 isn't important on its own, and it's not any more latency-prone than the equivalent GDDR3; it's a question of use and needs.

GDDR4 was expensive, and GDDR5 was unproven when they designed the G200. ATi had pushed the memory envelope since the R9800Pro DDR-II days, while nV's last push was the failed FX5800.

There are reasons not to use GDDR4, but GDDR5 would've been a good choice; however, they may not have known or expected that.

With GDDR3 dead-ending around the 1200 MHz mark, GDDR5 is the way forward, but right now it's about bandwidth, not just raw speed.
If you think about how it works, the GDDR5 on the HD4870 actually runs at a slower base clock than the GDDR3 on many GTX260s (900 vs 1000 MHz), but the fact that one is quad-pumped rather than just double-pumped, and that the other bus is 448-bit vs 256-bit, plays a big role in the equation.
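Plugging those quoted numbers into the bandwidth arithmetic shows how close the two cards end up despite the very different memory specs (a sketch using the clocks and bus widths stated above):

```python
def effective_bandwidth_gbps(base_clock_mhz, transfers_per_clock, bus_width_bits):
    """GB/s = base clock * transfers per clock * bytes per transfer."""
    return base_clock_mhz * transfers_per_clock * (bus_width_bits / 8) / 1000

# HD 4870: 900 MHz GDDR5, quad-pumped, on a 256-bit bus
hd4870 = effective_bandwidth_gbps(900, 4, 256)    # 115.2 GB/s
# GTX 260: 1000 MHz GDDR3, double-pumped, on a 448-bit bus
gtx260 = effective_bandwidth_gbps(1000, 2, 448)   # 112.0 GB/s
print(hd4870, gtx260)
```

Quad-pumped GDDR5 on a narrow bus and double-pumped GDDR3 on a wide bus land within a few percent of each other, which is exactly the "equal result, different implementation" point.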

Specs on their own mean little, it's how they add up and in what situation that matters most.

That's why it's best to look at the initial launches of the R600 and G80 and see how the SPUs compare (not just composition and count, but the fact that one is clocked much faster), and then note that you also need the other components, like TMUs and ROPs, as well as the memory interface properties, to make up the performance of a specific card in a given app.

There is no easy answer other than to research the architecture.

Also think of this: nV may outperform per SPU or by some other measure, but per mm² of silicon the ATi cards are way out front. So which matters to a producer and a consumer (if mm² = yield = cost/price)?
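That per-mm² point can be made concrete with rough numbers. The die areas below are approximate figures from contemporary reviews and the GFLOPS values are theoretical peaks, so this is an illustration of the yield/cost argument, not a precise measurement:

```python
def gflops_per_mm2(peak_gflops, die_area_mm2):
    """Theoretical compute density: peak GFLOPS per mm^2 of die area."""
    return peak_gflops / die_area_mm2

# Approximate die sizes: RV770 (HD 4870) ~256 mm^2, G200 (GTX 280) ~576 mm^2
hd4870 = gflops_per_mm2(1200, 256)  # ~4.7 GFLOPS/mm^2
gtx280 = gflops_per_mm2(933, 576)   # ~1.6 GFLOPS/mm^2
print(hd4870, gtx280)
```

Smaller dies mean more chips per wafer and better yields, so even with lower per-SP performance the denser design can win on cost per unit of performance.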
November 27, 2008 12:05:04 AM

Any news on the tech behind the next DDR standard, DDR6, Ape?
November 28, 2008 2:10:32 PM

I doubt there'll be DDR4 in desktops, seeing as there's no huge performance gain compared to DDR3, so DDR5 is the perfect solution for motherboards.
November 28, 2008 2:42:53 PM

cyber_jockey said:
I doubt there'll be DDR4 in desktops, seeing as there's no huge performance gain compared to DDR3, so DDR5 is the perfect solution for motherboards.


that's my thinking as well
November 28, 2008 3:35:51 PM

Quote:
*bangs head off of desk*

Quote:
You mean GDDR6.

We haven't yet reached DDR4


Read it again and think about your posts.


This

GDDRX != DDRX
a b U Graphics card
November 28, 2008 4:03:05 PM

Gee, I wonder what the G is?
November 28, 2008 7:23:27 PM

Right now, it would appear that tighter latencies improve memory performance more than higher clock speeds do.