NVIDIA and ATI: beyond performance numbers

skaertus

Distinguished
Mar 20, 2010
I am searching for a new graphics card and I am torn between NVIDIA and ATI. I've seen various benchmarks and, as tricky as benchmarks may be, they have given me a clue as to which card is superior in terms of performance. Numbers are easy to compare.

But perhaps there is more to a video card than just numbers. I've read in some places that there are non-trivial differences between ATI and NVIDIA. For instance: ATI drivers are more buggy; NVIDIA's image quality is superior to ATI's (and vice-versa); ATI usually embraces the latest technology while NVIDIA often rebrands its old video cards; ATI is designed for gaming and NVIDIA for general computing; one is faster in some kinds of applications than the other.

NVIDIA GeForce and ATI Radeon are really very different beasts, beginning with their architectures. Comparing raw numbers is a simplistic approach and, at least to me, it doesn't seem to reveal the whole truth behind these cards.

I am particularly interested in these non-trivial differences. I haven't owned a decent video card for quite a while now (some years, in fact), so I cannot really tell the differences between the latest ATI and NVIDIA series.

I am looking for a high-end video card for general purposes, not just gaming (but also some gaming). I intend to use two 1920x1080 monitors for quite a few demanding multimedia tasks. So, I would appreciate it if someone could tell me the main differences between these two brands, apart from the ones I can easily see in traditional benchmarks. Which one has better image quality? For which kinds of applications is NVIDIA or ATI better? Which will probably have better GPGPU support? Which offers better performance with dual monitors and high resolutions? And so on...
 
This thread will probably be closed within 24 hours, so I hope you get what you need before then. The reason is that many of the answers will be driven more by emotion than fact, and it will degenerate into the usual bickering. The fact is, yes, it simply is a numbers game. ATI has had a history of driver issues, but nVidia has succumbed to such problems also (note the recent fan issue). Both also have a habit of tweaking each new driver release specifically to fit the common benchmark scripts used by review sites.

The rebranding issue is also a non-issue.... is the 2010 Dodge Dakota substantially different from the 2009? Is Windows 7 anything more than a giant bug-and-interface fix, the equivalent of a Windows Vista SP3? The only issue is: is it faster or slower than the old version? If it is, it needs a new name, and whether they add an "A" to the end of the old model number or give it a new one doesn't really matter to me. OEM buyers, OTOH, seeking to avoid having their customers confused about getting an older version, generally insist on new names.

One real difference right now is that ATI's new 5xxx-series cards really are abysmal at 2D graphics.... this is supposed to be fixed in the 10.4 driver. If 3D and PhysX are things you are interested in, then nVidia's position here is one you have to consider. These are individual decisions, and there is no right or wrong answer.

There are other considerations.... How much power does it draw? How much heat does it put out? Is a 5% performance increase worth a 10% increase in price? Is a 10% increase worth a 20% increase in price?

Again, these are possibly important considerations to each purchaser, but ones which you will see the "brand faithful" flip-flop on depending on how the answers "fit their faith".

Simply put, look at the numbers.... look at the numbers in the specific tasks you want to do on your PC, and choose accordingly after weighing the other factors that might be important to you in making the final decision that works best for you.
 

AMW1011

Distinguished
Jack is pretty damn close to spot-on.

Do realize that 2D performance is rarely a bottleneck, but if you are one of the very few who suffer from the 2D performance issue, nVidia may be better right now, though ATI will fix it in a driver update.

PhysX is also pretty useless right now and is largely dead tech, so I wouldn't worry about it.

ATI will also have 3D eventually, but nVidia has nothing to match Eyefinity quite yet.

As far as picture quality goes, they are the same.

As far as drivers go, they are basically the same, though nVidia tends to have slightly better release drivers; after about a month they even out. As Jack noted, ATI's driver problems are completely in the past (thanks to the AMD acquisition, they are a bit more militant about drivers than old ATI was).

The only things separating ATI and nVidia right now, besides performance and prices, are Eyefinity and 3D Vision, which you likely won't use.
 

skaertus

Distinguished
Mar 20, 2010
Thank you, your answers have been very helpful. I sincerely hope this thread does not deteriorate into fanboy discussions (I guess nobody needs biased opinions...).

As far as I understand, it will not matter much whether I choose NVIDIA or ATI, as the differences in architecture will not affect most tasks, although there might be some differences in performance depending on the specific task. I also understand that the image quality will be about the same. It looks like neither NVIDIA nor ATI cards keep those small hidden secrets that reveal themselves only after we finally buy one of them and make us realize: "If I had known this before, I would have bought the other one..."

I looked at the article on 2D graphics, and it looks like the ATI 5xxx series suffers a little in comparison with NVIDIA and previous-generation ATI cards. I guess these cards are not optimized for 2D graphics, but how does that impact the use of the computer? Will I notice any difference when watching 1080p videos while doing other tasks? Will it affect the responsiveness of the graphical interface if I use two 1920x1080 monitors? As I intend to use lots of "common" applications which display 2D graphics, I wonder how the video card will affect their performance at high-resolution settings.

I also took a look at Eyefinity. This technology allows ATI cards to support up to six monitors. Does it offer any advantage over NVIDIA cards if I use only two monitors?

3D and PhysX features of NVIDIA cards also look nice, but I am not sure if I will ever make use of them.
 

AMW1011

Distinguished


Yes, the 5xxx series will play movies just fine, and the driver fix will come eventually. It will also perform great in common apps, as the GPU will be mostly idle and barely used in them.

As for Eyefinity, it's a niche market like 3D Vision and is just for the few who want it.

Don't get sucked in by PhysX; it's a marketing tool that gives very little, and it's dead tech now that it has been replaced by DX11 compute shaders. I.e., it's a non-issue.
 

notty22

Distinguished
If you have an Nvidia card, you can play games with PhysX enabled. There is already one hit game, Batman, that made major use of it. Others might be coming.
Don't get sucked in by features you might never use. Eyefinity and/or 3D are two that many will never use. PhysX is only an option in a configuration menu away from being 'tried'. Nvidia is still promoting it with their next-gen cards. It's not dead. Dead tech is something like Hybrid CrossFire.
 

AMW1011

Distinguished
I don't want to argue, so I will just explain my way of thinking:

PhysX is proprietary software that only works on nVidia GPUs. When enabled, it allows the game to process physics on the GPU. It does not work on ATI cards, which make up almost half of the gaming market. DX11 has something built into it called compute shaders; these allow any DX11 GPU to process work that the CPU would normally handle, like physics and AI. It has the potential to do more than PhysX and also works in any DX11 title with a DX11 card, like the future cards from both ATI and nVidia. This means it can reach a larger audience of gamers.

Some may post links where Tom's applauds PhysX in Batman: AA but does not applaud DX11 in DiRT 2. This is a terrible comparison, as PhysX has been around for years while DX11 has been around for months, and PhysX has only a single title where the benefits are even remotely applaud-able, the aforementioned Batman: AA. DX11 is also a full API version, whereas PhysX does only a single thing and is therefore not nearly as important.

You can answer this yourself: if you were a video game developer, would you focus on implementing PhysX, which is only marketable to half the market and has only a single purpose, or would you support DX11 compute shaders, which should already be built into your DX11-enabled game, reach a larger market, and do more than just process physics on a GPU?
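
To make that concrete, here is roughly what "processing physics on the GPU" looks like. This is just a toy OpenCL C kernel I sketched (purely illustrative, not from any real game; a DX11 compute shader in HLSL would look much the same): one GPU thread integrates one particle, and thousands run at once.

__kernel void integrate_particles(__global float4 *pos,
                                  __global float4 *vel,
                                  const float dt,
                                  const uint n)
{
    uint i = get_global_id(0);               /* one thread per particle */
    if (i >= n)
        return;

    const float4 gravity = (float4)(0.0f, -9.81f, 0.0f, 0.0f);

    vel[i] += gravity * dt;                  /* accelerate */
    pos[i] += vel[i] * dt;                   /* move */

    if (pos[i].y < 0.0f) {                   /* crude floor collision */
        pos[i].y = 0.0f;
        vel[i].y *= -0.5f;                   /* bounce with damping */
    }
}

The same kernel runs on a Radeon or a GeForce, which is the whole point: a developer who writes this reaches the entire market, while PhysX code reaches only half of it.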
 
Guest

Guest
Yep, DirectCompute (part of DX11) and OpenCL will make PhysX pointless.

http://en.wikipedia.org/wiki/DirectCompute

http://en.wikipedia.org/wiki/OpenCL

Two languages that can talk to any DX10/DX11-class GPU.

Quote->

On March 26, 2009, at GDC 2009, AMD and Havok demonstrated the first working implementation for OpenCL accelerating Havok Cloth on AMD Radeon HD 4000 series GPU.

<-End Quote

So that's Havok physics being GPU-accelerated. Of course, there are many types of physics (cloth, fluid, rigid body, ...), but that's just to prove the point. YES, the topic should be closed lmao. Whether it's easier to code for CUDA or for the DX11 options depends on you, if that's what you mean by other usage. Video encoding is already very fast on both sides, so that's not the factor to decide on, rofl. Just wait until the 26th, when the Fermi benchmarks surface.

For high resolutions, do you mean 2560x1600? If yes, I would wait for the 4GB HD 5970 running at 5870 speeds from Sapphire; that will be some #$%^*& card, rofl. Now I just need a 23-inch OLED @ 2560x1600, true 240Hz, 1,000,000:1 contrast, with 2 or ~0ms latency, that consumes less than 40W. I will keep dreaming; maybe it will happen one day?
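
And to show the vendor-neutral part in practice, here is a minimal C host-side sketch (error handling mostly omitted, and the kernel-loading steps are only indicated in comments): the exact same calls find and drive whatever GPU is installed, GeForce or Radeon.

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platform;
    cl_device_id device;
    char name[256];

    /* Ask the OpenCL runtime for the first platform and its first GPU;
       nothing here is NVIDIA- or ATI-specific. */
    clGetPlatformIDs(1, &platform, NULL);
    clGetDeviceIDs(platform, CL_DEVICE_TYPE_GPU, 1, &device, NULL);
    clGetDeviceInfo(device, CL_DEVICE_NAME, sizeof(name), name, NULL);
    printf("Running on: %s\n", name);   /* GeForce or Radeon, same code */

    cl_context ctx = clCreateContext(NULL, 1, &device, NULL, NULL, NULL);
    cl_command_queue queue = clCreateCommandQueue(ctx, device, 0, NULL);

    /* From here you would clCreateProgramWithSource() with your kernel
       text, clBuildProgram(), clCreateKernel(), set the arguments, and
       clEnqueueNDRangeKernel() -- identical steps on both vendors. */

    clReleaseCommandQueue(queue);
    clReleaseContext(ctx);
    return 0;
}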
 

4745454b

Titan
Moderator
A comment as far as drivers go. Cards I've used in the recent and not-so-recent past include the TNT2, 9600 Pro, 9700 Pro, X1800 XT, and the 8800GS. That makes two Nvidia cards and three ATI/AMD cards. Want to know how many driver issues I've had? None that I can remember. I've had odd random "Windows has recovered" errors on both the X1800 XT and the 8800GS, but nothing that would be a problem. Also, from an end-user standpoint, there aren't really any differences. I prefer the layout of CCC to whatever Nvidia uses, but that's probably because I'm more familiar with CCC.

I try to go with whatever card will give me the best frame rates for as little money as possible (or the best price/performance ratio). Currently that is an AMD card from either the 4xxx or 5xxx series. The GTX cards are just too expensive for what they give, unless you use PhysX.
 

wh3resmycar

Distinguished
GPU-accelerated Havok physics surfaced way back in 2005 (if I remember correctly). It's still nowhere to be seen in-game. Hype was built up during the 2nd quarter of 2008 when the 4800s were coming up; still none.

Any news that a certain game will use OpenCL physics? None. And if a game company did announce something like that this year, it would take 2-3 years of development time, which would defeat the purpose of buying a 5800 solely because you'd be needing OpenCL physics.

The same goes if you're looking at DX11. If I had the money right now, I'd be buying a 5800 not because of the DX11 feature set, but because they're freakin' fast. End of story.

IMHO, a GPU purchase is only good if it has a favorable price/performance ratio first, then power efficiency. I could never put a GTX 285 in my rig due to PSU constraints, but a 5850 would do nicely, since it consumes a remarkably similar amount of power to a GTX 260.
 
I used to have issues with ATI cards, major issues, but you cannot really say that the ATI of the past is the same company today. They have gotten much better with their drivers, for one. It used to be true that ATI cards had slightly better image quality than nVidia cards, but that's not much of an issue now. Instead, it's really more a matter of opinion about how each company tries to "optimize" image quality.

NVidia is certainly trying to take the lead on the GPGPU front, especially with the evolution of CUDA. They have been much more persistent in their efforts compared to their rival ATI, as ATI kind of dropped it and only recently tried to get the ball rolling again with their own GPGPU efforts. That may be of interest to you if you're a programmer with a specific application in mind, but as a consumer it's like "big whoop". I believe that efforts to promote GPGPU on more open standards, like DirectCompute, will eventually win out against more closed standards, like CUDA.

As for OpenCL physics, there has actually been some news about it:

http://www.amd.com/us/press-releases/Pages/amd-announces-new-levels-of-realism-2009sept30.aspx

But yes, it will be at least a year before we see it tacked onto a game, and 2-3 years before we see a game developed from the beginning with Bullet physics in mind.
 
Basically, for the normal computer usage the average person might encounter, almost any video card you might buy will be more than capable. Right now, ATI cards tend to give significantly better performance for the money, are more advanced in terms of features, and are much more power efficient. The only Nvidia cards above $100 that are worth considering are the soon-to-be-released GTX 470 and 480. If those are in your price range, then just chill for the moment. Very soon there will be an insane number of articles comparing those cards in detail vs. the HD 5850/5870, so you can just read them yourself and make up your own mind. If $300-350+ is beyond what you were looking to spend, then just get an HD 5770, as it is very good value for the money.
 

skaertus

Distinguished
Mar 20, 2010
Thank you, your feedback has been very helpful.

I've learned that the next versions of both Internet Explorer and Mozilla Firefox will implement GPU acceleration, but I have yet to find out whether they will use CUDA, OpenCL, DirectCompute, or whatever. As GPGPU technologies evolve, it seems to me video cards will become more and more important to general computing. However, I have yet to find out how NVIDIA GeForce and ATI Radeon handle those GPGPU applications, as I haven't seen any tests so far. GeForce will be the obvious choice for CUDA applications, but DirectCompute and OpenCL will work with both cards. And I still don't know how much power those GPGPU applications will actually use; will a Radeon 5970 be that much faster than a 5750, for instance, in these kinds of applications? Does anybody have any clue about this?
 

AMW1011

Distinguished


GPU acceleration is a long way from being useful.

That said, whichever of the two companies you go with, it will work in one way or another. OpenCL is the most likely to succeed. Honestly, I wouldn't worry about it.
 

4745454b

Titan
Moderator
I doubt IE and FF will implement something that only one company's cards can handle. If they used CUDA, Intel and AMD GPUs wouldn't be able to use it. That would be a huge win for Nvidia, but no one in their right mind would shut out such a large group.

As for how much card you need, I have no idea. If we are talking only web browsing, you don't need much for that; it's already pretty fast. If they are able to offload some things to the GPU, then it won't take much of one to make it faster.
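
If you want a clue about the 5970 vs. 5750 question, you can at least query what each card exposes to GPGPU code. Here's a rough C sketch using standard OpenCL queries (compute units times clock is only a crude first-order proxy for throughput, and memory bandwidth matters too, so treat the output as a hint, not a benchmark):

#include <stdio.h>
#include <CL/cl.h>

int main(void)
{
    cl_platform_id platforms[4];
    cl_uint nplat = 0;
    clGetPlatformIDs(4, platforms, &nplat);

    /* Both the NVIDIA and the ATI/AMD drivers show up here as
       OpenCL platforms; we list every GPU each one exposes. */
    for (cl_uint p = 0; p < nplat; p++) {
        cl_device_id devs[4];
        cl_uint ndev = 0;
        if (clGetDeviceIDs(platforms[p], CL_DEVICE_TYPE_GPU,
                           4, devs, &ndev) != CL_SUCCESS)
            continue;

        for (cl_uint d = 0; d < ndev; d++) {
            char name[256];
            cl_uint units = 0, mhz = 0;
            clGetDeviceInfo(devs[d], CL_DEVICE_NAME,
                            sizeof(name), name, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_COMPUTE_UNITS,
                            sizeof(units), &units, NULL);
            clGetDeviceInfo(devs[d], CL_DEVICE_MAX_CLOCK_FREQUENCY,
                            sizeof(mhz), &mhz, NULL);
            printf("%s: %u compute units @ %u MHz\n", name, units, mhz);
        }
    }
    return 0;
}

On a highly parallel workload, a 5970 should scale roughly with those numbers over a 5750; on a task that barely touches the GPU, it won't matter.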
 


Is a 10% performance increase worth a 50% increase in price?

That's the question you're really looking at with Nvidia cards. They absolutely do not compete in terms of price. Otherwise, I've got to agree with everything you said.
 

LisaS

Distinguished
Mar 22, 2010
Hi, Jack. My name is Lisa Stapleton, and I'm doing some research on the topic of ATI vs. NVIDIA, and your comments were the most cogent, understandable, and just plain humorous (in the right places, of course) that I've seen. I was wondering if you might have about 15 minutes to be interviewed for my research, which is for BlueShiftIdeas, a San Francisco-based boutique research firm? My email is Removed. I could call you, so it will be on our dime. Honestly, there's only one real benefit to you of participating: you get a copy of the report, which is usually kind of interesting.

Sincerely,
Lisa Stapleton

PS: This part made me laugh so hard I almost choked on my Diet Coke. That will teach me to drink and read at the same time:

The rebranding issue is also a non-issue.... is the 2010 Dodge Dakota substantially different from the 2009? Is Windows 7 anything more than a giant bug-and-interface fix, the equivalent of a Windows Vista SP3? The only issue is: is it faster or slower than the old version? If it is, it needs a new name, and whether they add an "A" to the end of the old model number or give it a new one doesn't really matter to me. OEM buyers, OTOH, seeking to avoid having their customers confused about getting an older version, generally insist on new names.