lostgamer_03 :
TXAA is supported by at least 3 games at the moment. I know Assassin's Creed 3, Borderlands 2 and Batman: Arkham City do, and a lot more games are coming with it; it's fairly new, so of course it won't be supported in a lot of games yet. PhysX is supported in 10-20 games, though I'm a little unsure there.
You're only giving your opinion on how the Nvidia technologies look. What if I say they look awesome? I actually do think that, and just because we think something doesn't make it true. FXAA does improve visuals, TXAA does just as well, and PhysX adds a lot to the visuals in games; go look on YouTube, for instance. If you think they suck, too bad for you, but someone else might appreciate them.
The wattage requirements are wrong? ARE WRONG? They are provided by AMD themselves, and I think they know more about their own hardware than you do, Mr. Expert.
In all the reviews and comparisons I've read, the difference is about 5-7%, and again, I trust people giving a neutral account more than I trust you.
I own an AMD GPU and an Nvidia GPU myself. I do not care what he chooses, but don't try to mess him up with your fanboy arguments.
This is going to be a very long response, but I swear, all of it is important.
I'm not making fanboy arguments, and I'm not trying to underrate Nvidia's technologies.
TXAA is an incredible technology. However, all it does is offer MSAA-like quality without taking as much of a performance hit. AMD already takes a far smaller performance hit with MSAA than Nvidia does, so TXAA on Nvidia versus MSAA on AMD more or less equalizes the performance cost per unit of quality improvement; it's not truly an advantage. FXAA really is garbage, but not because it's an Nvidia technology. AMD has something even worse: the first version of MLAA. FXAA is garbage, but MLAA was absolute trash. MLAA 2.0 is better, but it still has nothing on even the lowest level of MSAA.
Like you said, TXAA is poorly supported only because it's new. However, it's not really that new anymore, and although support will undoubtedly improve, it is doing so far too slowly. I don't hold that against TXAA as a technology, but it does mean that TXAA isn't even relevant in most gaming situations. Having TXAA is not an advantage over AMD, and the lack of games that support it most certainly is a disadvantage in the many common situations where AMD's strength with MSAA works to great effect in curing jaggies on displays that don't have a 4K-class resolution.
My beef with FXAA is that although it reduces jaggies, it makes the screen look more like a foggy window. FXAA is more of a blur effect than true AA, and most gamers seem to agree with that from what I've heard and read.
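If it helps to see what I mean, here's a toy sketch I put together (purely my own illustration, not real FXAA or MSAA shader code; all the numbers and function names are made up): MSAA-style AA smooths an edge using extra coverage samples inside each pixel, while FXAA-style AA just blends already-rendered pixels wherever it detects high contrast, which is exactly why it reads as blur.

```python
# Toy 1D illustration (not real FXAA/MSAA code): a hard black-to-white edge.
# "MSAA-like": average several coverage samples taken inside each pixel, so
# only the pixel the geometric edge actually crosses gets a blended value.
# "FXAA-like": a post-process pass over the already-rendered pixels that
# blends any pixel whose neighbours differ strongly in brightness.

def msaa_like(edge_pos, num_pixels=8, samples=4):
    # fraction of each pixel covered by the "white" side of the edge
    out = []
    for p in range(num_pixels):
        covered = 0
        for s in range(samples):
            x = p + (s + 0.5) / samples
            covered += 1 if x > edge_pos else 0
        out.append(covered / samples)
    return out

def fxaa_like(pixels, threshold=0.25):
    # post-process: blend a pixel with its neighbours where local contrast is high
    out = list(pixels)
    for i in range(1, len(pixels) - 1):
        contrast = abs(pixels[i + 1] - pixels[i - 1])
        if contrast > threshold:
            out[i] = (pixels[i - 1] + 2 * pixels[i] + pixels[i + 1]) / 4
    return out

hard_edge = [0, 0, 0, 0, 1, 1, 1, 1]   # aliased 1-sample-per-pixel render
print(msaa_like(edge_pos=3.6))         # only the edge pixel gets partial coverage
print(fxaa_like(hard_edge))            # pixels on both sides of the edge get smeared
```

In the output, the MSAA-style version only touches the one pixel the edge actually crosses, while the FXAA-style pass smears the pixels on both sides of it. That smearing is the "foggy window" look.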
PhysX is a great technology; there's no doubt in my mind about that. However, like TXAA, it is not well supported by most games and seems to be dying off. This may change with games built on Unreal Engine 4, but that's just wishful thinking for now. Most games that use PhysX properly (most games that support it do not make good use of it) are either older DX9/10-based games or look like they are. The first title in several years that I expect to utilize it properly and extensively is the next Metro release (Last Light is the name, IIRC), and I truly hope it lives up to my expectations. PhysX doesn't suck; the current games that use it suck. That truly saddens me, as it leaves PhysX an incredibly underutilized technology.
However, Nvidia did kind of ruin it in a few ways. Most important is that although a GPU runs it better than a CPU does right now, that is largely because of the extremely outdated code in its CPU implementation. A fully modern CPU implementation from Nvidia could be so effective that it would revitalize the need for a top-end CPU, increase the value of highly threaded performance over single/dual/lightly threaded performance, and leave Nvidia's GPUs with more room for other work. Sure, it would mean that PhysX runs on AMD systems much more easily, but better TXAA support would still leave Nvidia something to argue for. It would give the whole industry more reason to support PhysX in games (especially with excellent utilization) while giving Nvidia an ethical boost in the community for improving the industry as a whole with a superior and more open PhysX.
The other major hurdle for games that properly utilize PhysX is that most of the current ones are horribly coded. For example, the Batman games (most recently Batman: AC)... those are ridiculous programming messes: horrible frame rate consistency, a poor performance-to-quality ratio, more driver incompatibilities than most other fairly modern games despite Nvidia's and AMD's best efforts, and the list goes on. This is something I hope does not happen to the next Metro release. Metro 2033 is pretty well coded (it's also already one of the best-threaded games) AFAIK, and that gives the next release a good chance of being solid in this respect too, IMO.
On to other things. DirectCompute and OpenCL features (especially some very nice advanced lighting features) run great on AMD's cards compared to Nvidia's best consumer cards, especially on AMD's Radeon 79xx cards, which just chew through GPGPU acceleration. Like PhysX and TXAA, these aren't well supported by many games, but some of the newest and best games do support them; Sleeping Dogs is a great example, IMO.
About the wattage requirements: yes, they are entirely wrong. Wattage recommendations from AMD and Nvidia (and from many other companies for other hardware) are never really accurate, because the headline wattage of a PSU is not what matters these days. What really matters are a few quality factors and the +12V power delivery. For example, a crap 550W PSU with only around 20-25 amps of rated +12V delivery will probably struggle with even a Radeon 7770 or GTX 650 Ti, yet a good 450W model such as the Antec VP-450 will run a Radeon 7850 without any trouble at all. Even better, with proper adapters it will run a Radeon 7870 and even a Radeon 7950, assuming the rest of your system isn't ridiculously power-hungry.
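To put rough numbers on that (this is my own back-of-the-envelope sketch; the amp figures are ballpark assumptions, not specs from Antec or anyone else): the power a graphics card can actually lean on is roughly the +12V amperage times 12 volts, which is why a weak "550W" unit can lose to a good 450W one.

```python
# Back-of-the-envelope sketch (my own illustration; amp figures are ballpark
# assumptions, not official specs). The GPU and CPU draw almost everything
# from the +12V rail(s), so that's the number that actually matters.

def usable_12v_watts(amps_12v, volts=12.0):
    """Rough usable power on the +12V rail(s)."""
    return amps_12v * volts

cheap_550w  = usable_12v_watts(22)   # "550W" sticker, but only ~22A on +12V
decent_450w = usable_12v_watts(32)   # quality 450W unit with ~32A combined on +12V

print(f"cheap 550W unit:  ~{cheap_550w:.0f}W available on +12V")   # ~264W
print(f"decent 450W unit: ~{decent_450w:.0f}W available on +12V")  # ~384W
```

That's why the +12V rating, not the sticker wattage, decides which card a PSU can comfortably feed.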
Something else worth noting, just in case you're not aware of it: TDP does not equal power consumption. For example, the Radeon 6970 has a 250W TDP and the GTX 580 has a 244W TDP; despite this, the 6970 generally uses less power than the 580. Another example: the GTX 680 has a 195W TDP and the Radeon 7950 has a 200W TDP. The 7950, again, uses less power.
Furthermore, you're not taking current Nvidia and AMD drivers into account in your performance comparison.
I have had many Nvidia and AMD cards over the years. My last personally owned Nvidia card for real gaming was a GTX 560 Ti 1GB, which I replaced with a Radeon 7850 2GB shortly after the 560 Ti failed. What I said does not come from the viewpoint of some AMD fan.