Solved

GTX 680 vs GTX 570 SLI

May 9, 2012 4:27:22 AM

Should I wait for a 680 or go with 570 SLI? I prefer Nvidia, but I am open to other solutions.


APPROXIMATE PURCHASE DATE: Within 2 weeks. (Flexible; I can wait for a 680.)

USAGE FROM MOST TO LEAST IMPORTANT: Gaming (modern games @ Ultra), movies, Photoshop CS5.1

CURRENT GPU AND POWER SUPPLY: Neither currently purchased.

OTHER RELEVANT SYSTEM SPECS: CPU i5-3570K (OC @ 4.5 GHz)

PREFERRED WEBSITE(S) FOR PARTS: newegg.ca, amazon.com, or any website that accepts a Canadian billing address (not newegg.com)

OVERCLOCKING: Yes

SLI: Maybe

MONITOR RESOLUTION: 2160x3840 (two 1080p monitors)

Thanks in advance

Best solution

May 9, 2012 4:37:25 AM

If you can wait, then it would be much better to have the 680. Even though two 570s would be a great setup, if you start out with one 680 there's always the possibility of adding another later on. Wouldn't that be great!
May 9, 2012 1:14:32 PM

I would rather wait for the 680 to be in stock and buy it.
May 9, 2012 1:34:57 PM

Yes, the 680, certainly. Most benchmarks show that it outperforms 570 SLI (not by much), and with it you will have all the new features like TXAA and GPU Boost. I think that's worth the wait (I am also waiting).

And as @inzone said, in the future you can even go with 680 SLI, a much better option.

About your resolution:
2160x3840 (two 1080p monitors)

How is that resolution possible with 2x 1080p?

I can see 3840x1080 and even 2160x1920, but 2160x3840? How?

May 9, 2012 2:22:59 PM

Definitely consider getting the 680. I got mine and absolutely love it; I went from 560 Ti SLI to a 680 and gotta say it performs much smoother and also faster. You won't be disappointed.
Think of the power, heat and noise you will also reduce by going with the 680; the pros are really worth it.
May 9, 2012 2:36:24 PM

+1 The GTX 680 is a future-proof beast.
May 9, 2012 2:51:14 PM

ricardois said:
Yes, the 680, certainly. Most benchmarks show that it outperforms 570 SLI (not by much), and with it you will have all the new features like TXAA and GPU Boost. I think that's worth the wait (I am also waiting).

And as @inzone said, in the future you can even go with 680 SLI, a much better option.

About your resolution:
2160x3840 (two 1080p monitors)

How is that resolution possible with 2x 1080p?

I can see 3840x1080 and even 2160x1920, but 2160x3840? How?

Well, I think 2x2 1920x1080 monitors = 2160x3840. But with the 680, as far as I know, you can only do 3x1+1 surround.
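
A quick sketch of the panel arithmetic (plain Python; the helper name is mine, not something from the thread). Two 1080p panels only ever give 3840x1080 or 1920x2160; you need a 2x2 grid of four panels to reach the 3840x2160 (a.k.a. 2160x3840) figure quoted above:

```python
# Hypothetical helper: combined desktop resolution for a grid of identical
# panels, ignoring bezels.
def combined_resolution(panel_w, panel_h, columns, rows):
    return panel_w * columns, panel_h * rows

print(combined_resolution(1920, 1080, columns=2, rows=1))  # side by side: (3840, 1080)
print(combined_resolution(1920, 1080, columns=1, rows=2))  # stacked: (1920, 2160)
print(combined_resolution(1920, 1080, columns=2, rows=2))  # 2x2 grid: (3840, 2160)
```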
May 9, 2012 2:56:05 PM

2560x1600 is closer to two times 1080p. 2160x3840 (i.e. 3840x2160) would be a 4K resolution and has four times as many pixels as 1080p.

Two times 1080p (with two monitors) is not 2160x3840; it's either 1080x3840 (side by side) or 2160x1920 (stacked). 2160x3840 would be four 1080p monitors in a 2x2 square: two side by side, above another two side by side.

Also, I vote for the 7970 over the 680. It's in stock all the time and its VRAM capacity isn't a limiting factor on its performance, nor will it be. The 680's 2GB is a limiting factor in some games at certain resolutions, quality settings, and AA/AF levels right now, so how bad will its VRAM capacity bottleneck be in one or two years? If you must have a 680, then I'd wait for a 4GB-per-GPU model. It'll last longer, especially with a resolution of 4MP or more.

Also, there are maybe one or two games where there is a noticeable difference between the 680 and the 7970. Metro 2033 and Batman are the only two that come to mind, and the 7970 wins in Metro 2033.
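
To make the capacity argument a little more concrete, here is a toy VRAM budget. Every number in it is an illustrative assumption rather than a measurement; the only point is that textures are largely resolution-independent while render targets and MSAA samples scale with pixel count, which is why resolution and AA settings together decide when 2GB gets tight:

```python
# Toy VRAM budget (illustrative assumptions only): a fixed texture/geometry
# budget plus render targets that scale with pixel count and MSAA samples.
ASSUMED_TEXTURES_MB = 900  # made-up, game-dependent figure

def render_targets_mb(width, height, msaa_samples=4, buffers=2):
    # colour + depth at 4 bytes each, per sample
    return width * height * 4 * buffers * msaa_samples / (1024 ** 2)

for name, (w, h) in [("1920x1080", (1920, 1080)),
                     ("2560x1600", (2560, 1600)),
                     ("5760x1080", (5760, 1080))]:
    rt = render_targets_mb(w, h)
    print(f"{name}: ~{ASSUMED_TEXTURES_MB + rt:.0f} MB total, of which ~{rt:.0f} MB render targets")
```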
May 9, 2012 4:33:07 PM

monsta said:
In the reviews and benches the 680 with 2GB of VRAM still beats the 7970 with 3GB in many games; it only trails the 7970 by less than 2 FPS in Metro @ 5760x1080, not a significant difference.
http://www.tomshardware.com/reviews/geforce-gtx-680-sli...


It shows weakness is what I said, not that it's no good right now. Also, at 5760x1080, it's a 5 FPS difference (not a small difference when it's 40 FPS compared to 35 FPS) if you use AA. Unless you don't like AA, there's no reason not to include that in the measurements and every reason to include it. I also said that it would probably take a few years for the 680's memory problems to become serious. The problem it has right now is that it can't handle AA as well as the 7970 in some games at some resolutions and quality settings.

The 680 has a VRAM capacity bottleneck (its VRAM bandwidth is a little low for its GPU performance, but it's acceptable, and it can't make performance drop like a rock into unplayable territory the way a capacity problem can). Some games, such as Metro 2033 and BF3, can push past 1GB of VRAM usage at 1080p. Going to higher resolutions does not increase VRAM usage linearly from this point (or else the 680 would be screwed so badly that Nvidia wouldn't have even released a 2GB version and would have only done 4GB), so the 680 is okay at high resolutions in some games if you don't overload the AA and quality settings. HardOCP did a test at 5760x1080 and had to lower AA in some games (even down to just FXAA or no AA some of the time) because the 680s don't have enough VRAM.

The 680 clearly has a slightly more powerful GPU for 32 bit math (gaming right now is almost purely 32 bit math). However, the 7970 has almost 6 times more 64 bit math performance than the 680 (not a joke, it's a little over 5.5 times more). Games are starting to incorporate more 64 bit math than ever, although that trend has not come close to peaking yet. Civ 5 is a start of it, but it only has a little (not so much that the 680 can't handle it). Even the 580 has more 64 bit math performance than the 680 (almost double).
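
For what it's worth, the gap being described comes from the double-precision rate each chip exposes: GK104 GeForce cards run FP64 at 1/24 of their FP32 rate, Tahiti at 1/4 (and GF110 GeForce cards at 1/8). A rough sketch of that arithmetic, treating the core counts and clocks as approximate launch figures, so the exact multiple depends on what you plug in:

```python
# Theoretical peak throughput: 2 FLOPs per core per clock (FMA), scaled by the
# architecture's FP64 rate. Core counts and clocks are approximate launch specs.
def peak_gflops(cores, clock_ghz, fp64_rate=1.0):
    return 2 * cores * clock_ghz * fp64_rate

gtx680_fp64 = peak_gflops(1536, 1.006, fp64_rate=1 / 24)  # GK104: 1/24 FP64 rate
hd7970_fp64 = peak_gflops(2048, 0.925, fp64_rate=1 / 4)   # Tahiti: 1/4 FP64 rate
print(f"GTX 680 ~{gtx680_fp64:.0f} GFLOPS FP64, HD 7970 ~{hd7970_fp64:.0f} GFLOPS FP64")
```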

Kepler, in its current form, is best skipped if you want your computer to last longer at maxed-out settings in games without an upgrade. The only two reasons that the 680 beats the 7970 in Civ 5 are that Civ 5 doesn't use a whole lot of 64 bit math and that the 680's 64 bit math is done on cores separate from the regular 32 bit cores (Kepler cores in the consumer cards can only do 32 bit math. They are optimized for it, hence their advantage in 32 bit math over GCN and all other GPU architectures). The 64 bit cores are different hardware, so using them does not slow down 32 bit math much at all (it only slows it down a little because of the 680's low memory bandwidth).

The next generation of Kepler might have more 64 bit cores per card, alleviating this problem, and more memory capacity and bandwidth. However, in its current iteration, it is not the best way to go, in my opinion. The 670 is a better choice than the 680 solely because it is slower but has the same memory capacity, so its capacity will not be as much of a bottleneck. However, it still has the low 64 bit math problem. In fact, even the 7850 and 7870 beat the 680 in 64 bit math (although, since they, like Kepler, use non-compute-oriented GPUs, they don't come close to the 7950 and 7970).

The Kepler Quadro and Tesla cards will have incredible 64 bit math, but Nvidia decided that they don't want consumers to have it because they want you to pay thousands of dollars for overpriced professional hardware to get it.

Next time, you might want to make sure that you're not arguing with someone who has some computer engineering knowledge when you want to argue about computer hardware.
May 9, 2012 4:33:55 PM

The thing with these benchmarks and one card winning over another is that it's not a clear-cut reason to go with one card or the other. If you look at the benchmarks and see what the difference is, it then becomes a personal preference, because 3 or 5 FPS is not a big enough win to make you choose one card over another. It would be a different story if the margin was 25 to 30 FPS, but not 2, 3 or 5; to me that's a draw, and then it's about which brand you prefer, which brand will give better driver support, and what games you play, because it's well known that each brand performs best in the games their drivers are optimized for.
May 9, 2012 4:47:09 PM

What's wrong with just one GTX 570? I have one with a 2700K and it plays all the games I have, including Metro 2033.
May 9, 2012 4:58:29 PM

blazorthon said:
It shows weakness is what I said, not that it's no good right now. Also, at 5760x1080, it's a 5 FPS difference (not a small difference when it's 40 FPS compared to 35 FPS) if you use AA. Unless you don't like AA, there's no reason not to include that in the measurements and every reason to include it. Not including it, for any reason (your reason looks like you want to downplay the problems of the 680), is practically fanboyism. I also said that it would probably take a few years for the 680's memory problems to become serious. The problem it has right now is that it can't handle AA as well as the 7970 in some games at some resolutions and quality settings.

The 680 has a VRAM capacity bottleneck (its VRAM bandwidth is a little low for its GPU performance, but it's acceptable, and it can't make performance drop like a rock into unplayable territory the way a capacity problem can). Some games, such as Metro 2033 and BF3, can push past 1GB of VRAM usage at 1080p. Going to higher resolutions does not increase VRAM usage linearly from this point (or else the 680 would be screwed so badly that Nvidia wouldn't have even released a 2GB version and would have only done 4GB), so the 680 is okay at high resolutions in some games if you don't overload the AA and quality settings. HardOCP did a test at 5760x1080 and had to lower AA in some games (even down to just FXAA or no AA some of the time) because the 680s don't have enough VRAM.

The 680 clearly has a slightly more powerful GPU for 32 bit math (gaming right now is almost purely 32 bit math). However, the 7970 has almost 6 times more 64 bit math performance than the 680 (not a joke, it's a little over 5.5 times more). Games are starting to incorporate more 64 bit math than ever, although that trend has not come close to peaking yet. Civ 5 is a start of it, but it only has a little (not so much that the 680 can't handle it). Even the 580 has more 64 bit math performance than the 680 (almost double).

Kepler, in its current form, is best skipped if you want your computer to last longer at maxed-out settings in games without an upgrade. The only two reasons that the 680 beats the 7970 in Civ 5 are that Civ 5 doesn't use a whole lot of 64 bit math and that the 680's 64 bit math is done on cores separate from the regular 32 bit cores (Kepler cores in the consumer cards can only do 32 bit math. They are optimized for it, hence their advantage in 32 bit math over GCN and all other GPU architectures). The 64 bit cores are different hardware, so using them does not slow down 32 bit math much at all (it only slows it down a little because of the 680's low memory bandwidth).

The next generation of Kepler might have more 64 bit cores per card, alleviating this problem, and more memory capacity and bandwidth. However, in its current iteration, it is not the best way to go, in my opinion. The 670 is a better choice than the 680 solely because it is slower but has the same memory capacity, so its capacity will not be as much of a bottleneck. However, it still has the low 64 bit math problem. In fact, even the 7850 and 7870 beat the 680 in 64 bit math (although, since they, like Kepler, use non-compute-oriented GPUs, they don't come close to the 7950 and 7970).

The Kepler Quadro and Tesla cards will have incredible 64 bit math, but Nvidia decided that they don't want consumers to have it because they want you to pay thousands of dollars for overpriced professional hardware to get it.

Next time, you might want to make sure that you're not arguing with someone who has some computer engineering knowledge when you want to argue about computer hardware.



Don't use the fanboi card with me 'cos I simply pointed out something in a review. Your reaction is intense and full of emotion... take a deep breath and calm down, it's only a card, for freak's sake.
May 9, 2012 4:59:17 PM

@blazorthon -- Interesting info you provide. I've read much of the same stuff about how ATI aimed more for direct compute muscle and Nvidia simply aimed for FPS optimization. Well, when these GPUs are sold to gamers, FPS pretty much wins the battle hands down. I don't care that it can do 64 bit math better than the competition; I'm going to care about pure raw FPS. But really I want to echo what inzone stated: they are so close, neck and neck, that you probably can't tell a difference of <5-10 FPS. His post hit it on the head, imo.

@OP, can you explain your monitor setup a little more clearly? You stated a certain resolution, but then in parentheses you say two 1080p monitors. Those two don't add up. People are commenting that it is a 2x2 array, but then why wouldn't you just say four 1080p monitors? Also, regardless of whether it is 2 or 4 monitors, how do you game on either? Usually multi-monitor setups use an odd number of panels so that your central point doesn't line up with a bezel. And if you're only gaming on one monitor, both of the setups you posted are unnecessary, imo.
May 9, 2012 5:01:21 PM

one-shot said:
What's wrong with just one GTX 570? I have one with a 2700K and it plays all the games I have, including Metro 2033.



Yea, but you don't have 18 monitors and want 2389047289347 FPS either.

;) 
May 9, 2012 5:01:21 PM

Well, a lot of people can tell the difference between 35 FPS and 40 FPS. It's not about the FPS difference in absolute numbers, like 5 FPS; it's about the difference in percentage. If it was 54 FPS versus 59 FPS, then yes, it's not as noticeable. However, 35 FPS to 40 FPS is much more noticeable. Well, I can see it clearly and I know many people who can. It also depends on how good the display is (a bad display won't show as great a difference).

Sure, it's a personal difference, but think about the longevity. If the 680 shows problems right now, how bad will they be when even more intensive games come out that take advantage of 64 bit math more and use more memory in the process? The 680 2GB has a big problem in that. Too little memory is the worst possible bottleneck. Nothing drops performance faster than it and the only way to fix it is to decrease settings, whereas other bottlenecks such as insufficient GPU processing power or memory bandwidth can be solved, or at least alleviated, through overclocking. It's like the Radeon 5970 or Radeon 6870X2 right now. They have good GPU power, but too little memory to make great use of it a year or two after they came out. They both have 1GB of VRAM and that can be a problem even at 1080p in some games. Going over 1080p in anything remotely intense? Not a chance on them.

Even more, Kepler loses its power efficiency advantage when overclocked, because the GCN Radeons don't use much more power at all when they are overclocked, but the Kepler cards use a good deal more.
May 9, 2012 5:05:58 PM

monsta said:
Don't use the fanboi card with me 'cos I simply pointed out something in a review. Your reaction is intense and full of emotion... take a deep breath and calm down, it's only a card, for freak's sake.


Sorry, that was going too far. I'll edit it out of my post.
May 9, 2012 5:11:24 PM

Quote:
400/500/600 series have all the new features.


From what is known, TXAA and GPU Boost are 6xx-series-only features.

And about people recommending the 7970: I know it is also a good card and has pros and cons, just like the 680 does. But, for example, I am going with a 680 because what I use is 3D Vision and I already have a 3D screen and 3D glasses, so I would be wasting money going with an AMD solution right now. Also, I don't plan on increasing the resolution any time soon; 1080p for now...

So it really depends on what the OP is looking for.
May 9, 2012 5:16:04 PM

rdzona said:
@blazorthon -- Interesting info you provide. I've read much of the same stuff about how ATI aimed more for direct compute muscle and Nvidia simply aimed for FPS optimization. Well, when these GPUs are sold to gamers, FPS pretty much wins the battle hands down. I don't care that it can do 64 bit math better than the competition; I'm going to care about pure raw FPS. But really I want to echo what inzone stated: they are so close, neck and neck, that you probably can't tell a difference of <5-10 FPS. His post hit it on the head, imo.

@OP, can you explain your monitor setup a little more clearly? You stated a certain resolution, but then in parentheses you say two 1080p monitors. Those two don't add up. People are commenting that it is a 2x2 array, but then why wouldn't you just say four 1080p monitors? Also, regardless of whether it is 2 or 4 monitors, how do you game on either? Usually multi-monitor setups use an odd number of panels so that your central point doesn't line up with a bezel. And if you're only gaming on one monitor, both of the setups you posted are unnecessary, imo.


The problem is that games are starting to use more and more 64 bit math. If Kepler had more 64 bit cores, then it wouldn't be a problem. In fact, Kepler's approach is better, in a way. It allows the 64 bit cores to idle when not in use, increasing power efficiency when you're not playing a game that relies on 64 bit math or doing some other 64 bit reliant job. AMD has all of their cores able to do 64 bit math (albeit not as fast as their 32 bit math) and is optimized for it while still having respectable 32 bit performance. AMD did a good job on that, but Nvidia's method is the better one in principle; Nvidia simply did a worse job of it, which is how AMD can win despite their inferior method.

The thing is that the 7900 Radeons are balanced in what they can do. The other ones, such as the 7800 and 7700, are not optimized for 64 bit throughput and only beat Kepler because Nvidia made sure that the consumer Kepler cards are worse than the consumer Fermi cards. It's kinda funny, because had Nvidia simply increased the die size to something like Tahiti's (which is still a small die; it's smaller than the Cayman in the Radeon 6900 cards, and that is a mere 2/3 or so of the huge dies that Nvidia used in previous cards such as the GTX 480, 580, 280, etc.) and put more 64 bit cores on it, it wouldn't have hindered 32 bit performance or power efficiency. What Nvidia optimized for isn't FPS, it's 32 bit math. Current games rely more on 32 bit math than 64 bit, so it means greater power efficiency in 32 bit math. Nvidia could easily have had good enough 64 bit math and memory capacity to compete against AMD in all workloads except pure compute, so it's weird that Nvidia refuses to meet or beat AMD there. It's not that they aren't capable of it; they just didn't want to.
May 9, 2012 5:24:57 PM

Quote:
GPU boost is a 600 series thing.
All other features are 400/500/600.

"Utilizing Kepler's superior texture processing performance, NVIDIA is also introducing a new technique called TXAA. TXAA utilizes custom high-quality film style AA resolve mixed with hardware anti-aliasing and optional use of temporal components for increased image quality. While TXAA is being introduced with the GTX 680, drivers will enable TXAA support for previous generation 400 and 500 series GPUs. Two modes of TXAA are available: TXAA 1, which produces edge quality superior to 8x MSAA at the cost of 2x MSAA, while TXAA 2 produces unprecedented levels of quality at the cost of 4x MSAA."

Source:
http://www.hitechlegion.com/reviews/video-cards/18730-n...


TXAA is still MIA. It could solve the 680's AA/AF problem, but we don't know how it affects VRAM capacity. If it is easier on the VRAM than MSAA while also performing better, then the 680 has its VRAM problem alleviated and I change my stance on it for the short term (long term, we still have the 64 bit math problem, but that's probably not going to hurt for another two or three years). Of course, we don't know if that will happen and, even if it does, when. If it doesn't happen for a month or longer, then you have to wait that long just to get proper performance in many situations. You, using only one 680, might be fine for now, even with moderate AA/AF. However, in the future when it's time for an upgrade (maybe throwing in a second 680 for SLI), if TXAA doesn't come out beforehand, then you'll have a serious problem.
May 9, 2012 5:28:53 PM

blazorthon said:
TXAA is still MIA. It could solve the 680's AA/AF problem, but we don't know how it affects VRAM capacity. If it is easier on the VRAM than MSAA while also performing better, then the 680 has its VRAM problem alleviated and I change my stance on it for the short term. Of course, we don't know if that will happen and, even if it does, when. If it doesn't happen for a month or longer, then you have to wait that long just to get proper performance in many situations.


Probably in the future MSAA will be left behind. You see, there is no need to do geometric antialiasing if you can get a "close" effect with image antialiasing like FXAA and TXAA. Yes, it is not a 680-only feature; it can be used with older series with the 301.24 driver. But this is simply great, and for realtime rendering it is the future, you can be sure of that.
May 9, 2012 5:41:38 PM

ricardois said:
Probably in the future MSAA will be left behind. You see, there is no need to do geometric antialiasing if you can get a "close" effect with image antialiasing like FXAA and TXAA. Yes, it is not a 680-only feature; it can be used with older series with the 301.24 driver. But this is simply great, and for realtime rendering it is the future, you can be sure of that.


Like I said, TXAA is still MIA. Also, AMD has some work being done on similarly good types of AA. Who knows? Maybe we won't see either type for another year or two. We just don't know for sure, and launch dates are rarely when they are first said to be. If TXAA comes out soon and it is much lighter on the memory (it probably is), then the 680's biggest problem is all but gone and I'll recommend it if the OP doesn't mind an upgrade in three years or so. That's really not a bad lifetime for a graphics card.
May 9, 2012 6:16:01 PM

In this industry three years is a lifetime, and there are a lot of people who upgrade at the next release, which makes no sense, but some people just want the latest and greatest. I would like to see the 4GB version of the 680 come out and see what the difference is with that card. I like the fact that Nvidia came out with the 680 with 2GB of VRAM, which for them is an upgrade, but there was mention of a 4GB version and it's not the 690.
I might be interested in the 4GB 680 Classified Hydro Copper 3 from EVGA. I have trouble holding onto a video card for more than a year.
May 9, 2012 6:16:25 PM

blazorthon said:
Well, a lot of people can tell the difference between 35 FPS and 40 FPS. It's not about the FPS difference in absolute numbers, like 5 FPS; it's about the difference in percentage. If it was 54 FPS versus 59 FPS, then yes, it's not as noticeable. However, 35 FPS to 40 FPS is much more noticeable. Well, I can see it clearly and I know many people who can. It also depends on how good the display is (a bad display won't show as great a difference).

Sure, it's a personal difference, but think about the longevity. If the 680 shows problems right now, how bad will they be when even more intensive games come out that take advantage of 64 bit math more and use more memory in the process? The 680 2GB has a big problem in that. Too little memory is the worst possible bottleneck. Nothing drops performance faster than it and the only way to fix it is to decrease settings, whereas other bottlenecks such as insufficient GPU processing power or memory bandwidth can be solved, or at least alleviated, through overclocking. It's like the Radeon 5970 or Radeon 6870X2 right now. They have good GPU power, but too little memory to make great use of it a year or two after they came out. They both have 1GB of VRAM and that can be a problem even at 1080p in some games. Going over 1080p in anything remotely intense? Not a chance on them.

Even more, Kepler loses its power efficiency advantage when overclocked, because the GCN Radeons don't use much more power at all when they are overclocked, but the Kepler cards use a good deal more.



OK, way to take my FPS example and try to make it sound irrelevant. I'll use your numbers, although you didn't provide them on the same scale, so I'll fix that: 35 FPS versus 40 FPS is a 12.5% loss, and 55 FPS versus 60 FPS is an 8.3% loss. I agree when you say it's not just about the absolute number but rather the percentage of, say, the maximum, which I never claimed in the first place. Anyway, we're talking about a difference of ~4%. I highly doubt you could perceive that. Yes, this gap widens if we go to lower and lower FPS numbers, but this post was about a GTX 680 versus 570 SLI; I highly doubt we're talking about framerates in the teens with these setups. Put the emotion aside and stop trying to create arguments.
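
For anyone who wants to sanity-check these percentages, expressing the same gap in frame time (milliseconds per frame) is often more telling than raw FPS. A small sketch:

```python
# Express an FPS gap as a percentage drop and as extra milliseconds per frame.
def compare_fps(high_fps, low_fps):
    drop_pct = (high_fps - low_fps) / high_fps * 100
    extra_ms = 1000 / low_fps - 1000 / high_fps
    return drop_pct, extra_ms

for high, low in [(40, 35), (60, 55)]:
    pct, ms = compare_fps(high, low)
    print(f"{high} -> {low} FPS: {pct:.1f}% drop, +{ms:.2f} ms per frame")
```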
May 9, 2012 6:41:08 PM

inzone said:
In this industry three years is a lifetime, and there are a lot of people who upgrade at the next release, which makes no sense, but some people just want the latest and greatest. I would like to see the 4GB version of the 680 come out and see what the difference is with that card. I like the fact that Nvidia came out with the 680 with 2GB of VRAM, which for them is an upgrade, but there was mention of a 4GB version and it's not the 690.
I might be interested in the 4GB 680 Classified Hydro Copper 3 from EVGA. I have trouble holding onto a video card for more than a year.


Increased VRAM capacity only affects performance if capacity is a bottleneck. It doesn't increase performance like higher clock frequency or more memory bandwidth does. For example, a 680 2GB and a 680 4GB would have more or less identical performance until the 680 2GB runs out of VRAM. Unless you want to do something that will max out 2GB of VRAM, there will be no benefit from having a 680 4GB instead of the 2GB model if you keep it for less than a year.
May 9, 2012 6:44:08 PM

rdzona said:
OK, way to take my FPS example and try to make it sound irrelevant. I'll use your numbers, although you didn't provide them on the same scale, so I'll fix that: 35 FPS versus 40 FPS is a 12.5% loss, and 55 FPS versus 60 FPS is an 8.3% loss. I agree when you say it's not just about the absolute number but rather the percentage of, say, the maximum, which I never claimed in the first place. Anyway, we're talking about a difference of ~4%. I highly doubt you could perceive that. Yes, this gap widens if we go to lower and lower FPS numbers, but this post was about a GTX 680 versus 570 SLI; I highly doubt we're talking about framerates in the teens with these setups. Put the emotion aside and stop trying to create arguments.


I said that I can see the difference between 35 FPS and 40 FPS. Any difference much smaller than that, not so much. I'm not being emotional, just making sure that everything is known. I've clearly stated that if the 680 gets TXAA or a 4GB model, then it has my recommendation too, if it's not going to be used for more than a few years. The problem is that there are no 4GB models yet and TXAA is nowhere to be seen. The 680 can still be a good choice, and its VRAM isn't so low that it will bottleneck a 4MP resolution like dual 1080p, but it will be a problem if new games come out that use more memory than current games.
May 9, 2012 8:16:15 PM

ricardois said:
Yes, the 680, certainly. Most benchmarks show that it outperforms 570 SLI (not by much), and with it you will have all the new features like TXAA and GPU Boost. I think that's worth the wait (I am also waiting).

And as @inzone said, in the future you can even go with 680 SLI, a much better option.

About your resolution:
2160x3840 (two 1080p monitors)

How is that resolution possible with 2x 1080p?

I can see 3840x1080 and even 2160x1920, but 2160x3840? How?



Sorry about that... I had a brain fart... I meant 1080x3840
May 9, 2012 10:53:30 PM

Best answer selected by opalarrow.
May 10, 2012 12:11:33 AM

The benefit you get from more VRAM comes when you are using a 30" monitor at 2560x1600; more than 1GB is better at that resolution.
May 10, 2012 12:38:27 AM

monsta said:
See, VRAM means nothing with these cards; have a look at the 680 2GB compared to the 4GB:

http://www.guru3d.com/article/palit-geforce-gtx-680-4gb...


Guru3D is known to be Nvidia biased and the resolutions in those tests don't go beyond 4MP. I've already said that 2GB of VRAM is not a big problem right now until you go over 4MP and that the serious problems at 4MP won't be until new games come out. Oh, but if you want to disprove me with things that I've already said, go ahead and try. It doesn't work.

It's also interesting to see that this review lacks the HD 4000 i7s in the encoding benchmark. They just happen to be about twice as fast as HD 3000, putting them at about 33% faster than the 680 in encoding. It's not just AMD that Guru3D biases against to make Nvidia look better.
May 10, 2012 12:43:03 AM

inzone said:
The benefit you get from more VRAM comes when you are using a 30" monitor at 2560x1600; more than 1GB is better at that resolution.


No. Current games need you to go above 4MP (2560x1600 is a 4MP resolution) before 2GB of VRAM capacity becomes a problem. Also, once you have enough VRAM for a workload, having more will make next to zero difference. The 680 2GB slightly beats the 7970 (has 3GB) in most games at or below 4MP because of this. It isn't until you go to 5MP or 6MP (triple 1080p) where the 680's VRAM capacity becomes a problem.

New, more VRAM consuming games will change that. New games always come out and the more intensive ones use more memory than the last intensive games. It's that simple. If 5MP and 6MP are limiting for the 680 right now (they are), then how much more intensive do games need to be to choke the 680 even at 4MP resolutions such as 2560x1600 and dual 1080p? Less than 50% more intensive. That's not even a huge jump for games. Even the GTX 580 3GB could beat the 680 2GB in some games if TXAA doesn't ease the memory load on the 680 2GB.
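
For reference, the megapixel figures being thrown around in this exchange work out as follows (trivial sketch):

```python
# Pixel counts for the resolutions discussed in this thread.
resolutions = {
    "1920x1080": (1920, 1080),
    "2560x1600": (2560, 1600),
    "dual 1080p (3840x1080)": (3840, 1080),
    "triple 1080p (5760x1080)": (5760, 1080),
}
for name, (w, h) in resolutions.items():
    print(f"{name}: {w * h / 1e6:.1f} MP")
```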
May 10, 2012 12:07:30 PM

Well on that review you can see already that:
"The 4GB -- Realistically there was not one game that we tested that could benefit from the two extra GB's of graphics memory. Even at 2560x1600 (which is a massive 4 Mpixels resolution) there was just no measurable difference."

That is the main reason why they released the 2GB version... They were also thinking about the future: if these non-geometric anti-aliasing solutions catch on, the GPU won't need so much memory even at high resolutions. Of course it would lose to a system with a 3GB 580 right now if memory at the resolution is the bottleneck, but looking ahead they decided that adding 3-4GB wasn't worth the extra price, since with well-coded games and faster solutions like lighter antialiasing you really don't need it.

Increasing the hardware isn't always the solution; decreasing the software demands is usually the cheaper and smarter solution. That is normally the difference between a well-coded game that runs with good graphics even on mid-range GPUs, and another game that can't run very well even on a 13k computer.
May 10, 2012 1:23:50 PM

Well, if you have a 680 and don't mind turning the AA down just so the 2GB of VRAM doesn't get overloaded, that's your choice. Most people don't like turning down settings, especially if the GPU can handle it and it's just the memory that can't. I suppose it is smart to decrease picture quality if you don't want to pay for it.
May 10, 2012 1:25:54 PM

You don't need to turn it down; it handles it just fine.
May 10, 2012 1:28:43 PM

Like I have said, it is not a question of turning AA down, but of removing MSAA and using FXAA or TXAA. You will probably notice more and more games shipping with FXAA integrated and cutting back the MSAA options, since they can do a good enough job for a much cheaper price in memory...
May 10, 2012 1:31:37 PM

Exactly. The new FXAA and TXAA have alleviated the overhead that AA puts on the memory, so you aren't lowering it, you are using a different method.
May 10, 2012 1:39:09 PM

FXAA is far inferior to MSAA in picture quality and TXAA is both an MIA feature and untested, last I checked.
May 10, 2012 1:58:26 PM

... I'm not saying it is visually a better solution, but MSAA is calculated on object geometry; you need a lot of memory to do that, and the effect of course is better. FXAA works on the image itself, which is why it is much faster but can't work at the same precision. What I am saying is that memory usage is getting too high nowadays, and that is not efficient; they should look at faster technologies like FXAA, which isn't better than MSAA but is much faster and uses much less memory. For realtime rendering (gaming) this is an awesome solution and, like I have said, it will probably be improved a little and take the place of MSAA, which will be left behind. Only some games need more than 2GB nowadays at high resolutions, and if those games were a little better at memory management, that problem would not happen.

Maybe in the future games will start to use even less memory, and 4GB will probably not be needed. Of course, that is just what I think; maybe implementations get even worse and will require even more dedicated memory in the future. But what they are trying to do with FXAA is exactly that: improve game performance and lower hardware requirements without lowering quality.
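
A rough illustration of why a post-process filter is so much cheaper on memory than MSAA. Assumptions: 4-byte colour and 4-byte depth per sample, MSAA storing N samples per pixel, and FXAA treated as a pure post-process over the already-resolved image, so it adds essentially no buffer memory:

```python
# Approximate colour+depth buffer cost at 1920x1080 for different AA methods.
WIDTH, HEIGHT = 1920, 1080

def buffer_mb(samples_per_pixel):
    return WIDTH * HEIGHT * (4 + 4) * samples_per_pixel / (1024 ** 2)

for label, samples in [("no AA or FXAA (post-process)", 1),
                       ("4x MSAA", 4),
                       ("8x MSAA", 8)]:
    print(f"{label}: ~{buffer_mb(samples):.0f} MB")
```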
May 10, 2012 2:00:37 PM

Far inferior is kinda dramatic. LOL
May 10, 2012 2:09:17 PM

FXAA and such will only slow down the trend to more memory. Like all other parts, more is going to be more important as time goes on. Far inferior is not kinda dramatic. It is very noticeable.
May 10, 2012 2:21:25 PM

blazorthon said:
FXAA and such will only slow down the trend to more memory. Like all other parts, more is going to be more important as time goes on. Far inferior is not kinda dramatic. It is very noticeable.


Only the future will tell ^^. But what is better: a car with a strong engine and hexagonal wheels, or a normal engine with round wheels?

Implementations tend to be improved to decrease system requirements. I don't rule out more memory in the future, but for now at least, flash memory is just too expensive, and solutions like FXAA will start to be used much more frequently.
May 10, 2012 2:23:51 PM

ricardois said:
Only the future will tell ^^. But what is better: a car with a strong engine and hexagonal wheels, or a normal engine with round wheels?

Implementations tend to be improved to decrease system requirements. I don't rule out more memory in the future, but for now at least, flash memory is just too expensive, and solutions like FXAA will start to be used much more frequently.


What does this discussion have to do with Flash memory?
May 10, 2012 2:32:28 PM

blazorthon said:
What does this discussion have to do with Flash memory?


Sorry, I meant VRAM...
May 10, 2012 2:46:00 PM

ricardois said:
Only the future will tell ^^. But what is better: a car with a strong engine and hexagonal wheels, or a normal engine with round wheels?


Well it could depend on what kind of surface you are driving on. :D 

http://en.wikipedia.org/wiki/File:Rolling-Square.gif
