Re: Upgrading from Strix 780 OC 6G x 2 SLI, looking for advice between two cards

DragonicChiken

Prominent
Mar 29, 2017
8
0
510
Hi

I am contemplating buying a new graphics card. I currently have two Strix GTX 780 OC 6GB cards in SLI, and I am undecided between the two listed below. Any feedback or official review comparing the two would be most appreciated:

ROG-STRIX-GTX1080-O8G-GAMING
https://www.asus.com/au/Graphics-Cards/ROG-STRIX-GTX1080-O8G-GAMING/

rog-strix-geforce-gtx-1080-ti
https://rog.asus.com/articles/gaming-graphics-cards/the-rog-strix-geforce-gtx-1080-ti-takes-pascal-to-the-limit/#

I am still learning and teaching myself how to best determine which would perform better for my style of play.

For your reference, I prefer to play with ultra settings, generally also using the NVIDIA Control Panel to tweak the graphics further. I plan on getting a 4K monitor at a later date; the one I run now is at 2560x1440 and 85 Hz (overclocked). I like the graphics to be the best quality I can get to immerse myself in the games.

Is it worth upgrading from two 780s to one of the above? Will there be much difference going from my SLI combo to a single card for now? My main issue is that my two 780s in SLI cannot play Mass Effect: Andromeda on ultra settings.

Any information is highly appreciated. I know it's a pain, but if you could explain your reasoning, I may better learn what to look for in the future when upgrading my GPU.
Myself, I would have settled on the ROG Strix GTX 1080 O8G Gaming card as the better performer, but again, I am still learning.

I don't really have a budget, as these cards cost between $1199 and $1500 in Australia, which I'm willing to pay if it's worth the upgrade, with the idea of eventually getting two and running SLI.

Thank you all.
 
Solution
CUDA core counts are comparable like this WITHIN a generation, for reference. It's fine between a GTX 1060 and a 1070, for example, but not between a GTX 970 and a 1070.
Either of those cards will easily push an 85 fps average at 1440p, but if you want to crank the settings to ultra at 4K and ignore everything else, then you might as well get the 1080 Ti. I have a 4K monitor and find that I can't really notice the difference between low and high AA settings, so I always use low and have a great experience on a 980 Ti.

Either of these cards will be a pretty big upgrade from 2x 780s, but whether it's "worth it" is relative. If you're not getting the performance you want, it's worth it. If you really don't need any more power, then it's not.

The new Mass Effect has terrible SLI support. For what it's worth, even the 1080 Ti will only push mid-40s fps in ME:A at 4K ultra, so just make sure your expectations aren't TOO high.
 

DragonicChiken

Mar 29, 2017
It will be a while before I get 4K. By the time I do, I will most likely be running two 1080s.
Thanks for answering. So is there much difference between the two cards? Which is the better card of the two?
 
The 1080 Ti is ~30% faster. Other than that, there's not much difference. If it's going to be a while before you get 4K, then honestly I'd consider getting a GTX 1070, or get the GTX 1080 and resist the urge to add another card. Like I said before, SLI support is dwindling, so it doesn't make as much sense anymore. In addition, the next generation of cards should be out within a year from now, and the 2080 or whatever they call it will probably be faster than the 1080 Ti for less money.
 

DragonicChiken

Mar 29, 2017
Ok thank you.
Do you mind explaining how you figured it was 30% faster? Looking at the stats of the cards, I thought the Strix 1080 was the faster card, or am I looking at the wrong figures? The clock speeds were faster on that one.
 
Think of it like a factory with the people as CUDA cores and the speed that they work at as the core clock. A factory with fewer workers CAN put out as many products as one with more if the workers work faster, but a factory with more workers will easily surpass it if you make them work faster too.

Roughly: Relative performance = CUDA cores * Core clock
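As a rough sanity check of that formula, here's a small sketch using NVIDIA's published reference specs (2560 cores / 1733 MHz boost for the GTX 1080, 3584 cores / 1582 MHz boost for the 1080 Ti; the Strix cards are factory-overclocked a bit higher, but the ratio comes out similar):

```python
# Crude relative-performance proxy: CUDA cores * core clock.
# Only meaningful between cards of the SAME architecture (both Pascal here).
gtx_1080 = {"cuda_cores": 2560, "boost_mhz": 1733}
gtx_1080_ti = {"cuda_cores": 3584, "boost_mhz": 1582}

def rough_perf(card):
    """Throughput estimate: more workers (cores) x faster work (clock)."""
    return card["cuda_cores"] * card["boost_mhz"]

ratio = rough_perf(gtx_1080_ti) / rough_perf(gtx_1080)
print(f"1080 Ti vs 1080: ~{(ratio - 1) * 100:.0f}% faster")  # ~28% faster
```

That lands right around the "~30% faster" figure quoted above, even though the 1080's clock is higher: the Ti's extra cores more than make up for its lower clock.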