Future-proof card that can max out games at 1080p (budget-wise)

"futreproof" items would have to be made of "unobtainium" :) ... no guarantees. And no card exists today that gets 60 fps in all games at that price range. Your two choices would be the 1060 6GB and the RX 480. Main differences are:

- The 1060 has much higher overclocking headroom (17.7% versus 7.7% IIRC). Tho they run pretty neck and neck "outta the box", the higher obtainable OCs give the 1060 the fps performance edge

- The 480 draws more power (about +80 watts) while gaming

- The extra power means more heat and more noise.

- You'll need a bigger PSU (+100 watts) than ya do for the 1060, and it's best to get an extra case fan to move the heat outta the case. But as all cards are getting more efficient, there's a reasonable likelihood that your current PSU is big enough for both

- The 480 is running about $20 - $25 cheaper of late

TPU states that the MSI RX 480 is the only one that competes with the 1060, so if ya go that route, that's the one I'd look at 1st .. and avoid any reference designs or 480s w/ 6-pin power connectors:

thus far the [MSI 480 Gaming X is the] only RX 480 that looks like it can compete with the GTX 1060 and its custom designs.







 

MWP0004

Respectable
Oct 26, 2016
491
0
1,960


It is worth mentioning that the RX 480 is available in an 8GB VRAM variant. While this is overkill for 1080p games now, in the future it may not be.
 

john TJ

Distinguished
Jul 12, 2014
840
0
19,360


Just having more VRAM does not future-proof a card.
 
No, not at 1080p... not even 1440p. All the steam generated over VRAM of late is based upon misconception and presumption. As with all the hoopla about the 970 and its 3.5 / 0.5 GB split, it was, as Shakespeare said, "much ado about nothing". As many web sites subsequently showed, you could create a RAM issue with the card, but you had to work really, really hard to do it. It just didn't happen under normal game play. The 3 GB 1060 is just fine for 1080p.

In the old days, we could calculate needed VRAM with a formula that went something like 1920 x 1080 x 32-bit color depth / 8 bits per byte ... and the answer came out in MB .... yes, with an M :). Nowadays, with shaders, AA/AF and all the other fancy-schmancy stuff, it has increased manyfold.
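Spelled out, that old rule of thumb is just a couple lines of arithmetic (Python here purely for illustration; the numbers are the ones above):

# Old-school framebuffer math: resolution x color depth, bits -> bytes
width, height = 1920, 1080
bits_per_pixel = 32                      # 32-bit color
frame_bytes = width * height * bits_per_pixel // 8
print(f"{frame_bytes / 2**20:.1f} MB")   # ~7.9 MB ... yes, with an M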

The misconception comes from the fact that when peeps look at RAM usage in GPU-Z, they assume it's real; it's not. Alienbabeltech was the 1st I saw to test this; they tested like 40 games at up to 5760 x 1080 with 2GB and 4GB 770s.... the only games that had issues with 2GB were ones that couldn't maintain 30 fps at 5760 x 1080 with either the 4GB or the 2GB card, making the RAM issue irrelevant. What happens is, when you install the game, it looks at your hardware and it's as if it says, "This MWP guy has 4 GB, so I want to grab as much as I can for my use; let's reserve 3 GB for ourselves" ... if it sees 8GB, it might grab 5 GB.

To make an analogy, let's say you have a Visa card with a $5,000 limit and you buy a new phone for $500. Three months later, you apply for a car loan and they ask for a credit check. The reporting agency says that "MWP has a liability with Visa for $5,000" ... you don't owe anybody $5k, but you could charge another $4,500 w/o asking anybody. Here's the technical explanation:

http://www.extremetech.com/gaming/213069-is-4gb-of-vram-enough-amds-fury-x-faces-off-with-nvidias-gtx-980-ti-titan-x

GPU-Z claims to report how much VRAM the GPU actually uses, but there’s a significant caveat to this metric. GPU-Z doesn’t actually report how much VRAM the GPU is actually using — instead, it reports the amount of VRAM that a game has requested. We spoke to Nvidia’s Brandon Bell on this topic, who told us the following: “None of the GPU tools on the market report memory usage correctly, whether it’s GPU-Z, Afterburner, Precision, etc. They all report the amount of memory requested by the GPU, not the actual memory usage. Cards with larger memory will request more memory, but that doesn’t mean that they actually use it. They simply request it because the memory is available.”

The article concludes:

We began this article with a simple question: “Is 4GB of RAM enough for a high-end GPU?” The answer, after all, applies to more than just the Fury X — Nvidia’s GTX 970 and 980 both sell with 4GB of RAM, as do multiple AMD cards and the cheaper R9 Fury. Based on our results, I would say that the answer is yes.

While we do see some evidence of a 4GB barrier [at 4k] on AMD cards that the NV hardware does not experience, provoking this problem in current-generation titles required us to use [4k at] settings that rendered the games unplayable on any current GPU. It’s reasonable to ask why we didn’t fine-tune results, attempting to measure the impact of just going over the 4GB threshold with the GTX 980 Ti or Titan X, and then test with those settings. Unfortunately, GPU-Z simply doesn’t measure accurately enough to make this possible.

Here are some more things to look at:

https://www.pugetsystems.com/labs/articles/Video-Card-Performance-2GB-vs-4GB-Memory-154/
http://www.guru3d.com/articles_pages/gigabyte_geforce_gtx_960_g1_gaming_4gb_review,12.html

Could it change 3 years from now? I don't see it; you only have so many pixels to move around no matter how much pre-processing is being done. We are seeing now that 3GB at 1080p is about 7% slower overall than 6GB, with that performance being pulled down by newly released DX12 games for which devs haven't had a helluva lotta time to optimize. These games were developed in DX11 and then essentially ported to DX12. It's not unreasonable to expect that games developed from the getgo in DX12 will be more efficient. 6 GB is doing just fine at 4k right now, and that's 4 times the pixels. I don't see a fourfold increase in game demands in the next 4 years.

In 2010 the recommended system RAM for gaming was 2 x 4GB.... and while some games are showing minor gains with 2 x 8GB, it's 6 (soon to be 7) years later and 8 GB is not exactly crimping anyone's experience.

 


The 1070 is ofc the better card, but the OP's request in this instance was for 1080p, 60 fps and < $260:

A. A card that gives 60 fps at 1080p

B. A card under $260.

Out of the 16 games in techpowerup's test suite, the MSI 1060 6GB (w/ the OC obtained in the review) will deliver his stated goals @ 60 fps in all but one game (Anno 2205) ... well, Crysis 3 is 59.9 fps.

 
With each generation of GFX cards, power requirements have dropped drastically.

The 1060 (120 watts) is comparable in performance to older cards which drew up near 300 watts. The 1060 calls for a 350 - 400 watt PSU. We've never installed anything less than a 620 in a gaming box, so I don't see an issue there.

As for the CPU, anything that is still running is likely to be just fine... Sandy Bridge came out in January 2011, so in a month it will be 6 years old, and SB is still fine: while GPU gains generation to generation have been as much as 50% of late, CPU impacts have been barely measurable.

Unless the system was a pre-built Dell or a $600 budget AMD build, I don't expect an issue. I assumed, perhaps wrongly, that if the OP asked for a card as opposed to a CPU upgrade, he knew where he sat.
 

Mammatus

Reputable
Apr 8, 2015
99
0
4,660
Personally, when talking about "futureproof", I think that the 480 is the better choice... Here's why!

1. Today, the 480 is a little bit cheaper than the 1060.

2. Yes, the 1060 is a little bit faster in DX11 games, BUT with DX12 and Vulkan, which will be the "standard" in upcoming games, the 480 is on par or a little bit faster than the 1060.

3. You can CF a 480 (you cannot SLI a 1060). So, in the future, if you want to upgrade your system, you have the option to just buy another 480 (which will be a lot cheaper) or to buy a completely different card, as opposed to the 1060, where your only option will be to buy a new, more expensive/powerful GPU.

My 2 cents, FWIW...! ;-)
 

king3pj

Distinguished


This may be true for most games but it certainly isn't true for all. VRAM is part of the reason I "upgraded" from SLI 970s to a single 1070. In games that had great SLI support and weren't VRAM hogs, the SLI 970s were just as good as, if not better than, my single 1070. In games that don't use SLI or that require a lot of VRAM, the 1070 is a huge upgrade.

I'll use Forza 6 Apex as an example. It's not the only game where I saw this effect, but it was one of the most noticeable. When I played on my 1440p monitor with SLI disabled (the game doesn't support SLI) I would get massive stuttering and framerate drops even though my 970 was only at about 60% core usage and my i5-4690K was not being maxed out. This was happening because the game was using every bit of VRAM I had available. Switching to a 1070 completely fixed the problem just by having enough VRAM. Core usage was very low on both cards with VSYNC on, but one stuttered while the other was completely smooth.

The point is that there are games where the amount of VRAM can make a difference even if it doesn't in most of them. Maybe if the 970 only had 3.5GB of VRAM total the game wouldn't have used the slower .5GB and stuttered. I have no way of knowing that. All I know is that the only thing maxed out in my system was my VRAM and that caused significant performance problems.

I would pay the extra money for the 6GB 1060 instead of going with the 3GB model. Even if it only makes a difference in a small portion of games it would be worth it to me.

 
A reasonable argument and well put forth ... however :)

1. The price is determined by a) what each company sees will sell well in the face of competition and b) supply and demand. As long as the 1060 continues to outsell the 480, and until nVidia can keep vendors in stock (which it has not been able to do of late), I don't see this changing.

If building new, and if we're gonna say cost is an issue, then it should be total cost:

a) the 480 needs a 100 watt bigger PSU (cards draw 200 vs 120 watts in "typical gaming") and that has a cost

b) to keep comparable case temps, the 480 will need an extra case fan (one 120mm per 50-75 watts)

c) the 480 will cost more to own.... I'll use "cheap power" here; where I am it's 2 to 4 times more... in Europe it's 5 times more:
80 watts x 35 hours a week x 52.14 weeks a year x 3 years x $0.10 per kWh / (1000 watts per kW x 0.85 PSU efficiency) ≈ $51
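Same math as a quick sketch, if ya wanna plug in your own power rate (the 80 watt delta, $0.10/kWh and 85% PSU efficiency are the assumptions stated above):

# 3-year cost of the 480's extra ~80 W draw while gaming
extra_watts = 80             # 480 vs 1060 "typical gaming" delta
hours_per_week = 35
weeks = 52.14 * 3            # 3 years
rate = 0.10                  # $ per kWh ("cheap power")
psu_eff = 0.85               # wall draw exceeds DC output

kwh_at_wall = extra_watts * hours_per_week * weeks / (1000 * psu_eff)
print(f"${kwh_at_wall * rate:.2f}")      # ~$51.53 over 3 years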

2. Again, one card OCs 17.7%, the other 7.7%... that's not core speeds, that's measured fps improvement. So if both cards do 50 fps outta the box, one will do 54 and the other 59 ... about a 10% difference. Should a card that's 10% faster be worth 10% more? Usually as you move up in performance we see diminishing returns, so to come out "even" is a big win.
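Or in quick numbers (assuming the same 50 fps out-of-the-box baseline used above):

# Measured OC headroom applied to a common 50 fps baseline
base = 50
rx480 = base * 1.077                    # ~54 fps
gtx1060 = base * 1.177                  # ~59 fps
print(f"{gtx1060 / rx480 - 1:.1%}")     # ~9.3%, call it 10%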

As for Vulkan and DX12, there may turn out to be something there, but we are still in the infancy of both. With games having a 3 - 5 or even more year development cycle, today's DX12 games have been "adapted" for these new APIs and not developed from the get-go with DX12 and Vulkan in mind. As such, they are still maturing. But, again, we've seen this before. Anyone remember Mantle? The mantra was "Mantle will change everything", but it became a historical footnote before it was really used by anyone. How about HBM? AMD put out a lot of hoopla about HBM, but HBM1 turned out again to be much ado about nothing. When was the last time we even heard about HBM2? Last I heard was volume shipments in 3Q .. to whom? With GDDR5 and 5X around, HBM2 isn't bringing anything to the table, is in too short supply to be adopted, and just isn't a viable option as yet.

http://www.tweaktown.com/articles/7830/hbm2-graphics-ram-hero-deserve/index.html

AMD was the first to have HBM1 technology on a consumer graphics card, with the release of the Radeon R9 Fury X reaching new levels of excitement - but, really - it fell on its face. Compared to NVIDIA's GeForce GTX 980 Ti which was released weeks earlier than the Fury X, AMD's graphics card lost against NVIDIA's best consumer graphics card at the time, even with the next-gen HBM1 technology.

But yes, AMD is showing better progress here, tho the Mantle experience leaves me with an expectation of "deja vu all over again". AMD has often taken the early lead with new APIs, but so far it has yet to maintain it for very long.

3. nVidia has a big issue with SLI: it's hurting their bottom line. Too many peeps were buying two x70s, x60s or even x50 Tis and outperforming the x80s, and they make a lot more money selling the x80 than two of whatever. Scaling on the 1070 / 1080 is currently even worse than AMD's, but it would be a conflict of interest for nVidia to improve scaling on the 10xx series. With no competition to the 1070 / 1080 from AMD right now, the only card whose sales would be hurt by better scaling (or by adding SLI to the 1060) is the 1080, and they make a lot more money selling a single 1080 than two '60s or '70s.

Yes, you can CF a 480. AMD and nVidia back in the day allowed this technology on cards that weren't really up to it, which is why we continue to see posts today that SLI / CF cause microstutter and a host of other problems. And scaling with the current 10xx and 4xx series is simply dismal. The whole idea behind SLI / CF is that, in exchange for the increased power / heat of dual cards, you get more fps than the flagship card at less money:

Two 560 Tis ($400) gave you 40% more fps than the 580 ($500)
Two 650 Ti Boosts ($300) gave you more fps than the 680 ($580)
Two 970s gave you 40% more fps than the 980 for the same price.

So the idea is... SLI / CF can be easily justified if a) the cards are at a level of performance where they don't experience problems and b) the two cards perform better and cost less than a single-card alternative. That doesn't happen here with two 480s versus the 1070.

At the OP's resolution, you are looking at 39% average scaling across TPU's 16-game test suite. For the 970 it was 70%.

But the kicker is this ... relative to the 480:

- a 2nd 480 gets you 39% more fps for $500.
- a 1070 gets you 50% more fps for $410

All combined, the relative value, using the same brand / model line (MSI Gaming X in this instance), is:

139% x 1.077 OC / $500 = 0.299
150% x 1.177 OC / $410 = 0.431

That gives the 1070 44% more fps per dollar than the twin 480s.
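For anyone checking the math, here's the same value calc in a few lines (fps figures relative to a single stock 480, prices and OC numbers as quoted above):

# fps-per-dollar: twin 480s vs one 1070, MSI Gaming X line
twin_480s = 1.39 * 1.077 / 500    # 139% of one 480, 7.7% OC, ~$500 the pair
one_1070 = 1.50 * 1.177 / 410     # 150% of one 480, 17.7% OC, ~$410
print(f"{one_1070 / twin_480s - 1:.0%}")   # ~44% more fps per dollar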

You've made very good points, ones that should be considered, but when ya dig a bit deeper and consider all factors beyond "out of the box" fps .... I think the 1060 holds the edge "all things considered". However, if you don't OC ya GFX cards, Mom's paying the electric bill, and you already have 8 fans in ya case and an 850 watt PSU, then I really can't make an argument against either card.

Everything below the 480 / 1060 is AMD all the way ... everything above is nVidia all the way. The 480 / 1060 price / performance niche is a battleground the likes of which we haven't seen in a long time. The 380 was a clear winner over the nVidia offerings in its price / performance range, and this time around you can make a case for both cards. Which way you go will depend on:

a) New Build or old build w/ excess PSU capability and more than adequate case cooling
b) Tolerance for noise
c) Whether you're OCing or not
d) Local power costs




 


The 1070 is a huge leap forward on GPU power alone.. and while 970s in SLI will top the 1070 occasionally, it wins most battles. Fallout 4 is a good example.

[Fallout 4 benchmark chart, 2560 x 1440]


The fact remains.... no one, at least that I have seen, has published anything showing a noticeable drop in fps at 1080p. No one has even done so at 1440p. The only time anyone was able to show it was at 4K, and even then they had to use settings that crippled the game to under 30 fps.

The VRAM thing has been going on for years ... with the 770s, Alienbabeltech and Puget Systems could not duplicate the problem ... Guru3D could not produce the problem with the 9xx series, and neither could ExtremeTech. Now, you could likely pick a poorly done console port, or a game designed under DX11 and "ported" to DX12, and produce issues.

Again, with all the ranting and raving about the 970, a dozen sites tried to substantiate it and simply could not without doing some really, really strange things. I may be suffering from "boy who cried wolf" syndrome, so I just don't listen anymore, but every time someone has taken the youtubers to task on the lab bench, these claims have never been substantiated. I don't think you'll find Forza in any game test suites or on any Top 20 lists, and I don't know how it was developed, so I can't speak to that issue.

And if I were you, yes I would get the 6GB cause you have a 1440p screen. However, the OP has a 1080p screen and 6 GB just is not going to be needed here.



 

king3pj

Distinguished


I agree that 6GB is more than what is necessary but I would still want more than 3GB even at 1080p. If Nvidia sold the 1060 in 4GB and 8GB variants like AMD does on the 480 I would recommend saving money and going with the 4GB model. When it's 3GB and 6GB I personally would spend the extra $45 (going off Amazon's current US pricing for the 3GB and 6GB Gigabyte Windforce 1060s).

To me 4GB seems like a good VRAM sweet spot for 1080p gaming in 2016. It puts you on par with some of the more popular GPUs from last generation like the 960, 970, 980, 380, Fury, and older 290. I can't imagine that too many developers are going to design their games in a way that hinders performance on all of those GPUs. I could very well see them targeting 4GB as the new baseline and inadvertently putting 2GB and 3GB cards in a bad place though. That's why I personally would choose the 6GB 1060 over the 3GB model.
 
Solution

bigjohnny

Reputable
May 20, 2016
539
1
4,985


I'm currently building (my first build) and I've already got some of the parts. I need to know if future games will only be supported on DX12 and not DX11. If so, I think I will need to get the RX 480, cause I've seen reviews of the RX 480 outperforming the GTX 1060 on DX12. But I also want to play games like GTA V, AC 1/2/3, Crysis 3, etc., which are all DX11 supported (I think), and the GTX 1060 outperforms the RX 480 on DX11.

I also have an old ViewSonic 768p monitor. Should I snub the 1080p for the 768p to increase fps on current and old games?

Build

Gigabyte GA-H97M-D3H (already have)
Intel i5 4590 (already have)
Seagate 1TB hard drive (already have)
Antec EarthWatts Green 650W 80+ Bronze (already have)
Kingston HyperX Fury White 16GB (2x8GB) (getting soon)
Cryorig M9i 48.4 CFM cooler (getting soon)
 

Jason_169

Reputable
Sep 11, 2016
79
0
4,640
If I were you I would get the 1060. I mean, the price isn't that much different, and besides, with Nvidia you get all the GameWorks stuff like HBAO+ (in my opinion the best AO method to date), TXAA (also in my opinion the best AA method to date), PCSS (it's good too), and of course PhysX. I don't know whether or not there are AMD counterparts to these things, though. Also, 6GB is pretty much overkill for 1080p gaming; it's more like a 1440p amount, and 8GB is waaay overkill for 1080p.
 

bigjohnny

Reputable
May 20, 2016
539
1
4,985


What about this one? https://www.amazon.com/dp/B01IA9FEOO/?tag=pcpapi-20
 

king3pj

Distinguished


That card should be fine. I've never used a Zotac card but other members here seem to like them. A 6GB 1060 for $240 is a good deal.
 

bigjohnny

Reputable
May 20, 2016
539
1
4,985


Just asking, should I get Win 7 or 10? I've seen reviews about Win 10 privacy problems. Is it worth it? Also, will playing an old DX11 game like Battlefield 3 on Win 10, which uses DX12, be problematic?
 

king3pj

Distinguished
If you are building a gaming PC I think it makes more sense to use Windows 10. That is the only way to get DirectX 12. There are some titles, like the Xbox exclusives that have been coming to PC, that require DirectX 12 to even play. The number of games that require DirectX 12 will only increase over the next few years.

I have been using Windows 10 since they first started offering it as a free upgrade. I still use Windows 7 every day on my PC at work so I am very familiar with both. I can say that I easily prefer Windows 10 and wish we had it at work.

Windows 10 just has some quality-of-life improvements that Windows 7 doesn't, like showing the taskbar on both monitors (I have a dual-monitor setup both at home and at work).

That being said either version of Windows is fine for a gaming PC if you don't care about DirectX 12 support.
 

bigjohnny

Reputable
May 20, 2016
539
1
4,985


So I can use DX11 on Win 10? Will gaming performance increase? Also, will I need to get the RX 480 instead of the GTX 1060 if I use DX12, because (as I've said before) the RX 480 performs better on DX12 than the GTX 1060?