GIGABYTE GeForce GTX 960 vs. GIGABYTE Radeon R9 380 G1

ra_4

Reputable
Dec 28, 2015
GIGABYTE Radeon R9 380 G1 990MHZ 4GB 5.7GHZ GDDR5 2XDVI HDMI DisplayPort PCI-E Video Card

vs

GIGABYTE GeForce GTX 960 1190 MHz / 1253 MHz 4GB OC GDDR5 2XDVI/HDMI/DP OC Video Card

Which will deliver more gaming performance? They're the same price; one has higher clock speeds, but obviously that's not all that matters.
 

ra_4

Reputable
Dec 28, 2015


There's a lot of propaganda on the internet that the 380 will blow up and overheat. Is this true?
 


No, it isn't. GCN 1.2 is just a tweak of the card's back end to improve power consumption; the actual core of the GPU is unchanged.
 


The slightly higher FPS can't be seen by the human eye (so it's pointless), but the lower dips can be perceived, since they result in "jerky" gameplay. Put that together with the extra 70 W required to run the card and its old architecture, and the 380 doesn't look so good IMO.
 


It's all nonsense. It's a tactic by nVidia fanboys to promote the GTX 960 since it loses in pretty much everything except power consumption.

From Tom's Hardware's best graphics card picks itself:

AMD again finds utility in aging silicon, this time from the Tonga GPU, formerly powering Radeon R9 285. The company recently rebranded it to Antigua for the Radeon R9 380. If you knew the former, the latter looks mighty familiar: 1792 shader cores, 112 texture units and 32 ROPs. The 285 utilized a 918MHz core clock and GDDR5 memory at 1375MHz. Radeon R9 380 enjoys a speed-up to 970MHz with memory at 1425MHz on a similar 256-bit bus.

That increase is enough to put the 380 ahead of Nvidia’s GeForce GTX 960 in just about every one of our benchmarks at 1920x1080 and 2560x1440.


Says enough.
 


:lol: I can't say that I've seen any of those posts, but no, it's not true. If it were, I would have seen such a thing quite some time ago, as I've been running a 380 for folding 24/7 for over four months now.
 

Roti-Kebab

Reputable
May 12, 2015
I'm seeing a lot of bias in this post (forgive me if I'm wrong). Both the 960 and the 380 are highly competitive, but what I prefer in the 960 over the 380 is the huge amount of overclocking headroom; you can achieve the same if not greater performance on your 960 if you know your overclocks. But if you don't feel safe messing around with these things (even though it's super easy), I would suggest you go with the 380. In the end, no matter what you read, Nvidia still has the better drivers and better optimizations, and no, I'm not biased, I just have hard facts. If you do end up going for the 960, I'd go for Gigabyte's G1 Gaming or MSI's Gaming 2G; they're some of the best overclockers I have seen. I personally have the MSI one at 1567 on the core and 4011 on the memory. :D
 

Care to explain the improved tessellation performance with just a back end tweak?

Yeah right... Google "GTX 960 driver problems" and "R9 380 driver problems" and then come back.
 


Driver improvements? A memory bus change? You tell me, since you seem to think you know how an incremental change somehow translates into a different architecture.
 

Roti-Kebab

Reputable
May 12, 2015

Please, do you even driver, bro? AMD only just stepped up their game with Crimson, but we know very well they are still struggling with drivers in general, not just with the 380. And yes, I have around 15+ people in my contacts who run a 380, and about 6 of them have driver issues. I'll admit the 960 has them too, but not as many. And we all agree Nvidia has far more optimized games out there.
 


Not to mention the issue of Crimson switching fans off and causing cards to overheat and die, let's just gloss over that one eh? :whistle:
 
Driver improvements did not increase the tessellation performance of GCN 1.0 and GCN 1.1 to the level of the GCN 1.2 cards.
The memory bus was indeed changed: its bandwidth was reduced.

You fail. Incremental changes do mean a different architecture. By your logic, Maxwell and Maxwell 2 are the exact same architecture, but they aren't. Still funny that this oh-so-old architecture can do concurrent async compute while nVidia's most modern, oh-so-new architecture can't. Even funnier is that this oh-so-old architecture is DX12 compatible with FL_12. Newer does not always equal better.

This is a long story that doesn't need to be explained here... Short version: yes, they have issues with drivers under DX11 because their GPUs need to be fed in parallel, while DX11 only allows serial feeding. nVidia's GPUs are designed for serial feeding. All of this is eliminated under DX12 and Vulkan, which is why AMD cards received the biggest boost, with an R9 290X rivaling a GTX 980 in performance...
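If you want to see what "parallel feeding" means in practice, here's a rough, hypothetical D3D12 sketch (my own illustration, not from any vendor sample or benchmark): the application creates a graphics queue and a separate compute queue on the same device and can submit work to both, whereas DX11 only exposes a single implicit context that everything gets funnelled through.

// Minimal D3D12 sketch (error checking omitted): separate graphics and
// compute queues are what make "async compute" possible at the API level.
#include <windows.h>
#include <d3d12.h>
#include <wrl/client.h>

using Microsoft::WRL::ComPtr;

int main()
{
    // Create a device on the default adapter.
    ComPtr<ID3D12Device> device;
    D3D12CreateDevice(nullptr, D3D_FEATURE_LEVEL_11_0, IID_PPV_ARGS(&device));

    // Direct (graphics) queue: accepts graphics, compute and copy work.
    D3D12_COMMAND_QUEUE_DESC gfxDesc = {};
    gfxDesc.Type = D3D12_COMMAND_LIST_TYPE_DIRECT;
    ComPtr<ID3D12CommandQueue> gfxQueue;
    device->CreateCommandQueue(&gfxDesc, IID_PPV_ARGS(&gfxQueue));

    // Separate compute queue: work submitted here can overlap with the
    // graphics queue. Whether the GPU actually runs both concurrently is
    // up to the hardware scheduler, which is where AMD and Nvidia differ.
    D3D12_COMMAND_QUEUE_DESC computeDesc = {};
    computeDesc.Type = D3D12_COMMAND_LIST_TYPE_COMPUTE;
    ComPtr<ID3D12CommandQueue> computeQueue;
    device->CreateCommandQueue(&computeDesc, IID_PPV_ARGS(&computeQueue));

    // ...record command lists and call ExecuteCommandLists() on each queue...
    return 0;
}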

And yeah, the reason they have more 'optimized' games out there is their dirty GameWorks campaign, which anyone who cares about games should be strongly opposed to.

But enough off-topic talk. We're not helping the thread starter this way.

-------------------------------------------------------------------------------------------

Edit regarding the post below, so as not to waste more posts on off-topic talk:

http://www.extremetech.com/extreme/213519-asynchronous-shading-amd-nvidia-and-dx12-what-we-know-so-far
tl;dr:
AMD async = vastly more powerful + pure hardware;
nVidia async = 'light' only + requires CPU resources, i.e. a software solution.
 


The .xx indicates an incremental change and 2.0 is an architectural change; Google it, mate. And Nvidia cards can do async compute, it just has to be coded correctly for them; again, Google it.