R9 Fury X or GTX 980 Ti?

TreyJackson

Reputable
Dec 25, 2015
8
0
4,510
Hello all! I'm building a PC in a couple of months and I have all my parts chosen except a video card. I'm going to be buying either an R9 Fury X or a GTX 980 Ti. I will not be doing 4K gaming, and I plan on mostly playing GTA V, The Witcher 3, and Just Cause 3. I don't really plan on overclocking either, so I can't decide which card to go with. I'm only buying one card, so I need help deciding between the two. Thanks!
 

stavrosmast

Honorable
Not an AMD hater, but here's why not to buy the Fury X:
- Barely any games are optimized for AMD GPUs, maybe 1-2
- Limited to 4 GB of VRAM
- Bad drivers
- Higher temps
- Very poor overclocking
- No ShadowPlay
- Higher chance of BSODs
- Higher power consumption
- Which translates to a bigger power bill
- You need a better PSU
- Higher CPU temp caused by the higher GPU temp


You want more?
 

xBlaz3kx

Reputable
Aug 18, 2014
310
0
4,860
I'd go with a GTX 980, not necessarily the Ti. If you're looking for a silent build, MSI has the perfect thing for you: the card doesn't use its fans until you start gaming and it draws more power. It will also be more efficient, with lower power consumption, because the clock frequency drops when it's idle.
 

There are plenty of AMD-optimized games.

The 4GB limitation is real, but practically never an issue with current games. The improved delta color compression algorithms in the newest architectures from both AMD and Nvidia help you get more mileage out of the VRAM.

AMD doesn't really have bad drivers. At least as of Windows 10, Nvidia has worse drivers than AMD.

The Fury X GPU actually runs very cool due to the water cooling. Cooler than practically all 980 Ti's, except of course ones that are also water cooled.

It doesn't OC as well as the 980 Ti though, that's true.

AMD has their own version of Shadowplay.

Higher chance of BSOD? Sounds like BullS*** On Demand.

Higher power consumption, not really. 220W in a gaming workload. AMD improved their power efficiency with the GCN 1.2 architecture. Throw a stress test at it and the Fury X can draw a lot more power, sure, but that's not a normal workload.

Higher power bill, that only applies if you run Furmark all day every day.
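To put a rough number on it (a back-of-the-envelope sketch; the 50 W difference, 4 hours of gaming a day, and $0.12/kWh rate are all assumptions for illustration, not measured figures):

```python
# Yearly cost of an assumed 50 W difference in gaming power draw.
# The wattage delta, daily gaming hours, and electricity rate are
# illustrative assumptions, not measurements.
watt_delta = 50        # W, assumed difference between the two cards
hours_per_day = 4      # h, assumed daily gaming time
price_per_kwh = 0.12   # $/kWh, assumed electricity rate

kwh_per_year = watt_delta / 1000 * hours_per_day * 365  # ~73 kWh
cost_per_year = kwh_per_year * price_per_kwh            # ~$8.76

print(f"{kwh_per_year:.0f} kWh/year -> ${cost_per_year:.2f}/year")
```

Even with those pessimistic assumptions, you're looking at well under $10 a year.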

Needing a better PSU, again not really.

A higher GPU temperature does not cause a higher CPU temperature. All that matters is how much heat the GPU exhausts inside the case, which depends on the power consumption, not the temperature of the GPU. But the Fury X is water cooled, meaning you can mount the radiator as an exhaust and expel all the heat straight out of the case. That's better for CPU temperatures than any aftermarket-cooled card.
 

xBlaz3kx

Reputable
Aug 18, 2014
310
0
4,860


Why would you need a water-cooled GPU anyway, unless you want to OC it? I think the important part is that it works well and is silent and efficient. MSI's GTX 980 has Twin Frozr, and its fans won't spin unless the card draws more power when gaming. It won't affect the heat of the CPU, and if you have a good CPU cooler and fans, you won't need that water cooling, will you?
 
Stavrosmast has a good point about game compatibility. The majority of recent games tend to favor Nvidia architecture, including the three you listed. With Nvidia holding 80% of the GPU market share, that situation's probably not going to change in the foreseeable future.

[Benchmark charts: GTA V, The Witcher 3, and Just Cause 3 at 1920x1080]
 
Solution

ROCKJAYDEN_22

Reputable
Aug 11, 2015
74
0
4,710
For me it's a pretty close race, but you have to ask yourself whether you want to go with AMD or Nvidia. Do you want to use ShadowPlay, or does it not bother you? Do you plan to SLI or CrossFire? CrossFire is a lot less picky than SLI. These are the things you have to ask yourself. Personally, I couldn't pass up the opportunity of ShadowPlay and therefore went Nvidia, but whatever you pick, you'll get pretty good performance!
 

TreyJackson

Reputable
Dec 25, 2015
8
0
4,510


Sorry for the late reply, I had no power. I'm already leaning towards the 980 Ti (probably from ASUS), but one of my friends says to go with the Fury X because of HBM. He says it'll last longer once games start utilizing HBM.
 


DDR3-2400 transfers data at 19.2 GB/s per channel (38.4 GB/s in dual channel), not 2 GB/s. And that's system memory, where you don't need much bandwidth. GPUs need far more memory bandwidth. Do you really think the GTX 980 Ti would have 336 GB/s of memory bandwidth if it weren't necessary and it could just run plain DDR3 or DDR4 instead?
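If you want to see where those figures come from, peak bandwidth is just the transfer rate times the bus width. A quick sketch of the arithmetic:

```python
# Peak theoretical DDR3 bandwidth: transfer rate (MT/s) x bus width (bytes).
transfers_per_sec = 2400e6    # DDR3-2400: 2400 million transfers per second
bytes_per_transfer = 64 // 8  # 64-bit memory channel = 8 bytes per transfer

per_channel = transfers_per_sec * bytes_per_transfer / 1e9  # GB/s
print(per_channel, per_channel * 2)  # 19.2 GB/s single, 38.4 GB/s dual channel
```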

The 512 GB/s on the Fury X is a real advantage, and it's the reason that, while the 980 Ti is faster at lower resolutions, the Fury X catches up at 1440p and pulls ahead at 4K (those are results at stock clocks; overclocking helps the 980 Ti out).
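The same formula applies to the cards themselves, using the published stock specs: a 384-bit GDDR5 bus at 7 Gbps per pin on the 980 Ti, and a 4096-bit HBM bus at 1 Gbps effective on the Fury X. A quick sketch:

```python
# Peak GPU memory bandwidth = bus width (bits) x data rate per pin (Gbps) / 8.
def gpu_bandwidth_gb_s(bus_width_bits, gbps_per_pin):
    return bus_width_bits * gbps_per_pin / 8  # GB/s

print(gpu_bandwidth_gb_s(384, 7.0))   # GTX 980 Ti, GDDR5: 336.0 GB/s
print(gpu_bandwidth_gb_s(4096, 1.0))  # R9 Fury X, HBM:    512.0 GB/s
```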
 


It's a bit uncertain how it'll go. The Fury X has an advantage in that the HBM is faster, but a disadvantage in that there's only 4GB of it.
 

stavrosmast

Honorable


That DDR3-2400 38 GB/s isn't real. I have DDR3-2400, and I made part of the RAM into a virtual HDD and ran HD Tune. It showed 2 GB/s. Why would a GPU need to transfer 512 GB/s? If you haven't tried DDR3-2400, then don't tell us what you found online...
 


That's just a problem with your benchmarking procedure. HD Tune for a RAM disk? It's designed for HDD benchmarking; it's not even any good for testing SSDs. Besides, you need a source and a destination, and there's nothing else in the system that could keep up with the RAM disk. The 2 GB/s limit sounds like you may have been transferring across the chipset, where the DMI 2.0 bandwidth of pre-Skylake Intel platforms is about 2 GB/s.
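That 2 GB/s figure for DMI 2.0 falls straight out of the link specs, since it's essentially a PCIe 2.0 x4 connection. A back-of-the-envelope sketch:

```python
# DMI 2.0 is essentially a PCIe 2.0 x4 link: 5 GT/s per lane with 8b/10b
# encoding, i.e. 500 MB/s of usable bandwidth per lane per direction.
lanes = 4
transfers_per_sec = 5e9       # raw 5 GT/s per lane
encoding_efficiency = 8 / 10  # 8b/10b line coding overhead

bytes_per_lane = transfers_per_sec * encoding_efficiency / 8  # 500 MB/s
print(round(lanes * bytes_per_lane / 1e9, 1))                 # ~2.0 GB/s total
```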

You can see the per-module bandwidth of JEDEC standard DDR3 speeds here. It doesn't go any higher than DDR3-2133, but even that has a per-module (or per-channel) bandwidth of 17.1 GB/s, i.e. 34.1 GB/s in dual channel.
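The same transfer-rate-times-bus-width arithmetic as above gives the numbers for all the JEDEC DDR3 speed grades:

```python
# Per-module (64-bit channel) peak bandwidth for the JEDEC DDR3 speed grades.
jedec_grades = {
    "DDR3-800": 800, "DDR3-1066": 1066.67, "DDR3-1333": 1333.33,
    "DDR3-1600": 1600, "DDR3-1866": 1866.67, "DDR3-2133": 2133.33,
}
for name, mt_per_sec in jedec_grades.items():
    gb_per_sec = mt_per_sec * 1e6 * 8 / 1e9  # 8 bytes per 64-bit transfer
    print(f"{name}: {gb_per_sec:.1f} GB/s per channel")
```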

A GPU would need to transfer 512 GB/s because it's doing massively parallel calculations on lots of data. 4K at 60 Hz in itself is already 500 million pixels per second, and then there's all the work that is done on each pixel.
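That pixel-rate claim is easy to sanity check, and even a single 32-bit color write per pixel per frame already adds up to a couple of GB/s before you count overdraw, texture fetches, and multiple render passes. A rough sketch:

```python
# Raw pixel throughput at 4K60, and the bandwidth of just one 32-bit color
# write per pixel per frame. Real rendering touches each pixel many times,
# so actual memory traffic is far higher.
width, height, hz = 3840, 2160, 60
pixels_per_sec = width * height * hz  # 497,664,000 pixels/s
bytes_per_pixel = 4                   # one 32-bit color write

print(round(pixels_per_sec / 1e6, 1))                    # ~497.7 Mpixel/s
print(round(pixels_per_sec * bytes_per_pixel / 1e9, 1))  # ~2.0 GB/s minimum
```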