Will my power supply support 2 RADEON R9 290s (NOT 290X)

Supervhizor

Honorable
Aug 5, 2013
I have a Corsair TX750M power supply that currently runs two GTX 660s and an overclocked i5-3570K (4.2 GHz). I am planning to upgrade my graphics setup to two Radeon R9 290s.

I'm wondering if 750 watts is enough power for those cards plus my overclocked CPU. Also, will I have the headroom to overclock the new graphics cards? If you need more information on my system, just ask. Thanks!
 

InvalidError

Titan
Moderator
The 290s can peak at close to 300W each running FurMark (closer to 200W during actual games), so you would be looking at close to 700W worst-case peak power. It is highly unlikely that everything will peak at the same time, though, so as long as you do not run FurMark or similar, 750W should be enough.
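
A quick back-of-the-envelope sketch of that estimate; the GPU numbers are from this thread, while the CPU and rest-of-system figures are assumptions rather than measurements:

# Rough worst-case totals for a dual R9 290 build.
GPU_FURMARK_PEAK_W = 300   # per R9 290, synthetic worst case (FurMark)
GPU_GAMING_PEAK_W = 200    # per R9 290, typical gaming peak
CPU_OC_W = 100             # assumed draw for an overclocked i5-3570K
OTHER_W = 50               # assumed board, RAM, drives, fans

def system_peak(gpu_each_w, gpu_count=2):
    """Draw if every component peaked at the same time."""
    return gpu_count * gpu_each_w + CPU_OC_W + OTHER_W

print(system_peak(GPU_FURMARK_PEAK_W))  # ~750 W: FurMark-style everything-at-once peak
print(system_peak(GPU_GAMING_PEAK_W))   # ~550 W: a more realistic gaming peak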
 

aznricepuff

Honorable
Oct 17, 2013
According to Tom's review of the 290, the card can reach a maximum of about 220W power draw under load. Of course, the card was not overclocked for their tests. Based on that, I would say your PSU could definitely handle two reference, non-OC 290s. You would not have much headroom for overclocking, though.
 

2x4b

Honorable
Oct 28, 2013
I looked up the Sapphire card; it wants a 750W PSU for one card. Gigabyte wants a 600W PSU.

The TX750M can supply 62 Amps on the +12V rail (source: Newegg), but I can't seem to find how much current either of these cards draws.

Neither manufacturer mentions how much you would need for two cards.

I would be wary of putting much less than 1000W behind a dual-card setup.
But heat may be your worst enemy.
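
For what it's worth, the amp rating converts straight into wattage, and the per-card peaks quoted earlier in the thread give a rough idea of the current involved; these are estimates, not manufacturer figures:

# Convert the TX750M's +12V rating (62 A per Newegg) into wattage, and
# estimate per-card current from the ~300 W / ~200 W peaks quoted above.
RAIL_V = 12.0
TX750M_12V_A = 62.0

rail_watts = RAIL_V * TX750M_12V_A    # 744 W available on the +12V rail
card_amps_furmark = 300 / RAIL_V      # ~25 A per card, synthetic worst case
card_amps_gaming = 200 / RAIL_V       # ~17 A per card while gaming

print(f"+12V capacity: {rail_watts:.0f} W")
print(f"Two cards, FurMark: ~{2 * card_amps_furmark:.0f} A of {TX750M_12V_A:.0f} A")
print(f"Two cards, gaming:  ~{2 * card_amps_gaming:.0f} A of {TX750M_12V_A:.0f} A")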
 

cuecuemore

Distinguished

Why are you commenting when you admit you have no clue and your recommendation is based on an irrational fear?
 

Supervhizor

Honorable
Aug 5, 2013
Thanks for all the help! I will definitely consider a new power supply, as I will be using things like FurMark and Heaven as well as games to test OC stability. Now, $159.99 is a smidge too much for me, as I will have around $950 for this project and the cards are coming in at $863 plus shipping after tax. Unless something goes on Christmas or Black Friday sale, would this be a sufficient alternative to the PSU suggested by Blackbird? http://m.newegg.com/Product/index?itemnumber=N82E16817182188
I've heard good and bad things about Rosewill PSUs, and it has 4 eggs on Newegg, so if this is a bad buy, would an 850 watt Corsair PSU be enough? Like http://m.newegg.com/Product/index?itemnumber=N82E16817139022
 
For a system using two Radeon R9 290 graphics cards in 2-way CrossFireX mode, a minimum 850 Watt or greater system power supply is recommended. The power supply should also have a maximum combined +12 Volt continuous current rating of 65 Amps or greater and have at least two 6-pin and two 8-pin PCI Express supplementary power connectors.

Total power supply wattage is NOT the crucial factor in power supply selection! Sufficient total combined continuous power/current available on the +12V rail(s), rated at 45°C to 50°C ambient temperature, is the most critical factor.

Overclocking the CPU and/or GPU(s) may require a further increase to the maximum combined +12 Volt continuous current rating recommended above, to meet the extra power required for the overclock. The additional amount depends on the magnitude of the overclock being attempted.

To handle running FurMark with overclocked GPUs, you should be looking at a PSU with a maximum combined +12 Volt continuous current rating of at least 68 Amps.
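
Expressed as wattage against the OP's TX750M (62 A on +12V), those current recommendations work out as follows; this is just a unit conversion:

# Compare the recommended +12V continuous current ratings against the
# TX750M's 62 A rating by converting everything to watts.
RAIL_V = 12.0
TX750M_12V_A = 62

recommended_amps = {
    "two stock R9 290s (CrossFireX)": 65,
    "FurMark with overclocked GPUs": 68,
}

for label, amps in recommended_amps.items():
    print(f"{label}: {amps} A -> {amps * RAIL_V:.0f} W on +12V")

print(f"TX750M: {TX750M_12V_A} A -> {TX750M_12V_A * RAIL_V:.0f} W on +12V "
      "(short of both recommendations)")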
 

scoobydenon

Distinguished
Feb 27, 2011


Yes, a Corsair 850 should be OK.
 
GPGPU usage, like Bitcoin mining, can push power draw about 37 Watts higher than the peak reached during gaming.

If your PSU is able to handle the GPU running FurMark, then you shouldn't ever encounter a problem due to insufficient power.
 

InvalidError

Titan
Moderator

The main reason is that FurMark is a purely artificial workload specifically designed to push GPU power draw well beyond what any real-world application would, and it produces no useful performance data.

In the past, Nvidia and ATI/AMD cards were not designed for artificial stress-testing, and things like FurMark ended up killing them, with AMD and Nvidia issuing statements about how their GPUs are not designed for such uses.

On the next GPU generation after that, both started implementing throttling so people would no longer be able to push GPUs to their breaking point using synthetic benchmarks alone.

The only use for it is as a simple burn-in test - to see if the system can handle the abuse.
 
I'm sorry, but what? Are you saying that FurMark somehow uses the graphics card in a way no other software can? If a graphics card can't be run at full load, whether that load is generated by software labelled a synthetic benchmark, a game, or anything else, it's not stable.

You might as well tell people they shouldn't run Prime95 for fear of burning out the VRM section of their motherboard.

This sounds like advice designed to sidestep the reality of poor manufacturing quality.
 

InvalidError

Titan
Moderator

If you look at the 290X benchmarks, it maxes out around 230W in the most GPU-intensive games available. FurMark pushes it to 300W.

No practical software would be optimized to waste as many resources as possible the way FurMark is.

So, rather than designing GPUs to take things like FurMark (which have no practical real-world use) head-on, GPU designers decided to do thermal and power management instead.


Well, you do realize that both AMD and Intel CPUs have thermal throttling too, right? Push them too hard and they start cutting back. They have had that feature for about 12 years now... Intel was the first to integrate this on-die with the P4 nearly 14 years ago.

Managing TDP is more reliable and flexible than ensuring the HSF will meet worst-case scenarios under all circumstances, and these days everyone is doing exactly that.
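
As a toy illustration of what that power management does (a simplification with made-up numbers, not anyone's actual firmware): when measured power exceeds the configured limit, clocks step down; when there is headroom, they step back up.

# Toy power-limit throttling loop. The limit, step size and clock range
# are illustrative values only, not real GPU firmware behaviour.
POWER_LIMIT_W = 250
CLOCK_STEP_MHZ = 13
MIN_CLOCK_MHZ, MAX_CLOCK_MHZ = 300, 1000

def adjust_clock(clock_mhz, measured_power_w):
    if measured_power_w > POWER_LIMIT_W:
        return max(MIN_CLOCK_MHZ, clock_mhz - CLOCK_STEP_MHZ)  # throttle down
    return min(MAX_CLOCK_MHZ, clock_mhz + CLOCK_STEP_MHZ)      # recover headroom

# A FurMark-like load keeps power above the cap, so clocks walk down until
# the card settles at its power limit; a game that stays under the cap
# simply runs at full clocks.
clock = MAX_CLOCK_MHZ
for measured_w in (290, 282, 271, 263, 255, 248):
    clock = adjust_clock(clock, measured_w)
    print(clock, "MHz")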
 
You're kidding, right? How is running a piece of software on a device "pushing it too hard"? I would understand your statement more if you said something like "push it too hard for a given cooling solution," but you make it sound as though neither CPU nor GPU is able to handle a 100% load.

You also put forth the assumption that no other software could be optimized enough to put the same level of burden on a GPU as FurMark does. So are we to hope, based on this, that games remain inefficient, along with GPU transcoding and any other tasks we might throw at the graphics card?

When was the last time you accepted your CPU throttling? Just because it can do it doesn't mean you let it. Properly cooled, your GPU and CPU should both handle the load they are sold to handle. My point about Prime95 was not about thermal throttling, which I understand perfectly well. It was about cheap VRMs that are billed as being able to handle a load they clearly cannot.
 

InvalidError

Titan
Moderator

If you look at GPU load while running FurMark and games, both push the GPU to 100% as far as the drivers are concerned, except that FurMark is optimized to be wasteful in the most power-hungry way, while games and applications are optimized to be computationally efficient, or at the very least not purposefully wasteful.

This is a bit like having a handful of CPU instructions that are substantially more power-hungry than the rest, and someone writing a burn-in program that uses those instructions exclusively, even though no practical software could be written with those instructions alone: sooner or later, a real-world application needs memory loads/stores, conditional jumps, basic arithmetic, inter-process communication, API calls, etc.

My point: the code in burn-in software does not need to make any practical sense.

As for accepting throttling on my CPU, I'm running an i5-3470 with the stock HSF, and Turbo Boost settles at 3.4-3.5GHz under load, 100-200MHz short of its maximum Boost clock. So, if you count not hitting maximum Boost speed as throttling (I do), my answer would be: right now / always.
 

How can I take you seriously here? An i5-3470 has a target frequency of 3.2 GHz. For the purposes of clarity, I'm going to be specific and say I consider there to be at least two types of throttling, negative and positive. You are not experiencing negative throttling, which would be a reduction in core speed from its target, at 100% load, due to adverse conditions. You are experiencing the correct functioning of positive throttling. Boost speeds are more or less a nice bonus, but they are not guaranteed, as they require the chip to be running under its max TDP and to satisfy whatever other little tidbits the designers cared to take into account.

You just said that under load you're only getting 3.4 - 3.5 GHz? That sounds correct, depending on what you mean by "load." 3.4 GHz is the highest 4 cores are supposed to clock on that chip, 3.5 GHz with 3 cores, and 3.6 GHz is only attainable with 1 - 2 cores active.
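
Put as a simple lookup, those bins look like this (the figures are as stated here, not pulled from a datasheet):

# Turbo Boost bins for the i5-3470 as described in this post:
# base 3.2 GHz, with the maximum boost depending on active core count.
I5_3470_BASE_GHZ = 3.2
I5_3470_BOOST_BINS_GHZ = {1: 3.6, 2: 3.6, 3: 3.5, 4: 3.4}

def max_boost_ghz(active_cores):
    return I5_3470_BOOST_BINS_GHZ.get(active_cores, I5_3470_BASE_GHZ)

# Sustaining 3.4-3.5 GHz on an all-core load is the chip behaving as
# intended, not negative throttling below its 3.2 GHz base clock.
print(max_boost_ghz(4))  # 3.4
print(max_boost_ghz(2))  # 3.6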
 

Burn-in software, coded to be burn-in software, fulfils its own purpose, and therefore makes perfectly practical sense, but maybe that's just me!

You do realize that to a computer, nothing makes practical sense? Why should we be concerned with attaching rationality to computer code? All a computer does is handle bits. All software applies a load to the silicon in one way or another. There really is no such thing as synthetic, from the computer's perspective, as all software presents itself as a task to be completed. It should not matter what set of instructions you pass to a CPU, a GPU, or any other chip. There's no magic, self-destructing set of super-efficient instructions, or malicious code writers would have exploited them years ago.

The reason I brought up Prime95 is that it is a perfect example of a stress test that efficiently loads a CPU, can place impressive loads on the memory subsystem as well, and has been known to lead to hardware damage when the hardware is unable to handle the loading. At the same time, it is a legitimate tool when used for the purpose of finding prime numbers.

The inability of the hardware to handle the loading concerns the power delivery and cooling, not the design limits of the CPU. The CPU is designed to run at its target frequency, within its temperature window, quite happily, for the duration of its useful life. If the power circuitry feeding the CPU or GPU or any other chip is insufficient, then somebody cut corners somewhere, either in pairing the load to the circuitry in the final build, or by flat-out lying, as happens with cheap, Chinese-firecracker power supplies that are rated for loads they clearly can't handle.

If people are going to buy video cards that die due to FurMark's loading, and that's why you recommend against FurMark, I strongly suggest you change your strategy and start recommending graphics cards that are actually built to handle the potential loading they may see, rather than trying to mollycoddle bad manufacturing. There is a reason for the large market of add-in boards to pick from, and a reason why some cost more than others. Lowest price is not always best.
 

InvalidError

Titan
Moderator

CPU, GPU, SoC, etc. engineers disagree, seeing how they chose to implement thermal management/throttling rather than design their reference coolers for worst-case scenarios, precisely to avoid worrying about that, among other things.
 
A reference is just that, something designers and manufacturers are free to deviate from, and nobody designs a chip that will fail at 100% load. Thermal management can be implemented for more than one reason, and when I run synthetics such as Prime95 and FurMark, neither my CPU nor my GPU throttles itself due to thermals, at which point I have to conclude that the engineers you are referring to didn't build my equipment, or else went against their principles.

I think we just have to agree that you and I are going to disagree on this point.
 

InvalidError

Titan
Moderator

But regardless of how they deviate from it, the feature is still there and will prevent the chip from exceeding what its attached thermal solution can handle, so things like FurMark should no longer be able to kill cards under normal circumstances the way they did years ago, before GPU manufacturers started doing this.
 

Then why does anybody care if other people choose to run FurMark? :) We've finally come full circle to my original question, which was aimed more at cuecuemore, but you answered instead. If FurMark can't kill a card, since everybody is thermal throttling, why his adamant command never to run FurMark? What's his dog in this fight? Does he have a personal vendetta against FurMark or something?