AMD Radeon R9 285 Review: Tonga and GCN Update 3.0

Tags:
  • Graphics Cards
  • AMD
  • Gigabyte
  • Components
  • Graphics
  • Asus
  • GPUs
  • Radeon
September 2, 2014 4:56:22 AM

On paper, the new Tonga-based R9 285 looks to be slightly slower than the R9 280 it is intended to replace, but there's more to it than meets the eye.

AMD Radeon R9 285 Review: Tonga and GCN Update 3.0 : Read more


September 2, 2014 5:34:03 AM

I wanted to see the GPU die and OCing results. :( 
September 2, 2014 5:35:00 AM

The idle power consumption numbers are odd; the previous-generation cards used less at idle, didn't they? Not that 15 watts is going to break anyone's bank account, but it's strange nonetheless.

Good to see AMD has tackled the noise and temperature issues that have plagued its previous 28nm cards as well, but it's a bit late in the day given that 20nm shouldn't be too far off now.
September 2, 2014 5:35:39 AM

TL;DR: Pay more to get the same performance in a more power-efficient form.
September 2, 2014 5:35:54 AM

Really nice article, guys. I'm impressed by how the 285 actually was able to keep up with the 280. And I'm shocked by the fact that the $250 Nvidia card loses to a $170 AMD card. Thank god I bought a GTX 770 :p 

Also, on the last page, you guys wrote R7 270X instead of R9, and in the chart it says "Relative to Radeon HD 7950 Boost". Oh, and in the Pros section, it says the 285 has R9 260-like performance?

[EDIT by Cleeve]
Thanks for the proofread, fixing it now! :) 
[/edit]
September 2, 2014 5:37:02 AM

Wow! At $250 it actually is a better card than even the 280X!! And it was meant to compete with the 760... but as it shows here, even a 270X is a WAY better card than the 760...
September 2, 2014 5:38:22 AM

Had the Tonga-based 285 come with 6GHz/7GHz GDDR5 and 4GB of VRAM, the results would be a lot different. What's with AMD putting on 5500 MT/s memory? facepalm.jpg
September 2, 2014 5:41:24 AM

While this is really a third GCN iteration, presenting it as version 3.0 (as in: "Tonga and GCN Update 3.0") makes no sense to me.
September 2, 2014 5:41:34 AM

Someone wrote this with a .45 ACP to the head; I see some errors in model numbers, etc...
I'd prefer to get an R9 280 and downclock it to get the same results. I can't see the point of this much heat on a graphics card. Maybe drivers. OR THIS IS HAWAII XT! Too much heat!
September 2, 2014 5:44:34 AM

Quote:
I wanted to see the GPU die and OCing results. :( 

I think the guys figured that if they pushed the OC, the room would burn! Maybe a problem with drivers.
Last time I saw heat like that was in the 290X tests. lol!
September 2, 2014 6:02:51 AM

On the first page, it says "Improvements are always welcome but with the memory interface cut in half compared to the Radeon R9 280,...".

But in fact, the memory interface was cut by a third (384 bit -> 256 bit), not half.

[Edit by Cleeve]
Good point, fixed! Thx.
[/edit]
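For anyone double-checking, the corrected fraction is easy to verify from the published bus widths and memory data rates; here's a quick sketch (the figures are the cards' commonly listed specs, treated as approximate):

```python
# Back-of-the-envelope check of the bus-width cut discussed above.
# Bus widths and data rates are the commonly listed specs for these cards.

def bandwidth_gbs(bus_bits, data_rate_gtps):
    """Theoretical memory bandwidth in GB/s: bytes per transfer x transfer rate."""
    return bus_bits / 8 * data_rate_gtps

r9_280 = bandwidth_gbs(384, 5.0)   # 384-bit bus at 5.0 GT/s -> 240.0 GB/s
r9_285 = bandwidth_gbs(256, 5.5)   # 256-bit bus at 5.5 GT/s -> 176.0 GB/s

bus_cut = 1 - 256 / 384            # one third, not half
print(f"bus cut: {bus_cut:.0%}, bandwidth: {r9_280} -> {r9_285} GB/s")
```

Note that the faster 5.5 GT/s memory claws back some of the loss, so theoretical bandwidth drops by only about 27% even though the bus itself shrank by a third.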
September 2, 2014 6:10:34 AM

You guys might want to update the first chart of this review; the one comparing the specifications of the 280, 285, and the 280X. The 280X is a Tahiti chip not Tonga.

[Edit by Cleeve]
Good catch, fixed but might take a while to populate. :) 
[/Edit]
September 2, 2014 6:16:42 AM

tomfreak said:
Had the Tonga-based 285 come with 6GHz/7GHz GDDR5 and 4GB of VRAM, the results would be a lot different.

Faster memory would have helped but more would not have made much of a difference: most of the extra memory on GPUs with more memory channels gets filled with extra copies of resources to improve availability. Without those extra channels, filling more RAM with extra copies would make little difference.
September 2, 2014 6:30:39 AM

I'd like to see this lossless color compression in 4K gaming cards.
September 2, 2014 6:50:16 AM

That's really dumb numbering...

The R7 265 is faster than the R7 260X, yet the R9 285 is slower than the R9 280X?
September 2, 2014 7:42:14 AM

AMD's hand was probably forced by G-Sync, so they had to quickly phase out all non-FreeSync cards before December... might expect an R9 285X by end of October.
September 2, 2014 7:42:48 AM

Someone Somewhere said:
That's really dumb numbering...

The R7 265 is faster than the R7 260X, yet the R9 285 is slower than the R9 280X?


Yeah, this should have been named the 275 or 275X.
September 2, 2014 7:49:14 AM

No, because that would imply that it's slower than the 280.

The 280X probably should have been the 285, and this card should have been released as the 280X. Or it could be next-gen; call it the 380 or 375.
September 2, 2014 7:55:22 AM

It kinda is slower than the 280. It trades blows with it, but still is not equal. I would say 275X would be fitting, as the 280/280X are 384-bit/3GB cards, whereas the 270/270X are 256-bit/2GB cards.
September 2, 2014 8:13:15 AM

logainofhades said:
It kinda is slower than the 280. It trades blows with it, but still is not equal. I would say 275X would be fitting, as the 280/280X are 384-bit/3GB cards, whereas the 270/270X are 256-bit/2GB cards.

The 270/280 are just rehashes of HD7xxx designs while the 285 is a cut-down 290... and the 285 does beat the 280 enough times to earn its place in the 28x range.

Give the 285 a 6GT/s memory interface and it would slot in more solidly between the 280 and 280X.
September 2, 2014 8:26:55 AM

Quote:
That's really dumb numbering...

The R7 265 is faster than the R7 260X, yet the R9 285 is slower than the R9 280X?


Indeed, naming schemes are always kind of bogus.

260 < 260X < 265

280 ≤ 285 < 280X

That's just the way it is.
September 2, 2014 8:32:53 AM

Could you make a graph showing other cards' power consumption? I find these numbers hard to relate to without them.
September 2, 2014 9:17:42 AM

With the differences in memory interface and bandwidth I would really have liked to see some resolution and AA scaling tests. It would be nice to see how the different memory speeds and bandwidth change the performance of each card.
September 2, 2014 9:29:55 AM

Overclocking?
September 2, 2014 12:15:34 PM

A few things to take into account when reading this review. First of all, the R9 285 that is tested is clocked at 918 MHz, not 954 MHz (read the small print). Secondly, the power consumption numbers measure an R9 285 clocked at 973 MHz instead of the 918 MHz card used to review gaming performance. Thirdly, there are no overclocking results.

Some additional tidbits about my third point. If rumors are true, the GTX 970 will be running at around 1200 MHz. Also, the GTX 770 is clocked at a little over 1100 MHz when playing games. Maxwell maxes out at about 1400 MHz, Kepler at about 1325 MHz, and GCN at about 1250 MHz. It seems that the R9 285 has more overclocking headroom than the GTX 770, and the soon-to-be-released R9 285X will have more overclocking headroom than the soon-to-be-released Maxwell cards. Also, note that the R9 285 is priced to compete with the GTX 760 even though an R9 285 at max OC should be closer to a GTX 770 at max OC than to a GTX 760 at max OC.
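Taking the commenter's clock figures at face value (they are forum speculation, not measured results), the headroom comparison works out roughly like this:

```python
# Rough headroom comparison using the clock ceilings quoted above.
# All figures are the commenter's speculation, not benchmark data.

stock   = {"R9 285": 918,  "GTX 770": 1100}   # typical in-game clocks, MHz
ceiling = {"R9 285": 1250, "GTX 770": 1325}   # claimed architecture maximums, MHz

for card in stock:
    headroom = ceiling[card] / stock[card] - 1
    print(f"{card}: ~{headroom:.0%} potential overclocking headroom")
```

That's roughly 36% for the R9 285 versus 20% for the GTX 770, which is the basis of the headroom claim above.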
September 2, 2014 12:24:40 PM

Good architecture upgrade, but it needs a production-node upgrade to really make a difference. Interesting to see that both Nvidia and AMD are heading in the efficiency direction with their new models.
Hopefully this is cheaper to produce than the 280 and improves production profits. We need these companies to stay afloat...
September 2, 2014 12:44:22 PM

I think the naming scheme is fine; it just doesn't fit how you all want it. GCN is also correctly described as GCN 1.2 in other reviews. Power figures are all over the place. Great card.
September 2, 2014 1:11:25 PM

I assume the 285X will come with the 7GHz GDDR5; they probably gimped the 285 on purpose.
September 2, 2014 1:17:29 PM

hannibal said:
Good architecture upgrade, but it needs a production-node upgrade to really make a difference. Interesting to see that both Nvidia and AMD are heading in the efficiency direction with their new models.
Hopefully this is cheaper to produce than the 280 and improves production profits. We need these companies to stay afloat...


+1
My hopes as well. We need AMD strong.
September 2, 2014 2:43:52 PM

AMD will do fine even without these cards since they got the consoles. This new card is an attempt to make more profit out of each card sold by reducing the cost to manufacture while increasing what it costs us, the consumers. The new card is slightly better than the 280 and doesn't improve that much on efficiency, only about 10% according to TechPowerUp, for instance. The R9 280 is selling for about $200-$220, which is also not that impressive considering the same card under its older name (HD 7950) sold for less than that a year ago! So we're paying a lot more than what we paid a year ago, getting a tiny bit more in terms of performance and efficiency, and this is called progress?
September 2, 2014 3:02:17 PM

hannibal said:
Interesting to see that both Nvidia and AMD are going to effiency direction with new models.

They do not really have any other choice: GPUs with wide memory controllers cost more to make so they have to make their GPUs more memory-efficient if they want to improve their performance-per-buck metric. This usually comes with improved performance-per-watt too.

If they fail to reduce their memory bandwidth dependence, GPUs' performance and cost will end up dictated entirely by memory interface bandwidth since the shaders will become starved for data an increasingly large proportion of the time.
September 2, 2014 6:31:28 PM

So, buy a 280X? Power consumption isn't really an issue for gaming consumers.
September 2, 2014 7:17:27 PM

I would have also liked it if you had included results from the 7800-series and 7900-series cards, as these aren't much faster than those! It would put AMD's lack of performance as a company into perspective: still using old CPUs and respun, rebranded video cards from four years ago, with barely 5% speed improvements.
September 2, 2014 9:14:03 PM

It shocks me how the GTX 760 is stacking up below the R9 270X, as I bought it thanks to your recommendation of it being the best $250 card at the moment. I don't mind that AMD has a better product now, which is great, but that a $170 card is the same or faster than your recommendation is senseless. Well, I think driver and brand optimizations pay off for the people aiming at AMD cards. Good article, btw.
September 2, 2014 9:44:11 PM

I am guessing this card should have a 4GB variant (double the RAM size) like other AMD/Nvidia cards. Do you think a 4GB model would help a lot, especially in Mantle?
September 3, 2014 12:45:51 AM

R9 280A would've made for a more sensible name.
September 3, 2014 1:18:41 AM

I'm pretty sure in the 30-year live stream thingy they said something about 4GB versions being a thing,

although that could be the unannounced R9 285X.
September 3, 2014 2:40:54 AM

Is that a bug or something in the GPU-Z images?

Asus: Bus Interface: PCI-E 3.0 x16 @ x16 1.1 (stuck at PCI-E 1.1 x16 mode?)
Gigabyte: Bus Interface: PCI-E 3.0 x16 @ x1 1.1 (stuck at PCI-E 1.1 x1 mode?)
September 3, 2014 3:15:51 AM

Avus said:
I am guessing this card should have a 4GB variant (double the RAM size) like other AMD/Nvidia cards. Do you think a 4GB model would help a lot, especially in Mantle?

Probably nowhere near as much as people might expect: most of the extra RAM on 384/512-bit boards simply holds extra copies of data already on other channels to make it more available, so all channels get more even loading.

Without the extra channels, the GPU has no need for those extra copies and the associated RAM usage. This is why many games' GPU memory usage scales with channel count... for identical settings, people with 2GB dual-channel GPUs may see ~1.6GB usage, while people with 3GB triple-channel GPUs might see 2.2-2.4GB and 4GB quad-channel GPUs may show ~3GB... most usage is simply the same 700-900MB artwork payload getting replicated across two, three, or four channels for bandwidth multiplication.
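The replication pattern described above can be sketched as a toy model; the payload and working-set sizes below are illustrative guesses chosen to roughly match the usage figures quoted, not measurements:

```python
# Toy model of the VRAM usage pattern described above: one shared artwork
# payload replicated per channel group, plus a small unique working set.
# The 0.7 GB payload and 0.2 GB working set are illustrative guesses only.

def estimated_usage_gb(channel_groups, payload_gb=0.7, working_set_gb=0.2):
    """VRAM footprint if the shared payload is copied to every channel group."""
    return channel_groups * payload_gb + working_set_gb

for groups in (2, 3, 4):
    print(f"{groups} channel groups: ~{estimated_usage_gb(groups):.1f} GB")
```

With these guesses the model lands near the ~1.6 / 2.3 / 3.0 GB pattern the comment describes.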
September 3, 2014 8:10:03 AM

Quote:
GCN is also correctly described as GCN 1.2 in other reviews.


Not really.
AMD has never officially called Hawaii's GCN implementation 1.1, nor has it named Tonga's 1.2. AMD has avoided distinguishing nomenclature entirely between iterations; there is no official designation.

These are terms invented by the press, just like GCN update 3.0. :) 
September 3, 2014 10:21:12 AM

has it been upgraded with hdmi 2.0??
September 3, 2014 10:22:30 AM

saturn85 said:
Is that a bug or something in the GPU-Z images?

Asus: Bus Interface: PCI-E 3.0 x16 @ x16 1.1 (stuck at PCI-E 1.1 x16 mode?)
Gigabyte: Bus Interface: PCI-E 3.0 x16 @ x1 1.1 (stuck at PCI-E 1.1 x1 mode?)



This is normal. Without load, GPU-Z always shows only @ x1. :) 
September 3, 2014 11:09:56 AM

That is GCN 1.2
September 3, 2014 11:26:52 AM

Looks like my HD 7970 purchase was still one of the best decisions I have ever made for my system. :lol: 
September 3, 2014 12:10:57 PM

cleeve said:
Quote:
GCN is also correctly described as GCN 1.2 in other reviews.


Not really.
AMD has never officially called Hawaii's GCN implementation 1.1, nor has it named Tonga's 1.2. AMD has avoided distinguishing nomenclature entirely between iterations; there is no official designation.

These are terms invented by the press, just like GCN update 3.0. :) 



Yes, I understand that from other reviews, but they will use "GCN 2.0" when that design actually arrives. So other publications' use of GCN 1.1 and 1.2 makes much more sense and is less confusing for people as time moves forward, when AMD actually does release 2.0 and 3.0.

September 3, 2014 4:28:42 PM

So they're going to charge the same price as the 280X for a card that is only slightly better than my R9 270? Way to undercut and rip off your customers, AMD! Now your cards will match your bad, artifact-causing drivers.
September 3, 2014 4:38:50 PM

Brian Blair said:
So they're going to charge the same price as the 280X for a card that is only slightly better than my R9 270? Way to undercut and rip off your customers, AMD! Now your cards will match your bad, artifact-causing drivers.


Considering the price of some of the 280Xs out there right now, I do agree that $250 is a bit high. I'd rather spend the extra $25 and get a Sapphire 280X. Since it is replacing the R9 280, it should be priced about the same. You can get those for around $225.
September 3, 2014 7:22:47 PM

Well, hell. I just bought the kid an R9 270X two days ago... Should I keep it or exchange it for a 285? Same price (where I bought it).
September 3, 2014 11:01:09 PM

Quote:
AMD will do fine even without these cards since they got the consoles.


ROFL. Check their financial reports since the consoles hit and let me know how that's helping. Have you seen the console sales numbers? Do you realize they're only getting 10-15% margins right now? Even at 20%, the earnings from this can barely cover the $180 million in interest on their debt each year, not to mention whatever they'll owe GF again, most likely due to yet another take-or-pay fine (I hope that is over, but we'll see). Between the two consoles they've only sold ~16 million (10 million for Sony, maybe 6 million for Microsoft). That isn't enough to say consoles will support AMD, and SoCs from all vendors are moving in on their performance and will do so yearly. The price of games on Android/iOS/Steam etc. vs. $60+ games on consoles is already a VERY tough sell and will continue to get worse.

My guess is that at 20nm or 14/16nm you'll see a two-chip Android console (as in 2x M1, or whatever NV names the K1 successor) with a 100-150W PSU that catches them and makes them a moot point. Or some arrangement like this, such as an SoC with just the CPU portions paired with an NV discrete GPU. The biggest mistake the consoles made was going so low that a tablet SoC could catch them very quickly (less than halfway into their lifecycles easily; we're not even a year in now, with 20nm around the block). Google or Apple could put out a special SoC with extra GPU units (SMXs for NV, etc.) just for this purpose. Tablets with the K1 (and its 20nm future versions, plus all competitors) will already be darn good at running games on your TV via HDMI/Miracast etc., and again with far cheaper games. You won't get the same sales this gen for consoles vs. last gen, with all the CHEAP game choices people have during the second half of the consoles' life cycles. Usually that is when the casual gamers pick up the slack, but that won't happen this time. They'll have far more options to choose from (Steam boxes too), and those get better yearly for everyone EXCEPT consoles.

AMD needs to start making money from their core products (CPU/GPU/APU) beyond the consoles or get nowhere fast.