
Radeon HD 4000 series (RV770) specifications surface

February 15, 2008 6:17:15 PM

Didn't see this posted elsewhere so here goes

http://www.nordichardware.com/news,7356.html

Excerpt:
Quote:

Radeon HD Shaders Core(s) VRAM GFLOPS Frequency (MHz)
4870X2 2x480 2xRV770 1GB GDDR5 2016 1050/1800
4870 480 RV770 1GB GDDR5 1008 1050/2200
4850 480 RV770 512MB GDDR5 816 850/1800
4670 240 RV740 512MB GDDR4 480 1000/1200
4650 240 RV740 256MB GDDR4 384 800/1000
4470 40 RV710 256MB GDDR3 - 900/800
4450 40 RV710 128MB GDDR2 - 700/500

--
RV770 will break 1TFLOPS, while the R700, with two RV770 cores, should be capable of twice that, more than 2TFLOPS. This is higher than we had expected, but then again the core frequencies are also higher than expected. All cores are made on TSMC's 55nm process. We're still trying to hunt down more reliable information on the design of the "core".
--
The idle power consumptions of all cards are extremely low and even the dual-core Radeon HD 4870X2 will consume less than 25W when idle.
--
The number of TMUs has finally been increased, actually doubled, to 32 with RV770, while RV740 sports 24 TMUs and the low-end RV710 has 8 TMUs.
--
To end this long and speculative post, we would like to say that most, if not all, of the variables here are subject to change and should be considered with a "tiny" pinch of salt. We're still four months away from the launch.


Looking good so far :ange: 
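For anyone who wants to sanity-check that GFLOPS column: the numbers line up exactly if you assume each shader unit does one multiply-add (2 FLOPs) per clock. Quick sketch (the figures are copied straight from the quoted table, so they're rumors too):

```python
# Check the rumored GFLOPS column: each shader unit is assumed to do
# one multiply-add (2 FLOPs) per cycle, so GFLOPS = shaders * 2 * GHz.
specs = {
    # card: (shaders, core MHz, quoted GFLOPS)
    "4870X2": (2 * 480, 1050, 2016),
    "4870":   (480, 1050, 1008),
    "4850":   (480, 850, 816),
    "4670":   (240, 1000, 480),
    "4650":   (240, 800, 384),
}

for card, (shaders, core_mhz, quoted) in specs.items():
    gflops = shaders * 2 * core_mhz / 1000
    print(f"{card}: computed {gflops:.0f}, quoted {quoted}")
```

Every row checks out, which at least suggests the table is internally consistent rather than random numbers.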
February 15, 2008 6:51:30 PM

GDDR5??????? Man! When will Nvidia ever decide to use GDDR4?
February 15, 2008 6:52:44 PM

If these specs are true, Nvidia is going to be toast...


There must be something wrong, or a downside somewhere...


Doesn't seem to be anything out of the ordinary though. Just because Nvidia didn't use GDDR4 doesn't mean they can't jump straight to GDDR5.
February 15, 2008 7:01:28 PM

Let's hope it's not another 2900XT situation. Looks really promising though.
February 15, 2008 7:01:47 PM

About freakin' time they raised the TMU count.

Looks like they didn't raise the ROP count. More shaders, however. :) 

4850 looks like the sweet spot.

This still won't be able to play Crysis at max detail however. :( 
February 15, 2008 9:08:40 PM

Well, if that's anything like accurate, then it's back to the drawing board again with my build scheduled for about April. I can remember not so long ago being bothered by the relative lack of new tech; now I finally decide to build and they release a new CPU/GPU every other month. Oh well, it's all good, and if the reported specs are close then the fight is really on with Nvidia.
Guess I will just have to draw the line in the sand somewhere or the thing will never get built. :) 
Definitely time the TMU count went up, but I was kind of expecting that; it just seemed a logical step to take. Would have liked the ROP count up as well, but they need to save something for further down the line, don't they?
Mactronix
February 15, 2008 9:21:29 PM

Just buy what you can afford. You don't have to wait on the parts every time Nvidia, AMD or Intel release something new.

These 4000 cards only compare to the GeForce 8 series. Sure, they have higher clocks and memory, but it's still a 16 ROP card with a 256-bit memory bus. It's not going to make some drastic improvement over what we have now.
February 15, 2008 9:23:03 PM

Who knows, the 4870X2 probably could play Crysis. 960 shaders, 64 TMUs.
Man, that's a high clock @ 1050MHz.
February 15, 2008 9:27:38 PM

marvelous211 said:
Just buy what you can afford. You don't have to wait on the parts every time Nvidia, AMD or Intel release something new.

These 4000 cards only compare to the GeForce 8 series. Sure, they have higher clocks and memory, but it's still a 16 ROP card with a 256-bit memory bus. It's not going to make some drastic improvement over what we have now.

How would it not make drastic changes? Can't you see a kick-ass midrange here? The 4850 with 480 shaders and 32 TMUs, and the 4670 with a 256-bit bus, 24 TMUs, 240 shaders. Huge bang for buck if ATI sticks with the pricing. 4670 @ around $100. :hello:  :pt1cable: 
February 15, 2008 9:45:29 PM

480 shaders are still gonna be crap against what Nvidia has, shader for shader. Any idea how many the GeForce 10 will have?
February 15, 2008 9:46:34 PM

aznstriker92 said:
How would it not make drastic changes? Can't you see a kick-ass midrange here? The 4850 with 480 shaders and 32 TMUs, and the 4670 with a 256-bit bus, 24 TMUs, 240 shaders. Huge bang for buck if ATI sticks with the pricing. 4670 @ around $100. :hello:  :pt1cable: 


30-50% difference? Not much really. From the 7 series to GeForce 8 was a drastic change. As much as 100+% difference.

These new cards cost more anyway. The 8800GT is currently $200 after rebates.

You do know the GeForce 8800GT has 16 ROPs and 56 TMUs, right? 480 shaders sounds like a lot, but current AMD cards with 320 shaders can't compete with GeForce 8 with 128 shaders. IMHO 320 AMD shaders are only equivalent to Nvidia's 96 shaders.

A good price is always welcome on the low end, however. About time 256-bit memory is mainstream.
February 15, 2008 9:46:53 PM

Well, forget my plans for a 3870X2. I'll pass and just get a cheap 3850 to hold me over until the 4870 or 4870X2.
February 15, 2008 9:48:02 PM

GPUs at > 1GHz

Damn dude, I would've thought they would be power guzzlers at that speed

I hope ATI can deliver
February 15, 2008 9:53:45 PM

turboflame said:
GPUs at > 1GHz

Damn dude, I would've thought they would be power guzzlers at that speed

I hope ATI can deliver


Why not? They are already at 800MHz currently.
February 15, 2008 9:57:46 PM

Looking pretty impressive.
Looks like it could give Nvidia a bit of a fright...
February 15, 2008 9:58:33 PM

marvelous211 said:
Why not? They are already at 800MHz currently.


True, but a 250MHz jump is pretty big for graphics cards.

I would imagine that ATI would have tried to get the highest clocks possible on their HD3xx0 series to be more competitive with Nvidia now; but then again, maybe TSMC has refined their 55nm process to get higher yields of higher-clocked GPUs.
February 15, 2008 9:59:00 PM

marvelous211 said:

480 shaders sounds like a lot, but current AMD cards with 320 shaders can't compete with GeForce 8 with 128 shaders. IMHO 320 AMD shaders are only equivalent to Nvidia's 96 shaders.


If you are referring to the performance level, I tend to agree. The 320 shaders of the 2900/38xx series are, in fact, only 64 complex shader units. The others can only compute simple instructions and are worthless without proper drivers.
I'd really like to see some information regarding the efficiency of those simple shaders, especially if their transistor count is compared to that of their complex brethren.
February 15, 2008 10:52:45 PM

The core clock is higher than what I would have expected, but the memory is not that high for GDDR5; hell, even GDDR4 and GDDR3 go that high.
February 16, 2008 12:19:32 AM

bmadd said:
The core clock is higher than what I would have expected, but the memory is not that high for GDDR5; hell, even GDDR4 and GDDR3 go that high.


I'm not 100% sure about this, but I think they merely listed the base clock speed, in which case you'd have to multiply it by two to get the effective clock speed. So 1800MHz would actually be 3600MHz, which if I recall correctly has some significance, like what .6ns (or something) memory is supposed to run at. Can someone confirm this part?
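The arithmetic being reached for here is just the standard DDR relationship, sketched below; the ×2 is double-data-rate signaling, and the ns rating is roughly the base clock period:

```python
# DDR-type memory transfers data on both clock edges, so the effective
# (data) rate is twice the base clock; a chip's ns rating is roughly
# the base clock period.
def effective_mhz(base_mhz, edges_per_cycle=2):
    return base_mhz * edges_per_cycle

def period_ns(base_mhz):
    return 1000.0 / base_mhz

print(effective_mhz(1800))  # 1800MHz base -> 3600MHz effective
print(period_ns(1600))      # 1600MHz base (3200 effective) -> 0.625ns parts
```

(GDDR5 complicates this with a separate write clock running at twice the command clock, so treat the plain ×2 as only the simplest reading of the rumored numbers.)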
February 16, 2008 12:20:51 AM

When are they coming out? Maybe I will wait a few more months and build my computer then. Not wait for the tax return in a couple months... not even use the reenlistment bonus... but after I get all of that, and the lil tax break check later on in May, and after I get travel money, will I THEN build a new system. I wonder if the price will be a lil more than the 3870 now due to the GDDR5. That stuff will be capable of running a 5GHz clock, while GDDR4 is still pretty good. They need to open that bus up to 512-bit to really harness it, but who wants to pay 400 bucks for a mid-end card? Not me...
February 16, 2008 12:23:32 AM

Avenger_K said:
I'm not 100% sure about this, but I think they merely listed the base clock speed, in which case you'd have to multiply it by two to get the effective clock speed. So 1800MHz would actually be 3600MHz, which if I recall correctly has some significance, like what .6ns (or something) memory is supposed to run at. Can someone confirm this part?


That would make a lot more sense, though I'm not sure if they labeled the base clock or the effective clock.
February 16, 2008 12:32:34 AM

I took it as the effective clock. Sorry if I was wrong to assume that.
February 16, 2008 1:18:12 AM

bmadd said:
I took it as the effective clock. Sorry if I was wrong to assume that.


That's a problem with the information as stated. We can't tell for sure what it is. Further, specs could easily change between the current guess of what the cards will be and what will actually appear on store shelves. As it is, the card series looks interesting. At the same time, if I were interested in buying a card now, I would go ahead and buy it rather than put the purchase off on the basis of what "might" happen. The final release might be better than it now looks, or just as easily, it might be worse.
February 16, 2008 1:51:58 AM

Radeon HD Core(s) Idle Load Price
4870X2 2xRV770 <25W 250W <$499
4870 RV770 <10W 150W <$299
4850 RV770 <10W 120W <$249
4670 RV740 <10W 100W <$149
4650 RV740 <10W 80W <$129
4470 RV710 <10W 50W <$79
4450 RV710 <10W 30W <$59



$500 for the top tier!!! MY GOD!!!
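Putting this price table together with the GFLOPS column from the first post gives a rough perf-per-dollar picture. Entirely speculative: both tables are rumors and the prices are ceilings:

```python
# Rumored GFLOPS (first post) vs rumored price ceilings (this post).
cards = {
    # card: (GFLOPS, price ceiling in USD)
    "4870X2": (2016, 499),
    "4870":   (1008, 299),
    "4850":   (816, 249),
    "4670":   (480, 149),
    "4650":   (384, 129),
}

for card, (gflops, price) in sorted(cards.items(), key=lambda kv: -kv[1][1]):
    print(f"{card}: {gflops / price:.1f} GFLOPS per dollar")
```

By this crude measure the X2 would actually be the best value on the list; the midrange parts cluster around 3 GFLOPS per dollar.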
February 16, 2008 1:58:04 AM

Yeah, $500 is a lot, but it is a dual card and if it competes with the best Nvidia's got out, maybe it isn't so far off the mark. Some people were paying $800 or more for the 8800 Ultras, after all.
February 16, 2008 1:58:13 AM

Seems Nvidia is gonna have to put out something considerably better to charge their usual amount for their high end.
February 16, 2008 2:59:33 AM

this is supposed to be for '09
February 16, 2008 6:43:27 PM

marvelous211 said:
Just buy what you can afford. You don't have to wait on the parts every time Nvidia, AMD or Intel release something new.

These 4000 cards only compare to the GeForce 8 series. Sure, they have higher clocks and memory, but it's still a 16 ROP card with a 256-bit memory bus. It's not going to make some drastic improvement over what we have now.


16 ROPs is more than enough, seeing as all they're doing is sending pixels to the screen and not implementing AA like in Nvidia cards. A 256-bit bus shouldn't be a massive issue either with monster memory clocks.
February 16, 2008 6:46:06 PM

marvelous211 said:
30-50% difference? Not much really. From the 7 series to GeForce 8 was a drastic change. As much as 100+% difference.

These new cards cost more anyway. The 8800GT is currently $200 after rebates.

You do know the GeForce 8800GT has 16 ROPs and 56 TMUs, right? 480 shaders sounds like a lot, but current AMD cards with 320 shaders can't compete with GeForce 8 with 128 shaders. IMHO 320 AMD shaders are only equivalent to Nvidia's 96 shaders.

A good price is always welcome on the low end, however. About time 256-bit memory is mainstream.


The kind of leap in performance from the 7 series to the 8 series shouldn't be expected between every new generation.
February 16, 2008 6:46:51 PM

spoonboy said:
16 ROPs is more than enough, seeing as all they're doing is sending pixels to the screen and not implementing AA like in Nvidia cards. A 256-bit bus shouldn't be a massive issue either with monster memory clocks.


Nothing is ever enough. The more the merrier. ;) 
February 16, 2008 7:55:50 PM

Hopefully this will bring back the competitiveness that we saw in the 7 series / X1K series cards.
February 16, 2008 8:51:23 PM

What I think is exciting about the R700 is that early reports are indicating that it may be 50%+ faster than the RV670 chips.

Now think about the 4870x2. 50% faster per core x 2 cores = 100% faster
February 16, 2008 9:16:50 PM

lambofgode3x said:
Let's hope it's not another 2900XT situation. Looks really promising though.


And I just bought the MSI factory overclocked 3870x2!

I seriously do not think it will be another 2900XT. The HD 3870 put ATI back on track, and CrossfireX promises good scaling, even between generations, as long as the cards are matched in clockspeed. A 4870X2 with 64 TMUs would work great alongside a 3870X2 with 32.

Of course, I'll have to upgrade the PSU later, and get a Crossfire motherboard, but next holiday's DX10 games just might be playable with 4 GPU's. Like the power at idle and load too.

marvelous211 said:
About freakin' time they raised the TMU count.

Looks like they didn't raise the ROP count. More shaders, however. :) 

4850 looks like the sweet spot.

This still won't be able to play Crysis at max detail however. :( 


Not that I'll be playing Crysis, but a similarly taxing CRPG will arrive; maybe Fallout 3.

I bet two of the 4870x2's will play Crysis, or a 4870x2 and a 3870x2 in CrossfireX.

Maybe even 4 4850's on a 790 board.


rwayne said:
What I think is exciting about the R700 is that early reports are indicating that it may be 50%+ faster than the RV670 chips.

Now think about the 4870x2. 50% faster per core x 2 cores = 100% faster


Yes, and I feel I got taken for a ride because I'd thought something like the 4870 was a year away. June for cryin' out loud. If I get a 4870 for $299, it's 3 GPU's in CrossfireX that are evenly matched. If I get a 4870x2 it's even better.

But if I'd known, I would have waited for a 4870x2.
February 16, 2008 9:35:32 PM

Well, don't feel that you have been taken for a ride. You have the fastest single video card on the market right now. Considering product cycles for video cards rotate every 6 months, your card is still very young.

Just because the HD4870 comes out in June does not mean that the HD4870X2 will be available at the same time. I am guessing it will release 30-45 days after, just like the 3870X2 did after the release of the HD3870.

Yeah, you did shell out some money for it, but you have the very best. Right now you are pretty much the envy of everyone reading this forum, including myself. That is an awesome card you got.
February 16, 2008 10:37:30 PM

Pff I'll believe it when I see it - AMD / ATI has been very disappointing recently, in terms of delivery and performance.

Still, would be nice to see them push nVidia at the high end.
February 17, 2008 3:45:01 PM

rwayne said:
Well, don't feel that you have been taken for a ride. You have the fastest single video card on the market right now. Considering product cycles for video cards rotate every 6 months, your card is still very young.

Just because the HD4870 comes out in June does not mean that the HD4870X2 will be available at the same time. I am guessing it will release 30-45 days after, just like the 3870X2 did after the release of the HD3870.

Yeah, you did shell out some money for it, but you have the very best. Right now you are pretty much the envy of everyone reading this forum, including myself. That is an awesome card you got.


The funny thing is I mistakenly believed I only needed the 6 pin PCIe, so it's been sitting by my desk for a week. I ordered it Feb. 1st and it arrived Feb. 6th. On Feb. 15, I ordered an Antec Neo 650 with the 6+2 PCIe for 8 pin power and when it arrives by Wednesday, I'll finally be able to install the card.

I also ordered a nice Antec Nine Hundred case because when I tried the card out in my old case, I had to move one hard drive and it barely fit anyways. Here's the case:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

So, I'll be the envy of everyone this week. Right now, people can laugh at me because that 3870x2's just sitting there. Except for the original attempt to get it to run with only one 6 pin, I only took it out to show it to a friend who didn't believe it was a 2 1/2 lb. card!

The other thing to laugh at me about? I'll be playing at 1024 x 768 until I can get a 20" LCD on March 15th! If I'd had that 4870 at $299 two weeks ago, then the difference would have almost gotten me the monitor.

It's still worth it, and I'll go Crossfire by the holiday season with a new board and CPU.


February 17, 2008 4:05:20 PM

aznstriker92 said:
How would it not make drastic changes? Can't you see a kick-ass midrange here? The 4850 with 480 shaders and 32 TMUs, and the 4670 with a 256-bit bus, 24 TMUs, 240 shaders. Huge bang for buck if ATI sticks with the pricing. 4670 @ around $100. :hello:  :pt1cable: 


I promise I will bow down to ATI and dedicate my "Nvidia soul" to meet ATI's hunger :bounce:  4850 FTW!! God, now I'm turning into an ATI fanboy :pt1cable:  That's how they actually make fanboys in this market.
February 17, 2008 4:36:14 PM

ethel said:
Pff I'll believe it when I see it - AMD / ATI has been very disappointing recently, in terms of delivery and performance.

Still, would be nice to see them push nVidia at the high end.

You have seen the 3870X2, haven't you? :ange: 
Far more reasonably priced than the equivalent Nvidia offerings (GTX & Ultra), on a smaller core process, and a neater set-up than the upcoming 8800/9800GX2 (or whatever they decide to call it...).
I think ATi are on fire at the moment... :) 
February 17, 2008 8:55:00 PM

spoonboy said:
16 ROPs is more than enough, seeing as all they're doing is sending pixels to the screen and not implementing AA like in Nvidia cards. A 256-bit bus shouldn't be a massive issue either with monster memory clocks.


They do their AA, for starters, mainly with the shaders and don't rely on the ROPs too much.

Their shaders work like there are 64 proper ones in their current cards that compare to the 128 of the Nvidia cards; however, each of them can do 5 floating point operations. This 320 to 480 is basically a 50% increase, bringing it up to 96 actual shaders.

On top of that, the huge memory speeds will greatly increase bandwidth; no more than 256-bit is really needed.


Should see huge speed boosts, over 50%, because of the 50% increase in shader horsepower and the extra bandwidth.

It's also extremely doubtful that Nvidia's next gen can get anywhere near even 900MHz, as they will still be 65nm and have some serious heat issues.

ATI may well steal this one.

PS. Is it "floating point operations"? I honestly can't remember the right name of what it's called; I've gone blank.
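To pin down the shader arithmetic in the posts above: the working assumption (nothing here is confirmed) is that AMD's marketed shader count is really 5-wide VLIW lanes, so dividing by 5 gives the number of full units being compared to Nvidia's scalar shaders:

```python
# Assumption: each marketed AMD "shader" is one lane of a 5-wide VLIW
# unit, so the number of full units is the lane count divided by 5.
VLIW_WIDTH = 5

def full_units(marketed_shaders):
    return marketed_shaders // VLIW_WIDTH

print(full_units(320))  # 2900/38xx series -> 64 full units
print(full_units(480))  # rumored RV770 -> 96 full units
```

That 64-vs-96 jump is where the "50% increase" estimates in this thread come from.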
February 17, 2008 9:21:56 PM

Do you guys think this new generation of cards will utilize pci-e 2.0? I didn't do much research before buying a new comp and now I'm stuck with pci-e x16.
February 17, 2008 11:27:14 PM

I wonder what kind of 3DMark score it's gonna give. >20000?
February 17, 2008 11:44:33 PM

The 3870X2 already gives 20k+ if you remove the bottleneck.

Of course it'll use PCIe 2.0; the cards now even use PCIe 2.0.
February 17, 2008 11:51:26 PM

Hatman said:
The 3870X2 already gives 20k+ if you remove the bottleneck.

Of course it'll use PCIe 2.0; the cards now even use PCIe 2.0.

Crossfire this card and now you get new world records, shattering the tri-SLI.
February 18, 2008 1:47:59 AM

Lol, have you seen the record? It's 30k with 8800 Ultra SLI and the Skulltrail platform OCed to 5.2GHz.
February 18, 2008 2:02:15 AM

4 months away, and I wanted to get a 9800GX2. Should I wait for these now?
February 18, 2008 3:19:22 AM

Hatman said:
The 3870X2 already gives 20k+ if you remove the bottleneck.

Of course it'll use PCIe 2.0; the cards now even use PCIe 2.0.



Well... yeah, but PCIe 2.0 doesn't do much for current cards. The question is if they will utilize PCIe 2.0 bandwidth to the fullest. My gut feeling is that it won't change from what we see now.
February 18, 2008 3:24:13 AM

starcraftfanatic said:
Lol, have you seen the record? It's 30k with 8800 Ultra SLI and the Skulltrail platform OCed to 5.2GHz.



Don't forget that 3DMark is very CPU dependent, so the Skulltrail made those numbers happen, not so much the SLI.
February 18, 2008 5:44:23 AM

If they say that this RV770 is 50% faster than the RV670, then the G100 should be faster than this, but multi-core AMD seems to be the fastest dog on the hill. The G100 will most probably be the fastest single core though, but it's harder to make a multicore version of it, because it will be larger than RV770.

Anyway, it keeps us amazed what those two firms are doing. Maybe we will see next year a card that can run Crysis with all bells and trinkets... well, maybe not... ;-) but something close.

ATI is now in the same situation that Intel is in on the CPU front: it has the node advantage, which makes it easier to make cheaper chips. That can be a big advantage. The G100 can be a real monster in speed, but if it's too expensive... well, we will see.

All in all, it seems that ATI has left the bleeding-edge products to Nvidia and concentrates on the low, middle and upper-middle products. +50% more speed is not enough to beat the next Nvidia top dog in single-core format, but CrossFire seems to be good enough.

February 18, 2008 7:14:18 AM

Looks like Hannibal Hector hasn't got his finger in the ATI pie (thank god). AMD need to sit back and take a look at how it's done, maybe pick up a few pointers from ATI. These guys seem to be switched on; good to see some top-quality competition out there.