Radeon 5870

a c 86 U Graphics card
January 18, 2009 4:35:58 PM

NordicHardware reports similar info as well:

http://www.nordichardware.com/news,8615.html
Quote:
There isn't a salt mine big enough to take with this, but we just wanted to notify you that the rumor has resurfaced. Beware.


I like the ending paragraph. lol salt mine
a b U Graphics card
January 18, 2009 4:37:43 PM

It's not true - it's a concept that's already surfaced about 3 times.
a c 86 U Graphics card
January 18, 2009 4:41:44 PM

Can't edit my post above, so adding here:
Those mockup pictures showing the slave chips are probably plain wrong; it was just a guess anyway. I just wish they would stop showing them over and over again. :/

The dual-chip MCM approach sounds a lot more plausible at this point.
January 18, 2009 6:16:13 PM

Don't know how anyone can say it's not true. Just look at the 4870: no one thought it would have 800 SPs, and look how that turned out. If anyone can pull this off, it would be ATI.
a b U Graphics card
January 18, 2009 6:32:11 PM

I've seen the original post/mockup on B3D. It's a possibility, but who knows; I wouldn't invest any salt in this one, though.
January 18, 2009 6:32:18 PM

I estimate 2000 SPs and a high-end X3 or X4. No source, just my own rumor (I heard it from my buddy).
January 18, 2009 6:35:35 PM

The pics are old; it was just someone with Photoshop making a guess at what ATI would do.
a c 86 U Graphics card
January 18, 2009 6:53:21 PM

Yeah, the rumor with the slave chips and such is fanboy speculation, but the post has other specs that rather contradict the second part of the same post, and NordicHardware sort of agrees with that part: number of shaders and MCM design. But yeah, lots of salt needed. :p
January 18, 2009 8:41:16 PM

I'm going to keep an open mind. If it's true, go ATI; if not, it's no biggie.
a b U Graphics card
January 18, 2009 8:52:09 PM

Although, using clamshell mode at 16 bits, with either sideport and/or Hydra? It's speculation, but if the RAM is there for usage, and if sideport and/or Hydra can converge multi-core output... who knows.
January 18, 2009 9:55:52 PM

Remember how the 4870 X2 was supposed to work as one GPU... yet it didn't.

Honestly, let's stop trying to fill our holes with multi-GPU. Last I remember, multi-GPU was something to extend the life of your product, not to make it top of the line. :)

I'll be impressed if we see the 5870 or the 300 GTX do better than one 295 GTX...

I remember seeing something like this in September... honestly, I don't believe one bit of it, nor do I believe that the 300 GTX will be three times stronger than the 280 GTX, lol.

:D But nice find. Puts faith back in AMD. :) We'll see what card AMD comes out with next. :D
January 19, 2009 1:47:17 AM

I remember one thing: AMD is going 40nm for its next-gen cards.
a c 106 U Graphics card
January 19, 2009 1:48:02 AM

Actually, nVidia should have been in a better position to build a modular architecture like that. They bought 3DFX some time ago, which was already laying the groundwork for that kind of modular card building with its Sage and Rampage chips. Then again, the last time nVidia incorporated some of the 3DFX technology they ended up with the 5800. :D
a c 175 U Graphics card
January 19, 2009 3:04:46 AM

The photos I don't believe at all. I also agree that the specs aren't really any better than the 4870 series. 960 to 1000 shaders? The 4870 already has 800; why such a low increase? Bumping up the GDDR5 frequencies is nice, but without a large bump in shaders/shader frequency most of it will be wasted. Core clocks aren't really any higher either. If the 5870 is simply a few more shaders running a hair faster, I won't be impressed.
January 19, 2009 3:24:30 AM

That will give it what, like a 10-20% increase?

But you do raise a good point, 474545b (man, your name is a trek to type :p ).

:D
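
A quick back-of-the-envelope on that guess (my own illustrative numbers, assuming the rumored shader count and roughly unchanged clocks, none of it confirmed):

# rough shader-throughput comparison using rumored, unconfirmed numbers
hd4870_shaders = 800
hd4870_core_mhz = 750            # HD 4870 reference core clock

rumored_shaders = 1000           # upper end of the 960-1000 rumor
rumored_core_mhz = 750           # assume clocks stay roughly flat, per the post above

# peak shader throughput scales with shader count * clock
old_throughput = hd4870_shaders * hd4870_core_mhz
new_throughput = rumored_shaders * rumored_core_mhz
print(f"raw throughput increase: {new_throughput / old_throughput - 1:.0%}")
# -> 25% on paper; real-world gains usually land lower, so 10-20% is a fair guess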
January 19, 2009 8:57:02 AM

I scream BS. I'll bet my hard drive this is a cheap stunt by ATI to throw the green team off.
a b U Graphics card
January 19, 2009 10:45:07 AM

OK, here's a rumor list:
The 740 won't be the only card coming soon at 40nm.
When I said GDDR5 can run in clamshell mode: it has typically run at 32 bits, but at 16 bits you can put two devices on it, and if the speed is there, which it is, then you can do one of two things: 1. use only half the memory in an X2 solution (most likely), or 2. use two cores streamed together with a Hydra-type controller, so it'll be seen as one core by the driver.

That's a few of the rumors about. There are a lot more as well, but it's currently too far away to cement the most likely possibilities.

A lot of people assumed two cores acting as one with GDDR5, but it may be that to have a 1GB X2 card, all you'll need is 1GB of GDDR5 run in clamshell mode. All those costs of GDDR5 may actually be savings down the road (rough sketch of the memory math below).

Looking at the Hydra/sideport scenario, it's a possibility, so don't scratch it off the rumor list just yet.
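
A minimal sketch of that memory-cost angle (my own illustrative numbers; it assumes the rumored shared/clamshell pool would let every core see one copy of the data instead of each needing its own, which is pure speculation at this point):

# illustrative memory bill-of-materials for a multi-GPU card (speculative scenario, not a confirmed design)
def gddr5_needed_mb(gpus, usable_per_gpu_mb, shared_pool):
    """How much GDDR5 has to go on the board for each GPU to 'see' usable_per_gpu_mb."""
    if shared_pool:
        # rumored clamshell/shared-pool case: one pool visible to every core
        return usable_per_gpu_mb
    # conventional CrossFire-style case: every GPU carries its own full copy of the data
    return gpus * usable_per_gpu_mb

print(gddr5_needed_mb(2, 1024, shared_pool=False))  # 2048 MB on a classic X2
print(gddr5_needed_mb(2, 1024, shared_pool=True))   # 1024 MB if the pool really could be shared
print(gddr5_needed_mb(4, 1024, shared_pool=False))  # 4096 MB for a hypothetical X4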
a c 130 U Graphics card
January 19, 2009 11:04:43 AM

Ever since before the RV770 came out there have been rumours that ATI were gunning for a modular design. It makes perfect sense from a business point of view, and on that alone it shouldn't be disregarded.
I honestly do think it's coming; I don't know when, and I wouldn't think it's this time around, as I would have expected better from the 4-series X2 if they were anywhere near getting the cores to play nice together.
Personally, and to be clear this is all personal speculation, I would have thought that an X2 that runs as a single GPU would come first. Or are we saying that these cards would need to run in tri or quad mode as far as the drivers are concerned?
Your thoughts, please.
Mactronix
a b U Graphics card
January 19, 2009 11:16:27 AM

According to rumor, using Hydra and/or sideport gives the data stream the appearance of one core, whether it's two or four, before sending it to the driver. Rumor has it that it could be on die or on the PCB, working with Hydra and using sideport for bandwidth. It's not totally understood at this point, but that's close to what I've seen. Look up Hydra, then go by ATI's comments on the sideport, then clamshell mode with GDDR5, keeping the bus under control, with the RAM's speeds and its multi-device usage.

Remember, these new cards are all 40nm, so certain things need to be done for sizing due to pad sizes etc. Plus, it also makes sense in that you go smaller on overall die size without having to sacrifice speed, with no hot spots, etc. Also, think about four cores and then wanting 1GB usable for them: today we'd need 4GB to meet that, and it'd be a costly scenario as well.
a c 130 U Graphics card
January 19, 2009 11:20:21 AM

Thanks for the places to link to, JDJ.
Yes, the first thing that struck me was: oh good, we can actually have use of the RAM we have and not halve it.
I will look at the articles you suggested; it seems you're saying the Hydra chip will be on the GPU and not linking from the motherboard, which was the way I understood it to work.

Mactronix
a b U Graphics card
January 19, 2009 11:26:01 AM

If you think about it, Hydra's original chip had to be standalone. And then only Intel picked them up. So who'd authorize an onboard chip solution?
a c 130 U Graphics card
January 19, 2009 11:36:55 AM

Good point, so that kinda rules Hydra out then, doesn't it? What with Intel coming into the marketplace, why offer help to others? Hydra on an Intel motherboard, or on the PCB of Larrabee?
Or is that against the competition laws?
Mactronix
a b U Graphics card
January 19, 2009 10:53:58 PM

Intel's just an investor, not an owner, nor, to my knowledge, the leading owner/controller, so Hydra for everyone.
There are also other rumors in which ATI's sideport plus a few other things eliminate Hydra itself and do what Hydra does. Time will tell here. MCM and other things are taking shape. It's all part of Fusion, the very reason AMD bought ATI.
January 20, 2009 1:44:09 AM

SpinachEater said:
the master plan

Random and irrelevant, but Pinky and the Brain ^
January 20, 2009 4:14:34 AM

JAYDEEJOHN said:
According to rumor, using Hydra and/or sideport gives the data stream the appearance of one core, whether it's two or four, before sending it to the driver. Rumor has it that it could be on die or on the PCB, working with Hydra and using sideport for bandwidth. It's not totally understood at this point, but that's close to what I've seen. Look up Hydra, then go by ATI's comments on the sideport, then clamshell mode with GDDR5, keeping the bus under control, with the RAM's speeds and its multi-device usage.

Remember, these new cards are all 40nm, so certain things need to be done for sizing due to pad sizes etc. Plus, it also makes sense in that you go smaller on overall die size without having to sacrifice speed, with no hot spots, etc. Also, think about four cores and then wanting 1GB usable for them: today we'd need 4GB to meet that, and it'd be a costly scenario as well.

+1; ATI is going 40nm for sure, to be able to capitalize on price/performance, especially for low-end graphics cards. This sideport/Hydra could very well be ATI's ace up their sleeve for high-end cards.
a c 106 U Graphics card
January 20, 2009 5:00:58 AM

I think that the main problem with this multi-chip design is that it increases PCB complexity which thus increases cost. That's probably why nVidia hasn't done it yet even though they acquired much of the IP to do so when they bought 3DFX. A multi-die solution, like what you have in the 360, might make more sense. I guess we'll just wait and see.
January 20, 2009 6:15:36 AM

megamanx00 said:
I think that the main problem with this multi-chip design is that it increases PCB complexity which thus increases cost. That's probably why nVidia hasn't done it yet even though they acquired much of the IP to do so when they bought 3DFX. A multi-die solution, like what you have in the 360, might make more sense. I guess we'll just wait and see.


nVidia can't put two chips on one PCB, so what makes you think they have the know-how to do this? That's the reason nVidia hasn't done it. If they could, they would, but they can't.
January 20, 2009 10:25:20 AM

Rangers, man, what did I tell you about talking without info? lol

How do you know that? It's not that they don't know (no information, so really we don't know if they can or can't).

Putting two PCBs together is much easier than doing a dual-GPU PCB. It costs less for the company, though they don't charge like it costs less. :p

Come on, rangers, at least act like you're in the middle of both companies; you talk from the heart too much.

I'll spoil it for you and tell you that ATI doesn't exist anymore. :) It's all AMD now. :)
January 20, 2009 12:46:28 PM

Two chips on one PCB is cheaper than the dual PCB that nVidia has; if nVidia could do it, they would.
January 20, 2009 12:51:02 PM

Researching and re-engineering them is not easier. Any innovation costs money to make. Once it's made it might be cheaper, but some companies might choose not to risk innovation with the chance of it failing.

There have been benchmarks that show the 4870 X2 being slower than two 4870 1GB cards in CrossFireX, for example.

So really it's a matter of safety and stability.

Think about it: an all-in-one phone requiring shrinking the materials, versus taping a phone and an MP3 player together?

Cost of that one phone and one MP3 player... making the all-in-one costs thousands to research and make sure it works, etc.

:) Logic.

Now let's move on: why stop if it works?

3870 X2 vs 9800 GX2: ATI lost.

4870 X2 vs 295 GTX: ATI lost.

Now tell me, why would they change if it works?

Remember, this card is cheaper than the ATI 4870 X2 was when it came out.

January 20, 2009 12:58:39 PM

L1qu1d said:
Remember, this card is cheaper than the ATI 4870 X2 was when it came out.

Ohhh, bad reference point. Market price is dictated by supply and demand, not by what a previous competing product sold for.
There's also the minor fact that the 4870 X2 has been out 5-6 months already.

Of course it's the best comparison available currently, as both cards are the top-of-the-range models and the pinnacle of what each company can achieve. But, as you reminded rangers, it's always a good idea to have the facts laid out. ;)
January 20, 2009 1:25:50 PM

Of course that's taken into consideration; if you read my posts, I flame the late arrival of the 295 GTX. It might be too late, but honestly, with the prices here in Toronto, it's the better price/performance buy.

Personally I avoid both; after the 9800 GX2 I just gave up. And I'm honestly surprised that the 4870 X2 drivers aren't what they should be right now.

Kinda makes the 4870 X2 in the secondary a waste of cash.
January 20, 2009 1:58:43 PM

Got to agree with you on the X2s; anyone with any sense should stay away.
January 20, 2009 2:31:53 PM

L1qu1d said:
Researching and re-engineering them is not easier. Any innovation costs money to make. Once it's made it might be cheaper, but some companies might choose not to risk innovation with the chance of it failing.


You hit the nail on the head there: AMD/ATI innovate, nVidia stagnates. If ATI were not there to push nVidia, they would churn out the same old crap year in, year out: the 8800 GTX 106.0000386-and-a-half.
a c 130 U Graphics card
January 20, 2009 2:48:10 PM

^+1 big time, that's why it's taken them so long to get a decent card out.
Mactronix
January 20, 2009 3:10:16 PM

I don't know, man, ATI has never failed me.

I switched permanently to ATI after the nVidia FX 5200 = major FAIL.
January 20, 2009 3:37:20 PM

I switched from ATI after their 2900 XT blunder. :)

They made halves so they could lower the price; although you and I don't agree on them doing that, it doesn't mean it hasn't led to the decline in prices.

I mean, the new chips were 65nm refreshes from 80nm, which is a lot bigger than the 65-to-55nm switch.

Yes, the FX series was a blunder as well.

The 8800 GTX was definitely not crap (I know you didn't say that), considering it still maxes games better than most cards that are stronger.

The 8800 GTS was pretty much as fast, used less energy, and was half the price. I'd say that's a win.

Then there was the 8800 GT that came out before it, which was even cheaper, thinner, and less of a power hog.

The only cards I think were a waste were the 9800 GTX and the +, the 9800 GT, and the 9600 GSO, which are direct refreshes of what we had before (although some were 55nm).

The 9600 GT was the card that started the price war (other than the 8800 GT), as it could be found on release for $150 and was only 10% slower than the 8800 GT in a lot of games.

Remember, ATI also made the 3870 X2 mistake, and QuadFire never did scale the way it was supposed to for that card.

And let's not forget that ATI just recently changed their naming schemes... remember GTO, GTS, XL, XT, Pro, etc., etc.

So everyone makes mistakes. :D But the card to go for now is probably the 4850 X2, the old 260 GTX, or the 4870 512... prices shrunk. (Also the 280 GTX went down a lot, surprisingly, overnight. :p)
a c 130 U Graphics card
January 20, 2009 3:40:10 PM

I don't have anything against ATI; in fact I have never owned an Nvidia card. It's just that sometimes price/performance has to come first.
I wouldn't have either card myself, as I am of the single-card, single-chip = best, most reliable performance mindset.
I don't need any more GPU power than I can get from a single core anyway.

Mactronix
January 20, 2009 3:49:43 PM

I read the reviews of the 2900 XT and thought, I'll skip this gen; the heat on the thing and the 8-pin PSU requirement put me off. But I understand that some were stupid enough to buy it; that's their fault, not ATI's.
January 20, 2009 3:58:07 PM

I think I just called you stupid, sorry Liquid.
a b U Graphics card
January 20, 2009 5:32:57 PM

OK, first of all, if nVidia had what ATI uses, they'd most likely use a single PCB. Since they aren't using GDDR5, it's less beneficial for them to do so, though ATI can and does. GDDR5 doesn't need the elaborate layout on a PCB that GDDR3 needs, so the cost and space savings are incredible.
Now, sticking with GDDR5: not only do the traces not need to be laid out as exactly, the other benefit is bandwidth. Since nVidia isn't using it, it's stuck using a much larger bus just to get the same bandwidth. With all those traces for that bus, plus having to route them so the power/resistance is the same for the GDDR3, there are over twice as many traces that all have to be precisely laid out. So, L1qu1d, nVidia can't do what ATI is doing; it'd be impossibly expensive, or the PCB would be too large. I'd suggest you read up on GDDR5, as well as GDDR3, and see what I'm talking about.
The 4870 X2 seems to be just fine, and those drivers, I'll remind you just one more time, are coming, and soon, and the potential in DX10 will be greatly enhanced. I wouldn't deny this, as we've already seen some of the improvements, where the 4870 and the X2 edge ever nearer the 280 and make the 295 look so-so.
Long before the launch of the 4xxx series everyone knew the pricing, and some were let down; the thinking was, at these prices, how good could these cards be? So, again, read up on a few things, and don't go pointing nVidia to places it either doesn't or can't belong.
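
To put rough numbers on the bus-width point (ballpark figures for that era's cards, illustrative only, not exact board specs):

# rough peak-bandwidth comparison: narrow bus + fast GDDR5 vs wide bus + GDDR3
def bandwidth_gbs(bus_width_bits, effective_gbps_per_pin):
    # peak bandwidth in GB/s = bus width (bits) * data rate per pin (Gbps) / 8
    return bus_width_bits * effective_gbps_per_pin / 8

print(bandwidth_gbs(256, 3.6))   # ~115 GB/s: 4870-style 256-bit bus with ~3.6 Gbps GDDR5
print(bandwidth_gbs(512, 2.2))   # ~141 GB/s: 280-style 512-bit bus with ~2.2 Gbps GDDR3,
                                 # but with roughly twice the traces to route on the PCB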
January 20, 2009 7:03:44 PM

rangers said:
I read the reviews of the 2900 XT and thought, I'll skip this gen; the heat on the thing and the 8-pin PSU requirement put me off. But I understand that some were stupid enough to buy it; that's their fault, not ATI's.


Well, I was a stupid fanboy then. :) I bought it, didn't like it, so I went and exchanged it for an 8800 GTS 320, which ran better with a $200-cheaper price tag.
January 20, 2009 7:13:18 PM

JAYDEEJOHN said:
OK, first of all, if nVidia had what ATI uses, they'd most likely use a single PCB. Since they aren't using GDDR5, it's less beneficial for them to do so, though ATI can and does. GDDR5 doesn't need the elaborate layout on a PCB that GDDR3 needs, so the cost and space savings are incredible.
Now, sticking with GDDR5: not only do the traces not need to be laid out as exactly, the other benefit is bandwidth. Since nVidia isn't using it, it's stuck using a much larger bus just to get the same bandwidth. With all those traces for that bus, plus having to route them so the power/resistance is the same for the GDDR3, there are over twice as many traces that all have to be precisely laid out. So, L1qu1d, nVidia can't do what ATI is doing; it'd be impossibly expensive, or the PCB would be too large. I'd suggest you read up on GDDR5, as well as GDDR3, and see what I'm talking about.
The 4870 X2 seems to be just fine, and those drivers, I'll remind you just one more time, are coming, and soon, and the potential in DX10 will be greatly enhanced. I wouldn't deny this, as we've already seen some of the improvements, where the 4870 and the X2 edge ever nearer the 280 and make the 295 look so-so.
Long before the launch of the 4xxx series everyone knew the pricing, and some were let down; the thinking was, at these prices, how good could these cards be? So, again, read up on a few things, and don't go pointing nVidia to places it either doesn't or can't belong.


Well, that's what I said. :) I said it wouldn't benefit them; I suggest you read up on what I wrote. For them to make a dual-GPU PCB requires innovation they don't have. It would require changing the chip into something that could work.

So to me, if it's not broken and it works better, don't change it.

Is the 4870 X2 going to edge out the 295 GTX with the driver updates? No, that's my answer, because I've always been told "oh, the 8.11s will do it... next the 8.12s... then the hotfix." I'm sorry, but right now I'm not getting my hopes up, and I'll say let's just wait for it.

To me, power is defined by a single GPU.

Two GPUs are a way of extending a product's life, not making it your primary solution.

OMG, I am getting 26 fps in a game... add in a second card... OMG, I'm getting 35 now! This will hold me till I can save up money for a new-generation card or till something appealing comes along. :)

That's what a second card is supposed to be for... or if you're an AA junkie or just a plain enthusiast. :)

Anyway, I think I'm done here; the ATI people outweigh the nVidia or neutral people, so I can't get any arguments out.

I'm never making nVidia look amazing or saying they're bad... I'm just saying both companies approach the same goal from different points of view.

Remember, fanboys: to love ATI is to love AMD. :) They go hand in hand now. :D
a b U Graphics card
January 20, 2009 7:52:26 PM

I did, that's why I wrote what I wrote.
L1qu1d said:
Rangers, man, what did I tell you about talking without info? lol

How do you know that? It's not that they don't know (no information, so really we don't know if they can or can't).

Putting two PCBs together is much easier than doing a dual-GPU PCB. It costs less for the company, though they don't charge like it costs less. :p

Come on, rangers, at least act like you're in the middle of both companies; you talk from the heart too much.

I'll spoil it for you and tell you that ATI doesn't exist anymore. :) It's all AMD now. :)

You're saying it's more expensive to use a single PCB, as well as easier to use two, which is both wrong. The heating/cooling solution (which is the most important part of final design) is very difficult to manage, hence the unheard-of sounds, heheh, made by the 295. Then there are the costs, which are sky-high as well, as it's using two PCBs vs one, with twice as many traces to boot. Plus, you're using a GPU that's twice the size of the competition's, a more expensive bus, and there are two of them; more expensive boards, plus there are two of them. All that vs slightly more expensive RAM. nVidia is taking a killing just trying to sell these things.
January 20, 2009 8:21:22 PM

I'm saying that THE INNOVATION COSTS MORE for the company, for god's sake, not the actual parts.

Yes, it costs more to have two things; did you not see my cell phone example? Even though you're paying more for two parts to put together, it costs less than designing a custom all-in-one at first... you know, a prototype... the parts, labour, and research. Not the actual selling of the product!!!!!

Thank you!!!!!!!!


Oh yes, from what I understand GDDR5 needs a bigger bus width, which the nVidia cards will have over the ATI ones, although it would cost more. I remember reading that GDDR5 is a lot better, and cheaper to make, but I'm not sure about that.

Given that, I think GDDR5 is wasted on a 256-bit bus width.

I'll say this again: we're not defending companies, we're defending facts. :)
And most of the things I'm hearing about the future are just theories (9.X).

Let me restate my main argument in easier words.

Innovation > (cost of) supergluing 2 PCBs

2 PCBs (cost) > 1 dual-GPU PCB

I = sqrt(GOD)

Now then, that's all I wanted to say, nothing about how bad ATI is; I only say that either company is bad to upset fanboys. :D

So when I see someone say "OMG nVidia is amazing" I'll start going for ATI, and when I see someone saying "OMG nVidia sucks," the other way around. :)

It's really fun, and I learn a lot sometimes.

But thank you for the suggestion to read about GDDR5; it was a fun and time-consuming read. :)

Now if you'll excuse me, I think I'm going to celebrate my birthday by buying an ATI card and an nVidia card and smashing them together to make a 5850 GTX CroSSLI. :D

Bye bye!