ATI 5870

September 11, 2009 9:25:23 AM




September 11, 2009 10:02:46 AM

Obviously the table you pulled out is outdated.
We know the memory is running at an effective 5200 MHz (1300 MHz x 4) and the core is most likely 825 MHz.
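If anyone wants to sanity-check those numbers, here is a minimal back-of-the-envelope sketch of the memory arithmetic in Python; the 256-bit bus width is an assumption based on the rumors at the time, not a confirmed spec.

# Rough GDDR5 memory-bandwidth arithmetic (a sketch; the 256-bit bus width
# is an assumed figure, not a confirmed spec).
mem_clock_mhz = 1300                    # rumored base memory clock
effective_mt_s = mem_clock_mhz * 4      # GDDR5 is quad-pumped -> 5200 MT/s
bus_width_bits = 256                    # assumed bus width
bandwidth_gb_s = effective_mt_s * 1e6 * bus_width_bits / 8 / 1e9
print(effective_mt_s, "MT/s effective, ~%.1f GB/s" % bandwidth_gb_s)  # ~166.4 GB/s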

September 11, 2009 11:32:37 AM

What are the power requirements for this card?

I am thinking of upgrading from my old GTX 295.
September 11, 2009 11:54:08 AM

^^ Why? I'm seeing around 295ish performance. Not a bad increase, but not nearly as much as the double performance some people speculated.

Of course, I'll be glad to take your 295 off your hands if you're serious :D 
September 11, 2009 12:03:58 PM

gamerk316 said:
^^ Why? I'm seeing around 295ish performance. Not a bad increase, but not nearly as much as the double performance some people speculated.

The 5870 will be around 20% faster than the GTX 295, and if I sell my GTX 295 the 5870 is not going to cost me too much.
September 11, 2009 12:34:53 PM

Give me your 295; I'll use it for physics on my future 5870X2 CrossFire setup.
September 11, 2009 12:41:57 PM

Obviously I'd be surprised if a 5870X2 would bottleneck on PCIe 1.x. ATI should seriously consider going past 250 GB/s; I mean, it would definitely be a nice move.
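For context on the bottleneck question, here is a minimal sketch (in Python) of the per-direction PCIe slot bandwidth arithmetic; the transfer rates and the 8b/10b encoding overhead are the published spec figures, and nothing here is specific to any particular card.

def pcie_bandwidth_gb_s(gt_per_s, lanes=16, encoding=8 / 10):
    # 8b/10b encoding: only 8 of every 10 bits on the wire are payload.
    return gt_per_s * encoding * lanes / 8  # Gbit/s -> GB/s

print("PCIe 1.x x16: ~%.0f GB/s per direction" % pcie_bandwidth_gb_s(2.5))  # ~4 GB/s
print("PCIe 2.0 x16: ~%.0f GB/s per direction" % pcie_bandwidth_gb_s(5.0))  # ~8 GB/s

Either way, the slot moves a few GB/s while the card's local memory moves over a hundred, so the slot only matters when data has to cross it every frame.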
Anonymous
September 11, 2009 12:44:52 PM

NV can't lose in this battle; they will do anything they can to be first, they just can't be in second place. Intel and Nvidia will always be better than AMD and ATI. I don't prefer either of these, but that is a fact.
September 11, 2009 1:15:38 PM

obsidian86 said:
Give me your 295; I'll use it for physics on my future 5870X2 CrossFire setup.


When is the 5870x2 coming out? The same time as the rest of them on the 22nd?
September 11, 2009 3:06:44 PM

Quote:
Intel and Nvidia will always be better than AMD and ATI. I don't prefer either of these, but that is a fact.


dohoho, AMD and ATI have beaten their competitors several times in the past. Die hard fanboy detected.

Nvidia probably will have a faster GPU since they're going for the monster die approach again, but ATI will probably have the performance crown since it's doubtful that Nvidia could do a dual GPU card, based on current G300 rumors.
September 11, 2009 4:55:15 PM

^^ Why would they want to though? Poor sellers (all things considered), and low margins. Really, I don't think we'll be seeing many more dual GPU cards...
September 11, 2009 5:21:47 PM

gamerk316 said:
^^ Why would they want to though? Poor sellers (all things considered), and low margins. Really, I don't think we'll be seeing many more dual GPU cards...


Most high end cards are poor sellers; they're for the enthusiasts and bragging rights. The initial 8 series, for example, had an absolutely awesome high end but the mid and low end lineup was complete crap, yet people still bought 8600GTs (and are still buying them) since Nvidia was outpacing ATI in the high end.

Dual GPU cards make sense since they offer a model with higher performance without the R&D / production costs of making an entirely new GPU and don't require a crossfire / SLI mobo.
September 11, 2009 5:25:59 PM

Sure, the dual GPU card is a poor seller, or at least it doesn't sell as many units as the 5870 will, but there is still a market for it: the enthusiast segment. This segment will purchase regardless of the insane price (to some degree, of course), which means they can afford to sell fewer units and still cover the cost of R&D. Besides, how much would the R&D really cost anyway? They have already done the 3870X2 and 4870X2, so they can easily do a 5870X2 at a very low cost.
Anonymous
September 11, 2009 6:15:54 PM

Whoever has the best card has the market, because many people ask which card is better... that is why the battle for the enthusiast segment is important. NV just can't be in second place, remember that... and the high end cards dictate the prices of all segments...
September 11, 2009 10:16:26 PM



Add to that...


GF6, GF7.... which were both slower than their later counterparts but about even at launch.

nVidia wasn't at the top of the pile until the GF8; before that it was the GF4.

Except..... when used in SLI, which is the reason for the dual GPU cards; as mentioned, they exist primarily for the halo effect, which is very effective, hence all the effort spent on SLI and Xfire, etc.

If nV can't get the G300 on a single slot (which seems a safe bet for the launch die until they shrink the process), then you will likely once again see ATi having something in the lead from top to bottom, like they did once the 4870X2 was launched.

The most important thing is how long it takes nV to get their mid-range DX11 out, and from the looks of things, that may be late Q1 or early Q2 2010.
September 11, 2009 10:28:12 PM

Nvidia WILL NOT allow ATi to have the video card crown. They always find a way to produce the fastest card.
The problem is that their mid-range cards always fail in the price-to-performance ratio.
September 11, 2009 10:31:42 PM

soundefx said:
They always find a way to produce the fastest card.



Read the three posts above yours.
September 11, 2009 10:52:23 PM

I'm sure the Nvidia ninjas have been dispatched to ATi's HQ, victory is imminent. ;) 
September 11, 2009 11:11:52 PM

turboflame said:
Read the three posts above yours.


I did. My comment wasn't meant to be read as if it applied forever, just as of right now. It might have been an ego thing, but once Nvidia got the crown back with the 8800 Ultra and GTX, it seems as if their goal has been to stay there, and they forgot about the mainstream price-to-performance ratio.
September 11, 2009 11:18:50 PM

soundefx said:
I did. My comment wasn't meant to be read as if it applied forever, just as of right now. It might have been an ego thing, but once Nvidia got the crown back with the 8800 Ultra and GTX, it seems as if their goal has been to stay there, and they forgot about the mainstream price-to-performance ratio.


Except that they lost it again with the HD4870X2 and were out of the top spot for 6 months, so it's not like they haven't lost it in recent history too.

This time it's unlikely a dual-GPU 295-esque G300 is anywhere on the horizon to help against the HD5870X2, and I'm doubtful a single G300 will beat that.
September 12, 2009 5:34:48 AM

Don't get me wrong, this time I think ATi made Nvidia actually develop new technology to be able to beat them this gen., hence the slow development.

As TGGA said, it took them 6 months to beat the 4870x2, so there is no telling IF or how long it would take them to make a 5870x2 killer.

My only point was that while they are trying to produce the video card king, they are losing the mainstream battle.
September 12, 2009 6:08:31 AM

They will try to kill the HD5870, just like ATi tried to make a G80 killer with the HD3870X2 and failed (in most people's opinion [including mine]). But that's not the same as not allowing them to be king; they will surely try to keep them from remaining king, but from what we've seen over the past few days, ATi is about to become king, and if there is no G300 GX2 then they will likely stay there for a while.

IMO if they don't beat it out of the box with the G300 (which is still very possible), then they likely won't bring anything multi-gpu to market fast enough to beat the HD5870x2 before ATi brings their own replacement/refresh to market.

And you're right about them losing the mainstream battle, which I mentioned in my reply to Tubo; think about the fact that there is STILL no mid-range offspring of the G200 series, so it's hard to think that they are going to have any G300 mid-range parts until a while afterwards, leaving ATi alone in that market for anyone not wanting a GeForce FX-like future for their card.
September 12, 2009 6:46:38 AM

gamerk316 said:
^^ Why would they want to though? Poor sellers (all things considered), and low margins. Really, I don't think we'll be seeing many more dual GPU cards...

I'm thinking that the G300 may be the last huge monolithic monster made, and duals and such are the way of the future.
It all depends on the tech used, too.
September 12, 2009 6:53:16 AM

I think you also need to think about the way they go about it, and who you include in that list (Larrabee still seems to be in a similar direction for Intel and may end up being the last to get the hint and produce a few TERRAlithic monsters :o  ).

I personally see the multi-die + single package approach as still being the best design strategy; the problem, of course, as we've known for a while, is getting it to work as well as it does for CPUs.
Anonymous
September 12, 2009 9:18:59 AM

NV will always have big developers on their side; the "The Way It's Meant To Be Played" program will always provide the best gaming experience for NV cards.
September 12, 2009 9:33:12 AM

Like they did in the FX generation, eh ?

There is little to guarantee that nV will have the same amount of influence this time, especially when, like with the FX series, they are second to the table with the spec.

For SM4.0 and SM3.0, nV brought hardware out first, so it somewhat dictated development; with SM2.0, ATi brought the R9700 to market first and it dictated a lot of early DX9 game development. So there's little history to suggest nV will continue that influence when they have no product in the marketplace.

Also, nV's influence is waning as Intel prepares to inject itself into more dev teams as well (either by paying devs or buying them outright, like Project Offset). And although AMD is still the red-headed step-child, even the recent events show they aren't going to be holding back like in the past, even if they still might not match the other players in the field.
September 12, 2009 9:47:37 AM

JAYDEEJOHN said:
I'm thinking that the G300 may be the last huge monolithic monster made, and duals and such are the way of the future.
It all depends on the tech used, too.

Why should there be a distinction? A GPU isn't a discrete processor like a CPU; it's massively multicored anyway. I think that is where Nvidia's break-dancing president thinks he is going to jump ahead of the rest of the world. Instead of the concept of discrete GPUs, you just have a single logical bank of cores to hypermultiprocess the crap out of any job thrown at them, not just graphics. CUDA it work?
September 12, 2009 10:41:05 AM

JAYDEEJOHN said:
I'm thinking that the G300 may be the last huge monolithic monster made, and duals and such are the way of the future.
It all depends on the tech used, too.


Agreed on the first sentence.

Disagree on the 2nd (unless you're talking about inter-chip communications).


As dndhatcher says, it's massive amounts of homogeneous cores. Get the 2/3/4 discrete GPUs (chips) talking properly, and it should be seamless.


When that happens, the large monolithic core is dead. Why make 1 large chip and disable parts of it for the mid-range market when you can make 1 small chip for the low range, add another for mid, and add another for top?


Far cheaper in R&D and manufacturing. The whole product line can also be brought to market far quicker.
September 12, 2009 10:58:43 AM

dndhatcher said:
Why should there be a distinction? A GPU isn't a discrete processor like a CPU; it's massively multicored anyway. I think that is where Nvidia's break-dancing president thinks he is going to jump ahead of the rest of the world. Instead of the concept of discrete GPUs, you just have a single logical bank of cores to hypermultiprocess the crap out of any job thrown at them, not just graphics. CUDA it work?


Dude, did you miss the last generation of chips?

BIG = BAD

Lower raw yield per wafer, a higher failure rate, and higher costs to develop a larger chip result in a much higher cost per chip. It also leaves you with less attractive SKU options, more often using chips in low-priced cards rather than doubling them up for a healthy mark-up.
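To put rough numbers on the yield point, here is a minimal sketch using the standard dies-per-wafer estimate and a simple Poisson defect model; the die sizes and the defect density below are purely illustrative assumptions, not actual TSMC figures.

import math

def dies_per_wafer(die_area_mm2, wafer_diameter_mm=300):
    # First-order estimate: wafer area over die area, minus an edge-loss term.
    r = wafer_diameter_mm / 2
    return int(math.pi * r ** 2 / die_area_mm2
               - math.pi * wafer_diameter_mm / math.sqrt(2 * die_area_mm2))

def die_yield(die_area_mm2, defects_per_mm2=0.002):
    # Poisson defect model: the bigger the die, the lower the yield.
    return math.exp(-defects_per_mm2 * die_area_mm2)

for area in (330, 530):  # illustrative "small" vs "monster" die sizes, in mm^2
    candidates = dies_per_wafer(area)
    good = candidates * die_yield(area)
    print("%d mm^2: ~%d candidates/wafer, ~%d good dies" % (area, candidates, good))

The bigger die loses twice, fewer candidates per wafer and a lower fraction of them working, which is what drives the cost-per-chip gap.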

Also without an X86 core there's only so much they can do that requires that kind of power; and CUDA isn't enough to make them money in the Enterprise & Government market.

Smaller dies with better cost structures and more options are a far better solution, as was proven last round, where nV hemorrhaged money on the first gen of G200s while ATi had control over pricing due to their lower costs and greater flexibility.

This may help bring you up to speed;
http://www.anandtech.com/video/showdoc.aspx?i=3469&p=1

That also relies on the idea that there are no major external forces, like a bad DX implementation or problematic process @ TSMC (like 80nm HS and the first gen of 40nm) which can hurt the best of intended strategies.
September 12, 2009 11:07:01 AM

Amiga500 said:

As dndhatcher says, it's massive amounts of homogeneous cores. Get the 2/3/4 discrete GPUs (chips) talking properly, and it should be seamless.


Actually, I think you're confusing who's saying what; JDJ is for the multiple GPUs acting as one but being designed, built, and implemented as small individual chips, not vice versa. That's how the last generation went, and we've discussed it a few times, the last time before this probably being the release of the Anand article.

September 12, 2009 1:22:23 PM

Does anybody know when we are going to see proper benchmarks and reviews for the 5870?

Is the 5870 going to be released in two versions, 1GB and 2GB (like the 4870 512MB and 4870 1GB)?



September 13, 2009 2:12:12 AM

TheGreatGrapeApe said:
Dude, did you miss the last generation of chips?

BIG = BAD

<minutiae>

You are just looking for excuses to argue with me. Amiga500 understood what I meant.

I said it doesn't matter and shouldn't matter. Virtualize it into one logical unit and no one cares how the hardware is manufactured. If you virtualize it correctly, the entire concept of an x2 GPU or even multiple GPU cards is irrelevant. Making one huge virtual screen out of 24 physical monitors and one large virtual bank of GPU power out of 4 graphics cards (or 4x2 cards, who cares) frees hardware manufacturers to do whatever they need to without forcing software developers to duplicate their work for multiple hardware configurations.

michaelmk86 said:
Does anybody know when we are going to see proper benchmarks and reviews for the 5870?

Is the 5870 going to be released in two versions, 1GB and 2GB (like the 4870 512MB and 4870 1GB)?

One of the articles said the NDA is up in about a week.

IIRC a 5870(1GB), 5870(2GB) and 5850 are the ones they officially talked about shipping.
September 13, 2009 5:36:38 AM

dndhatcher said:
You are just looking for excuses to argue with me. Amiga500 understood what I meant.

I said it doesn't matter and shouldn't matter. Virtualize it into one logical unit and no one cares how the hardware is manufactured.


No, you toss around terms you don't understand in reply to comments you obviously don't understand, and I doubt Amiga500 would agree with you if he really looked at what JDJ said and then looked at your reply again. It's like you're saying it doesn't matter how you manufacture a part: if you use flubber and magic it'll be better, we just need to learn how to get the right balance of flubber and magic. :sarcastic: 

JDJ is talking about current manufacturing process benefits and strategic choices, and there it does make a difference, a huge difference. Virtualization is not a replacement, because of hard-set architectural barriers and advantages in chip making, things like I/O interfaces and latencies; memory alone is a major issue. It's fine for supercomputing, whose requirements are not bound by latency, just raw large-scale computing that takes a long time to churn through things that may be measured in 1x10^-15 s but doesn't need to finish in a set time or else throw out the result; whereas for GPUs you need speed and efficiency for the power you want, and from a manufacturing perspective you want better economies of scale to maximize your available resources, as well as to make sure you're not paying much more than the other guy to build your solution.
Virtualization helps when there is no other option, but it's not practical in a discussion of base chips. You can try to virtualize an HD5870 with a bunch of HD5300 chips, but the number of chips required to equal it makes that impractical. The barriers to inter-VPU communication and even memory communication would make it impractical to implement, especially since there needs to be a single point of communication with the CPU, in which case you end up with another bottleneck; or else you have them all communicate, and then you have oversaturation and duplication. Also, the software overhead is much larger when the CPU has to manage the virtualization, meaning you need a more powerful CPU to virtualize a more powerful GPU, or you need to build dedicated hardware to manage it, neither of which is attractive if it means adding resources outside the current production line. Just as you can emulate (errr, virtualize) DX11 hardware on a DX10 card, it still won't be as fast as actually having the hardware resources to do it within the chip.

Quote:
If you virtualize it correctly, the entire concept of an x2 GPU or even multiple GPU cards is irrelevant. Making one huge virtual screen out of 24 physical monitors and one large virtual bank of GPU power out of 4 graphics cards (or 4x2 cards, who cares) frees hardware manufacturers to do whatever they need to without forcing software developers to duplicate their work for multiple hardware configurations.


They don't need to work on multiple hardware configurations; that's why we have DX and OGL, and then tweaking after the fact. Whether you virtualize it or not, you still have to change how the application, in conjunction with the drivers, handles the workload. Some software developer has to work on your virtualized model, either the game dev or else the IHV's driver team.
And your example of a virtual screen made out of 24 physical monitors is, once again, a perfect example of how it's nowhere near as good as a single monitor with the same resolution. You also have to ask whether those 24 monitors are worse than a single screen at 1/6 of the resolution driven by an improved DLP projector. If you've ever watched a movie, I think you'd find a single 50ft screen showing a soft 12MP (4K) image (or even a 4MP [2K] image) far more attractive than a 20ft wall of 24 monitors showing a total of 55MP with distinct seams between them.

The goal is to remove the seams, and that's the flubber and magic again.

So, in short, your answer is like saying, "Does it really matter? We're going to be moving to nanotubes and optical processors and the whole process will change...." That's all well and good, but totally irrelevant to the near-term context JDJ was talking about.
September 13, 2009 9:37:46 AM

dndhatcher said:
Virtualize it into one logical unit and no one cares how the hardware is manufactured.


You will care because you pay for it in costs.


A smaller chip is easier to design.


A mid-range composed of 1 or 2 small chips* is cheaper to make than a mid-range composed of one crippled gigantic chip**


*where you are making full use of the transistors fabricated.

**where you are only using half the transistors fabricated.



Those are two fundamental reasons why small chips are better.... assuming the discrete chips appear seamlessly integrated to the software.
September 13, 2009 9:45:28 AM

One interesting development with the HD5800s is that ATI finally abandoned its sweet-spot strategy, which ran for two gens. No more intentionally handicapped (although not necessarily slow) cards to "fit" a certain price segment ($150/$250). This is like the X1950XTX: pure, unadulterated fun. You'd actually believe ATI is more than willing to deliver a killing blow.
September 13, 2009 9:56:10 AM

I agree to a point. Their high end is supposedly much larger comparatively than the last few gens, which will really hurt nVidia here, but I also see this as a more forward-looking design.
The 40nm process was the process from hell, much like the 80nm.
Going forward, the 32nm and 28nm processes are expected to be much easier to handle, and a shrink to those nodes could come much quicker this time than the 55nm-to-40nm transition did, so even at 330mm², at say 32nm, we're right back in the fold again.
September 13, 2009 8:24:14 PM

Amiga500 said:
You will care because you pay for it in costs.
<etc>

Thank you Amiga. That was sort of my point.

Your post has clarified for me why I am having so much trouble explaining my perspective. Everyone here seems to be thinking at the hardware manufacturing level, while I look at it from a software development view. Software is always the slower component of computer technology.

What I mean by "no one cares" is that I can code once and don't have to worry about the underlying hardware.

When I write a database, I don't have to write a bunch of code to look at how many hard drives there are and figure out how to split the data between them; RAID handles that for me. Seeing that happen with video would be, to me, far more significant than any single-generation increase in processing power.



Yes, GrapeApe, you are correct. I'm thinking long term, not just at next month's new toy.
September 13, 2009 8:43:24 PM

Long term past Larrabee, and still you have to think about what runs it and how that's made.

Your problem is that you're thinking database, when you need to think instructions and dependent functions. A GPU is not simply accessing pictures or textures and displaying them; it is building images from thousands of components that are run through thousands of operations, hundreds of which depend on shared inputs and outputs. So access to shared resources like registers and caches is a major part of making the chips faster as the workload gets tougher, and splitting those resources decreases efficiency significantly.
September 13, 2009 11:18:59 PM

soundefx said:
Don't get me wrong, this time I think ATi made Nvidia actually develop new technology to be able to beat them this gen., hence the slow development.

As TGGA said, it took them 6 months to beat the 4870x2, so there is no telling IF or how long it would take them to make a 5870x2 killer.

My only point was that while they are trying to produce the video card king, they are losing the mainstream battle.


And you just explained the power of competition: companies keep having to improve their technology to get an advantage over each other. But what you are saying is complete stupidity. Nvidia will be better than ATI at times, and ATI will be better than Nvidia at times; it's only a matter of time before the other comes out with something better. Stop thinking like a fanboy...
September 14, 2009 10:47:32 PM

computernewbie said:
And you just explained the power of competition: companies keep having to improve their technology to get an advantage over each other. But what you are saying is complete stupidity. Nvidia will be better than ATI at times, and ATI will be better than Nvidia at times; it's only a matter of time before the other comes out with something better. Stop thinking like a fanboy...


I am not thinking as a fan; I am simply stating what has been happening. I have cards from both companies, so how can I be a fan of only one?

The point I was trying to make is that Nvidia, because they were ahead of ATi, was simply renaming their old cards and passing them off as new. I refuse to go over all the 8800-to-9800 GTS names and specs, but that is what they did. Did ATi do the same at times? Yes, but they did bring DX10 and 10.1 to their cards. Now they are bringing Eyefinity and a great performance boost over the previous gen. Nvidia refused to move to DX10.1...

Now that DX11 is coming out, guess what: ATi is already there waiting, and where is Nvidia?

I am NOT saying that ATi is a better company than Nvidia; all I am saying is that this time ATi is pushing Nvidia hard for the video card crown. Nothing more, nothing less.

Oh, and before you call someone a fan or stupid, make sure you know what cards they have, and check to make sure that your IQ is higher than 5. That way you won't come off sounding like a monkey on crack. Even though I compared you to a monkey, I think I insulted the monkey.

Sorry mods, I hate being called stupid.
September 14, 2009 11:03:57 PM

Stupid is as St00pid does, life is like a box of Bananas ... [:thegreatgrapeape]

Now everyone chill, and don't make me throw the Box at y'all !!
September 14, 2009 11:15:05 PM

I'll simply duck.
October 20, 2009 3:16:36 PM

I am enjoying all the verbiage for sure! One thing I know: ATI will be "king" for a time, then it will be nVidia. If it weren't for this competition, I'm sure improvements would come at a much slower pace. I currently have two GTX 295s and I am almost 100% sure that I will sell them when the 5870X2 hits the stands. I believe that at least 20% of an nVidia card's price is for name recognition.