
RV770 up to 50% faster than RV670, launch in June

February 8, 2008 9:29:39 PM

http://www.nordichardware.com/news,7327.html
excerpt
Quote:
The current state of RV770 indicates that it should perform up to 50% better than RV670, which is pretty much what we're expecting (since this is the kind of leap you see with every generation...). The best thing about this is that ATI is on schedule for once and we should see the card in June; expect leaked benchmarks in May.



I wonder if they are still going to make the rumoured 4 core performance version (1 core for entry-level, 2 for mainstream) and if that 50% is a core vs core kind of comparison. That would make one hellishly fast card. (is that even a word :pt1cable:  )


first thread I started on toms forums :D  yay!
February 8, 2008 9:51:55 PM

If I'd known that the R770 was allegedly only 4-5 months away, I would have held off buying a 3870x2. I'd thought an incrementally improved R700 was six months away and the R770 multicore GPU was a year away.

I bought the MSI overclocked version and their tech support asked me to send them photos of the PCIe adapters included, because I can't get any video from the card with only the 6 pin PCIe. Since overclocking a 3870x2 requires an 8 pin PCIe as well, I bet the factory overclocked version needs it too.

Well, my Antec Neo 550 only has 2 six pin PCIe connectors, and MSI only included two "2 molex to 1 six pin PCIe" adapters; they did not include an 8 pin PCIe adapter. If worst comes to worst, I'll get a 1000 watt Antec that has an 8 pin connector next Friday.

Well, once I get it working, it will still be a good card. The 3870x2 is about 46% faster than the 3870, so a 50% faster R770 won't be all that much better by itself, but it might be worth getting a 790 board for Crossfire. For that, I will need a 1000 watt Antec.
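For what it's worth, the arithmetic behind that last paragraph can be sketched in a few lines. All of these figures are just the rumored/reported numbers quoted in this thread, not measured benchmarks:

```python
# Back-of-the-envelope check of the performance claims quoted in this
# thread. All numbers are rumors or review averages, not benchmarks.

hd3870 = 1.00     # single HD 3870 (RV670) as the baseline
hd3870x2 = 1.46   # "about 46% faster than the 3870" (reported average)
rv770 = 1.50      # "up to 50% faster than RV670" (best-case rumor)

print(f"3870x2 over 3870:  {hd3870x2 - 1:.0%}")
print(f"RV770 over 3870:   {rv770 - 1:.0%}")
# If both figures held, one RV770 would land within ~3% of the dual-GPU X2:
print(f"RV770 over 3870x2: {rv770 / hd3870x2 - 1:+.1%}")
```

So under these assumptions a single RV770 would roughly tie the 3870x2, which is why a side-grade looks unattractive without going Crossfire.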
February 8, 2008 10:08:38 PM

I personally think multi-core is a good idea.

Having multiple low-powered chips on a PCB is the future.
February 8, 2008 10:10:17 PM

I'm glad ATI's coming back up. Might even go with ATI if this keeps up.
February 8, 2008 10:28:23 PM

Pffft, I rushed my order for two 3870x2s and I'm still unable to Crossfire them. I don't care about the RV770; hurry up with a working quadfire driver lol. March is too far away; the 8.3 driver should have been released with the 3870x2.
February 8, 2008 10:29:23 PM

Evilonigiri said:
I'm glad ATI's coming back up. Might even go with ATI if this keeps up.


Well it looks like I need to get an Antec PSU that has a 6+2 pin PCIe adapter, because, though the Antec Neo 550 is listed as a single card solution from ATI, it doesn't have that. MSI should have included the adapters and I bet they did on some boxes.

Here's my technical support post to MSI and their response. My first request is at the bottom:

Quote:

MSI Tech. (02/08/2008): This is a special 8 pin plug required on high end PCI-E Gen 2 cards, but unfortunately we do not have it in stock now, so if you can, have the card replaced by your purchase vendor.

End User (02/08/2008): Okay, here are three photos in a zip. Don't you guys know what adapters are included? If overclocking a 3870x2 requires an 8 pin PCIe connector, then your factory overclocked card should require one. I'm not sure why one wasn't included in the package. I sure wish it had been designed by AMD to just use two standard PCIe adapters.

MSI Tech. (02/08/2008): Please kindly take a picture of the power adapter included in the package and attach it to this form in zip format so we can double check, thanks.

End User (02/08/2008): I just bought an MSI 690V motherboard and an MSI 3870x2 GPU. I can't get video to work on the GPU. I have the 6 pin connector attached, but not an 8 pin. The only 8 pin connector on my PSU is not a PCIe connector. You only have two adapters for 6 pin power. If this is a factory overclocked card, don't I need an 8 pin adapter? I have two 6 pin adapters with my PSU, but no 8 pin PCIe. Can you provide me one? Right now, I'm using the IGP, which isn't good enough. Please advise me about this card. The Antec Neo is listed as a recommended PSU by AMD. Thanks. One last question: when I tried to register the two products, I got a message "Registration is for End User Only". What's up with that?


There's no way I'm taking a 15% restocking fee at Newegg, so I'll just get a new power supply. I do not think that MSI should sell overclocked cards if they can't include an adapter for the situations where people have AMD rated power supplies that meet MSI's requirements, but do not have the 8 pin PCIe.

Sigh, no 20" monitor for me next week! It will be a $199 Antec TPQ 1000!


February 8, 2008 10:41:36 PM

You could just buy an 8-pin adapter.
February 8, 2008 11:04:07 PM

IndigoMoss said:
You could just buy an 8-pin adapter.


I'd rather get a 1000 watt PSU so that when the R770 arrives, then I can go Crossfire with one dual GPU and one multi-GPU. This 690V board is a stopgap because the older Nvidia board had "chipset limitations" that prevented the use of ATI cards. I'd really wanted to get a 790 board, but the way Phenom's going, I'll get a Q6600 and a Crossfire board. So, I might as well get the $199 PSU next week so I don't have to spend more in June.

I still think MSI should have made it clear that their card needed an 8 pin power connector. I'd suspected it did, because it's factory overclocked, but I'd assumed they'd include the adapter. They did list two power connectors under included items, but who'd have guessed it would be just two regular PCIe?

Sneaky weren't they?


February 8, 2008 11:52:12 PM

Quote:
i wouldn't hold your breath. also, last i checked GPUs just expand, they do not add cores; multi GPU yes, multicore no. i think what is meant by the cores thing is that since GPUs are symmetrical in nature to an extent, they can just disable different quadrants of the GPU if they are defective and then sell them as low cost parts.

also, why so much power needed, or is it just for connections?


Yes GPUs are highly parallel processing units, however, multiple GPU cores in the same die/package could scale much better than multiple GPU processors with the proper architecture and logic built into the GPU multi-core. This might also require less driver optimization. This is where AMD's processor division can aid in the evolution of the GPU and we all know that multi core GPUs are part of their path to 'fusion'.

Let's hope the high end RV770 (XT?) is really a 4 core offering. I think 2 cores are more likely.
February 9, 2008 1:54:48 AM

IndigoMoss said:
You could just buy an 8-pin adapter.


$5 on BFG's site :) 
February 9, 2008 2:45:51 AM

aevm said:
$5 on BFG's site :) 


Probably the same at Fry's or Newegg, but buying a thousand watt PSU is more fun. I still think that MSI should include the adapter if they're going to factory overclock. I still might get one this week, but I don't have too many molex connectors left. It would have to use just two, like the 6 pin adapter they included that I don't need.

Multicore GPUs will be interesting. If they do one quad core GPU for the $499 enthusiast, then all they need to do is sell upper mainstream, mainstream and entry level as triple, dual and single cores. Alternatively, the enthusiast part could be two dual core GPUs, with one mainstream dual core and a single core entry level.

I don't see any $600 parts in this list. ATI has pretty much aimed at the best performance they can tweak out of a design at what is, for Nvidia, an upper mainstream part. Nvidia will have to go multicore GPU too, but will they be hurt by not having a processor division?

When is Intel expected in the discrete GPU business?


February 9, 2008 2:46:27 AM

From everything I heard, the r700 won't be out until early 2009. Hope I am wrong!
February 9, 2008 3:00:21 AM

I'd heard the R700 in six months and R770 in early 2009 with Swift (fusion of one GPU core with three CPU) in late 2009. Maybe AMD's trying to make up for all the missed deadlines in 2008?

Waiting for news on the successor to R670/680 almost makes me want to make up names for unannounced cards like the people waiting for an Nvidia 9800gtx. How does the HD 4200 sound? It's fun to mimic the other company's past generation success.
February 9, 2008 3:37:54 PM


Well, it's all "if this is this fast, then that needs to be at least this" at the minute.
I don't know how good Nordic are, but I wouldn't hold my breath until some more previews etc. start to surface.
Right up front I want to say that this isn't aimed at Stranger, but he isn't the only one I have seen posting who seems to be having a hard time accepting the whole multi GPU thing. The X2 is reported as a transitional product, so it seems that multi GPU is the logical thing they are working towards.
I know it's from Wikipedia, but here is a link that would seem to suggest that they do intend to go multi core in a big way.
mactronix
http://en.wikipedia.org/wiki/R700
February 9, 2008 3:38:34 PM

Pardon my scepticism on this, but a couple things seem to be getting overlooked. This card is supposed to be "up to" 50% faster. That doesn't mean that it will be 50% faster, only that in particular cases it might be 50% faster. It also might only be 5% faster. We won't know for sure what the numbers actually are until one gets out and is benched.

The other potential problem is that we used to hear how great the Phenom was going to be, how it was supposed to be 40% faster than the previous chip. Well, we know how that turned out. So again, pardon my scepticism, but I'll believe it when I see actual benches that show it.
February 9, 2008 8:56:53 PM

Don't compare the Phenom to R770. Different divisions of the same recently merged company. ATI has delivered great products for far longer than AMD delivered good XP and X2 processors. I trust ATI right now more than AMD, even though "AMD" is on the chipsets and "ATI" is now just a marketing designation.

The stock 3870x2 performs, on average, 46% faster than a single 3870. In The Witcher, it provides 46 fps over a single 3870's 30 fps. I see no problems with R770 equaling that with one multicore GPU. I can't wait to set up the 3870x2 that's just waiting for a PSU, as the Antec Neo 550 does not have a 6+2 connector for 8 pin power.

I went to Fry's to see if there was any kind of adapter for 8 pin PCIe, but the guy tried to sell me a Xeon motherboard single molex to 8 pin adapter. LOL. I also e-mailed Antec to see if I could use one of their Neo 650 replacement 6+2's with the 550, but I'm guessing not.

Fry's wanted $249 for the Antec TPQ 850, which is too much, so I'll just wait until next Friday when I can order it for $179 from Newegg (don't know if the $50 rebate will be in effect next week but $129 would be nice). I don't mind buying the new PSU, but I would have gotten that first and the card later.

If a factory overclocked card needs the 8 pin power connected just like a home overclocked card, then the company needs to include the correct adapters or make it clear in the system specs. Spec said 550 watt PSU, the Antec meets that. They said "2 power connectors included" so I expected one 2 molex to 6 pin PCIe and one 6+2 or 8 pin adapter for the other power connector, but I got two 6 pin PCIe's instead and MSI tech support just says "take it back".

Really? I'd rather get their product to work, even if I have to spend more upgrading. They shouldn't encourage people to return products that should work with the upgrade. I'll return it to them if it's a bad card, but I don't know that yet.
February 9, 2008 9:24:55 PM

This info, 50%, is supposed to come from manufacturers of various boards et al. One thing to remember: the shortcomings of the R600 WILL be rectified, and since the 38XX's are mostly nothing but a die shrink, we still haven't seen those improvements. Also, I'd like to add that once DX10.1 or DX11 is implemented, the software-driven AA abilities of the ATI products will shine.
February 9, 2008 10:04:01 PM

June?
If they had that 4 months ago, then hell yea, but June is too late.
February 10, 2008 2:02:42 AM

sailer said:
Pardon my scepticism on this, but a couple things seem to be getting overlooked. This card is supposed to be "up to" 50% faster. That doesn't mean that it will be 50% faster, only that in particular cases it might be 50% faster. It also might only be 5% faster. We won't know for sure what the numbers actually are until one gets out and is benched.

The other potential problem is that we used to hear how great the Phenom was going to be, how it was supposed to be 40% faster than the previous chip. Well, we know how that turned out. So again, pardon my scepticism, but I'll believe it when I see actual benches that show it.


^^Agreed, this is how things get overhyped and then people are not happy with the final product, i.e. Crysis. Things don't always happen the way they're supposed to; fabs get lower than expected yields (Phenom) and then chips have to be downclocked to remain stable.
February 10, 2008 2:23:08 AM

Only four more months? I better start saving up for one of these. :p 

I'm a little reluctant to upgrade my GPU again after just getting an 8800GTS in September. I'd like my $280 investment to last a little longer than that. :( 
February 10, 2008 4:58:11 PM

badgtx1969 said:
Yes GPUs are highly parallel processing units, however, multiple GPU cores in the same die/package could scale much better than multiple GPU processors with the proper architecture and logic built into the GPU multi-core. This might also require less driver optimization. This is where AMD's processor division can aid in the evolution of the GPU and we all know that multi core GPUs are part of their path to 'fusion'.

Let's hope the high end RV770 (XT?) is really a 4 core offering. I think 2 cores are more likely.


Yeah that's what I've been thinking too.

http://www.tomshardware.com/forum/248085-33-crown-back-...
http://www.tomshardware.com/forum/page-248120_33_99.htm...

Multi-core with 2+ dies on a single package makes a lot of sense for the high end.

I doubt it would be 4 cores to start, simply because it's less practical and you lose a lot of the benefits of a more modular plan of 2 cores and then 1-2 dies.

If they can get it out by June that's pretty good.
February 10, 2008 4:59:35 PM

enewmen said:
From everything I heard, the r700 won't be out until early 2009. Hope I am wrong!


That seems to be people confusing the integrated 700-class chip built into mobos with the desktop card.
Everything says the R700 will come within a similar timeframe as the RV770 it's built from.
February 10, 2008 5:08:22 PM

sailer said:
Pardon my scepticism on this, but a couple things seem to be getting overlooked. This card is supposed to be "up to" 50% faster. That doesn't mean that it will be 50% faster, only that in particular cases it might be 50% faster. It also might only be 5% faster. We won't know for sure what the numbers actually are until one gets out and is benched.


Which is no different than every generation before it. They weren't globally 50% better either.
That it's up to 50% faster in even the most select areas would be good, because it denotes more graphics power regardless of whether it shows in one test or not. For something this immature in the process, without final clocks and with no driver info, if this is just a clock-for-clock increase in the raw numbers then that's a good start.
February 10, 2008 5:48:59 PM

Am I old here, or has everyone forgotten about the T-birds?
Everyone always leaves them out of AMD's Pentium beaters.
And they were a good step ahead of the P3's.
Mods, mind not deleting my post this time...
February 10, 2008 5:49:39 PM

But for good measure: if I were buying tomorrow, it would be Intel.
February 10, 2008 6:10:24 PM

There's no deleted post in this thread, so I'm not sure of the reference.
February 11, 2008 12:28:18 AM

Makes you wonder what Nvidia has in hiding, waiting to release at the perfect moment. There's no way the 9800GX2 is all they've got... they've been sitting on the G80/G92 core for a year and a half; there's no way they don't have something else nearly ready.
February 11, 2008 3:27:11 AM

I bet Nvidia has a G92 version of the G80 with the full 128 SPs and 24 ROPs that they're going to call the 9800GTX.
February 11, 2008 5:32:15 AM

yipsl said:
"In The Witcher, it provides 46 fps over a single 3870's 30 fps."


I'm getting 46 on a single 3870 and 60 in Crossfire @ 1600, all high, 2X AA.

Is this from a benchmark somewhere? Walking through Vizima sucks though; it stutters regularly.
February 11, 2008 6:05:01 AM

Kari said:
http://www.nordichardware.com/news,7327.html
excerpt
Quote:
The current state of RV770 indicates that it should perform up to 50% better than RV670, which is pretty much what we're expecting (since this is the kind of leap you see with every generation...). The best thing about this is that ATI is on schedule for once and we should see the card in June; expect leaked benchmarks in May.



I wonder if they are still going to make the rumoured 4 core performance version (1 core for entry-level, 2 for mainstream) and if that 50% is a core vs core kind of comparison. That would make one hellishly fast card. (is that even a word :pt1cable:  )


first thread I started on toms forums :D  yay!


Wonder what Nvidia has in store. They haven't been sitting on their a$$, but from what I'm hearing it might be another FX unless they have fixed the heat issue. Any 9800GTX benchmarks around yet?

And what stops them strapping two GPUs together with some minor crossbar linking them? Scaling performance? I wouldn't be surprised if sooner or later they make their own subsystem, like current desktop systems: memory controllers, physics processor, memory hub controller and cache for just the video card alone. Nvidia wanted their own video socket a while ago, and even hinted they wanted to make their own CPU too, if I remember correctly. Meanwhile, ATI and AMD are going to make sense soon with their Fusion. And in all this, what is Intel planning with their graphics processor??

Interesting times ahead.
February 11, 2008 7:22:28 AM

systemlord said:
I bet Nvidia has a G92 version of the G80 with the full 128 SPs and 24 ROPs that they're going to call the 9800GTX.


That's pretty close to my fears. I'm thinking a reworked G92 GTS with the 8800GTX's 384-bit memory interface and 768MB of RAM, maybe a few extras tacked on for marketing purposes. Maybe 10-15% or so faster than the current Ultra. I would of course love to be proven wrong, but the days of regular massive single-chip performance increases seem to be numbered :( , which is a pain as the game I play most doesn't work with Crossfire or SLI yet :( 
February 11, 2008 10:58:16 AM

Nah, they're not numbered. That would be going against some serious grain there. ATI dropping the ball with the R600 has slowed the pace of development, but if you think about it, all we've seen since the 8 series and HD 2000 series are refreshes. Two totally new architectures are coming out, and there's no reason to believe they won't both be a lot better than the lines they're replacing. Plus, DX10 is a new API; Nvidia's and ATI's forthcoming attempts should make a much, much better go of running it well. Just like the first DX9 cards weren't all that hot at DX9, but the generation after was greatly improved.
February 11, 2008 3:09:36 PM

bfellow said:
Speaking of Nvidia, here are some unconfirmed benchmarks showing the 9600GT


What if anything does that have to do with this thread? :heink: 
February 11, 2008 3:14:17 PM

enewmen said:
From everything I heard, the r700 won't be out until early 2009. Hope I am wrong!



From what I hear it is coming out next week..... Wait... that was just the voice in my head....um.... or no.. I think I read it on the internet..... :pt1cable: 
February 12, 2008 12:44:21 PM

hok said:
I'm getting 46 on a single 3870 and 60 in Crossfire @ 1600, all high, 2X AA.

Is this from a benchmark somewhere? Walking through Vizima sucks though; it stutters regularly.


I got the numbers from Anandtech's review of the 3870x2:

http://www.anandtech.com/video/showdoc.aspx?i=3209&p=10

Though I'd thought it was actual gameplay, it's a FRAPS score of the first cutscene. The Anandtech review of The Witcher says that at least a 3850 is needed for reasonable performance, though even an 8800gtx stutters in some areas:

http://www.anandtech.com/showdoc.aspx?i=3207&p=6

When I get the PSU, I'll set everything up and then read the support forums to edit the configuration files to avoid crashes. Then I'll see how it runs at either 1024 x 768 or 1280 x 1024. I won't have a 20" monitor till the middle of March. It's funny how Anandtech considers 1680 x 1050 to be a "not too demanding" resolution. The Witcher isn't even DX10. Just wait till DX10.1 CRPGs arrive.

Will I be CPU limited with an Athlon X2 4600+?
February 12, 2008 1:50:57 PM

I game at 1080p and I feel limited now with an FX-60 dual core; it's why I am getting a QX9650. While you're gaming at a low res, you're probably bottlenecking your GPU with that. You'd actually be better off at a higher res so your GPU can't spit as many frames at your CPU, overloading it.
February 14, 2008 9:33:38 PM

What I find questionable is the subjective way reviewers decide that a particular CPU limits a particular card at a particular resolution. I know I'll be CPU limited for a month with a 17" CRT, but I didn't think I'd run into it with a 20" LCD next month.

One of the reviews implied that I will, because they had a Q6600 in their testing rig. What, do I have to get a 23" or 24" LCD with 1920 x 1200?

Since I'm buying next month, I could afford this ACER:

http://www.newegg.com/Product/Product.aspx?Item=N82E168...

I can't go over $399 excluding shipping. That's the price of the HDTV we were considering, but with an ATI card, this could be an HDTV until that income tax rebate in May. It will actually be higher res than the HDTV we looked at.

I'll still just have an Athlon X2 4600+ CPU until 45nm Phenoms lead to my decision whether to stay with AMD or go Intel. When will those be out? Not before early 2009, I think.



February 14, 2008 10:32:06 PM

I am running The Witcher on an MSI 939 board with a 4800X2, 4x512 Corsair DDR and an HD2900 Pro 512, on the Vista 64 OS. In The Witcher everything is set on high except for AA, which is 2X at 19x12 res. The game automatically made those settings and I can't go any higher? I do get some stuttering, like a buffer issue, but the biggest annoyance is the load times. With the patch it's not so bad now, but it's still annoying.
February 15, 2008 10:22:56 PM

Well, the general rule of thumb is this: if you game at a low res, your card performs faster than at a higher res. Now, your CPU still needs to process the rendered frames, so if your GPU does too well, it will slow your system down since your CPU can't keep up. Challenge your GPU with a higher resolution and it gives a weaker CPU some breathing room to keep up. It's kinda backwards, but it's the way things work.

Now, if you're pondering an HDTV: if it's a 1080p screen, then it's high res enough that it will challenge a single GPU easily; two is better and three is best, depending on the game you're playing. Point being, the resolution should give your CPU some breathing room. When I tested my FX-60 I got nearly the same scores at 1280x1024 (3DMark's default res) as I did at 1920x1080, without any AA or aniso. Crank those up and the scores dropped accordingly. Anywho, it was a great way for me to see first hand that bottleneck reviewers talk about.
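The rule of thumb above can be captured in a toy bottleneck model: the frame rate is capped by whichever saturates first, the CPU's fixed per-frame work or the GPU's per-pixel work. Every number here is invented for illustration; note that in this simple model fps never actually rises with resolution, it just stays flat while the CPU is the limit, which matches the 3DMark observation:

```python
# Toy CPU/GPU bottleneck model. The frame rate is the minimum of a fixed
# CPU-side cap and a GPU-side cap that shrinks as the pixel count grows.
# Both budgets below are hypothetical, illustrative numbers.

def fps(cpu_cap_fps, gpu_pixels_per_sec, width, height):
    gpu_fps = gpu_pixels_per_sec / (width * height)  # GPU-side ceiling
    return min(cpu_cap_fps, gpu_fps)                 # slower side wins

CPU_CAP = 60         # pretend the CPU can prepare at most 60 frames/s
GPU_BUDGET = 120e6   # pretend the GPU can shade 120M pixels/s

for w, h in [(1024, 768), (1280, 1024), (1920, 1200)]:
    print(f"{w}x{h}: {fps(CPU_CAP, GPU_BUDGET, w, h):.0f} fps")
```

With these numbers the first two resolutions are CPU-limited (a flat 60 fps), and only 1920x1200 dips to about 52 fps, where the GPU finally becomes the bottleneck.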
February 15, 2008 11:11:26 PM

We won't see a "multi-core" gpu in the way people are thinking.

People seem to think of current GPUs as "single core" and future ones as putting "two GPUs on a Die" as with CPUs.

In reality GPUs are already massively parallel, the 8800 could be argued to be "128 core", and the 2900, and its die shrinks, "320 core".

What we will see is either, two GPU packages on a board, or, as nobody seems to have considered, two dies in a package (as in Pentium D 900 or Core 2 Quad).

February 15, 2008 11:33:03 PM

Agreed, GPUs are massively parallel to begin with on a single die. Honestly, I have no idea what direction multi GPU will end up taking. Who knows, with Fusion coming, and Intel's Larrabee? Additions to the Nehalem family could leave us with some radical changes. Maybe there won't be multi GPU at all... maybe we'll see a radical increase in stream processors via a separate socket, or on board the CPU as AMD and Intel have suggested. I prefer the separate socket idea AMD has talked about as an expansion on Fusion and socket-sharing architecture, and/or the use of HTX slots. Intel's way, they stick you with two upgrades at once (and maybe AMD's too, depending on their choice of execution), which, as much as I hate it, does make a bit of sense considering how many of us consistently upgrade our rigs anyway... Point is, a lot is changing in the GPU market. It'll be interesting to see how it plays out...
February 16, 2008 12:28:12 AM

darkstar782 said:

What we will see is either, two GPU packages on a board, or, as nobody seems to have considered, two dies in a package (as in Pentium D 900 or Core 2 Quad).

Putting two on a die would only make sense for budget cards, as it would enable the different card manufacturers to reuse an older PCB design.
For most cards it makes sense to stick a single chip per GPU, as current GPUs lack a chip to chip IO unit. In addition, it might prove more difficult to implement a single die with a huge pinout instead of multiple chips with simpler pinouts. Heat and power requirements might become a problem too.
If they added a chip to chip IO unit and put multiple GPUs on a single die, why not design a larger GPU without those IO units? That would be more efficient. Adding a few more clusters of shaders is way easier, cheaper and more efficient.
The reason for the whole multi-GPU bonanza is that the transistor count of graphics cards grows quite fast. Just compare the number of transistors a GPU has to that of a CPU. That trend has been going on for a while, and if it continues, multi-chip solutions are the only viable way out. While it may be possible to manufacture high end dual GPU solutions on a single die, the technology required would be cutting edge and quite expensive. Multi-chip is the middle road between cost efficiency and performance efficiency for companies like Nvidia. They don't like the solution themselves, as it is always only a stop-gap before a smaller manufacturing process falls off the table of the big players.
The funny part is, they sell it as a feature.
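The cost side of that argument can be made concrete with the textbook Poisson yield model, in which the fraction of defect-free dies falls exponentially with die area. The defect density and wafer cost below are invented purely for illustration:

```python
import math

# Poisson yield model: fraction of good dies = exp(-defect_density * area).
# Cost per *working* die therefore grows much faster than die area does.

def cost_per_good_die(area_cm2, defects_per_cm2=0.5, wafer_cost_per_cm2=10.0):
    good_fraction = math.exp(-defects_per_cm2 * area_cm2)
    return area_cm2 * wafer_cost_per_cm2 / good_fraction

monolithic = cost_per_good_die(4.0)      # one big 4 cm^2 GPU
multi_chip = 2 * cost_per_good_die(2.0)  # two 2 cm^2 dies per package

print(f"monolithic: ${monolithic:.0f} vs two smaller dies: ${multi_chip:.0f}")
```

Under these toy numbers the big die costs nearly three times as much per working chip. That yield pressure is the pull toward multi-chip; the interconnect and IO overhead described above is the countervailing cost.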
February 16, 2008 8:39:27 AM


To atomicWAR:
I'm glad someone else gets the whole display size/res restriction thing. I have tried to explain it before and just get shouted down, because people can't/won't make the jump from something that sounds logical to someone with experience in the matter telling them "yes, it doesn't make logical sense, but it works this way" :) 
Also, I don't get the reluctance of people to believe in the whole multi-core GPU concept either.
As Slobogob said, the transistor count is going through the roof and they are close to the limits of what can be done as far as shrinking the process goes.
In answer to this: "If they added a chip to chip IO unit and put multiple GPUs on a single die, why not design a larger GPU without those IO units? That would be more efficient. Adding a few more clusters of shaders is way easier, cheaper and more efficient."
The reasons are almost the opposite of your way of thinking. Larger GPU = more heat/power etc. Going the multi GPU way, they have a very much simplified process; it gives them the option of having the same core produced over and over again, and they simply put more cores on the better cards.
Yes, I know it's Wikipedia, but this is the best explanation of the intended path that I have seen:
http://en.wikipedia.org/wiki/R700
Mactronix
February 16, 2008 9:22:41 AM

Going to a higher res WON'T get you higher numbers. It only demands more from your GPU, so your fps will be lower.
February 16, 2008 9:37:26 AM

JAYDEEJOHN said:
Going to a higher res WON'T get you higher numbers. It only demands more from your GPU, so your fps will be lower.


It's not a cure-all and it won't work in all instances, but likewise, when your GPU is more or less maxed out, dropping the res won't result in a frame increase, which most people seem to take as a gimme. When your GPU has legs to spare and your CPU is straining, upping the res can increase the frame count.
mactronix
February 16, 2008 10:20:57 AM

mactronix said:
when your GPU has legs to spare and your CPU is straining upping the res can increase the frame count.
mactronix
No it won't. You could get about the same if you were CPU limited at the lower res, but more likely it will decrease a little.
February 16, 2008 3:12:21 PM

mactronix said:

The reasons are almost oposit to your way of thinking. Larger GPU = more heat/power etc, going the multi gpu way they have a very much simplified process, it gives them the option of having the same core being produced over and over again and they simply put more cores on the better cards.
Yes i know its wicki but this is the best explanation of the intended path that i have seen
http://en.wikipedia.org/wiki/R700
Mactronix


The problem they have with multiple cores is the transistor overhead due to additional IO units, and the scaling. If they can improve the scaling, it could become a viable option. The problem is, GPUs are already highly parallel and their architecture reflects that. Why should anyone produce a chip with 12 shaders and an IO unit and put three of them on a die, if you could manufacture a chip with 36 shaders and a single IO unit for half the price?
The interchip connects will never be as fast as the communication between chip parts in an architecture that was planned as monolithic, where there is almost no need for explicit communication at all. Just compare the communication between the two cores of a Core 2 Duo and those of a Smithfield Pentium D. Apart from the CPU core changes, the difference is quite obvious.
Packing as much punch as possible into a single chip is pretty much Nvidia's concept.
ATI/AMD has a different one, since they actually plan to integrate a GPU on the CPU die. For them it has the additional benefit of having smaller GPUs that can communicate with each other and scale well. I'm willing to bet that ATI/AMD puts (relatively speaking) a lot more work into Crossfire and multichip solutions than Nvidia.
The multi-chip solution is, at least that is how I see it, a bit of a backup strategy. As long as the manufacturing process can keep up, single chip solutions will be better. Should there be a hiccup in the process transition, though, or should GPUs grow even faster, multi-chip solutions become viable.
It would be very interesting to see what would happen if, by some stroke of luck, TSMC suddenly offered 32nm to Nvidia/ATI. Or if the opposite happened, and by some fateful accident the new processes got delayed for 10 months or so.
February 16, 2008 3:38:46 PM

That article has a lot of bad facts; a single 3870 is not on par with an 8800 GT. The 3870 X2 performs well against the current Nvidia cards, in that its performance is on par with them.

I'd just like to know which version of the RV670 the RV770 is supposed to outperform, 1 GPU or 2? Because if it's one...

February 16, 2008 3:50:02 PM

Going multi GPU for AMD is a lot cheaper and somewhat more consistent or efficient with their R&D budget. They won't necessarily have to spend money on hardware, which would possibly cost them more than software.