4870x2 HardOCP Preview - taking the fluff out of reviewing t -.- t

July 14, 2008 1:59:41 PM

Finally there is a review that will show what the R700 is capable of, in a good light and in a non-CPU-bottlenecked state.

http://enthusiast.hardocp.com/article.html?art=MTUzMSwx...

I direct you specifically to the 24xCSAA with ADAA enabled to show you what this card is truly capable of:

http://enthusiast.hardocp.com/article.html?art=MTUzMSw4...

Age of Conan 2560*1600, 8xADAA - 4870x2 Crossfired:

http://enthusiast.hardocp.com/image.html?image=MTIxNTk3...
July 14, 2008 2:11:38 PM

Any X2 card, or SLI/CF setup with these latest cards, will be CPU bottlenecked. I almost like [H]'s way of doing reviews; it is different, and brings a few insightful things.
July 14, 2008 2:21:40 PM

Hey guys, I have a hardly used 8800 GTS 512 with the original box for only $500.00... it was only used on Sundays by a little old man to play bingo!
July 14, 2008 2:32:16 PM

LOL, trade ya straight up for a slightly used GTS 320... PLUS I'll throw in a slightly used 1900XT 512MB.
July 14, 2008 2:51:29 PM

I'd trade you my 9800 Pro... but I don't have another AGP card for my Athlon XP box >_>
July 14, 2008 2:58:48 PM

Nice performance, and with pre-release drivers, so it should be very interesting soon. I am running a single 8800GT @ 1920x1200 and I can play all my games maxed except Crysis; neither of these setups, even in SLI and Crossfire, can do that, so I think I might have to wait for the next generation to upgrade.
July 14, 2008 3:07:10 PM

I have doubts that release drivers are going to catapult it any further than it is, but the performance is good to say the least. A release driver might bump up Crysis performance though, seeing as they are communicating through an onboard XF bridge.

On most dual-GPU setups with mature drivers (AMD or Nvidia), Crysis scaling adds about 50% on top of a single GPU's framerate, so I would say there will be some improvement there.
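
(To put rough numbers on that claim, here's a minimal sketch in Python; the 30 fps baseline is an assumed example figure, not a benchmark result:)

    # Toy model of the ~50% dual-GPU Crysis scaling described above.
    # single_gpu_fps is an assumed example number, not measured data.
    single_gpu_fps = 30.0
    second_gpu_gain = 0.5               # second GPU adds ~50% of one card's fps
    dual_gpu_fps = single_gpu_fps * (1 + second_gpu_gain)
    print(dual_gpu_fps)                 # 45.0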
July 14, 2008 3:12:58 PM

ovaltineplease said:
I have doubts that release drivers are going to catapult it any further than it is, but the performance is good to say the least. A release driver might bump up Crysis performance though, seeing as they are communicating through an onboard XF bridge.

On most dual-GPU setups with mature drivers (AMD or Nvidia), Crysis scaling adds about 50% on top of a single GPU's framerate, so I would say there will be some improvement there.

I don't think release drivers will make a night-and-day difference, but they should certainly improve the scaling in Crysis, where the performance of two cards was actually worse.
July 14, 2008 3:20:47 PM

mathiasschnell said:
O.O

Wow... just... wow...



Yeah, and you criticized me for calling a test on a 3.2 GHz CPU a fluff test.

Now do you understand how much a higher-clocked CPU can show the potential of a GPU?

Anyways..
July 14, 2008 3:27:19 PM

Just think. All of these benchmarks will have to be re-run when Nehalem is out.
July 14, 2008 3:30:17 PM

Nah, by the time Nehalem is out we'll have a whole new suite of games to bench on...

Far Cry 2 and STALKER: Clear Sky being the two most anticipated for next-gen graphics.
July 14, 2008 3:30:19 PM

ovaltineplease said:
Yeah, and you criticized me for calling a test on a 3.2 GHz CPU a fluff test.

Now do you understand how much a higher-clocked CPU can show the potential of a GPU?

Anyways..

It was only at 3.6, though; I bet you could spend half the money on a Q9550 and get the same performance.
July 14, 2008 3:32:16 PM

All the CPU "experts" criticised me for saying this a few months ago, that common CPUs will bottleneck this gen of graphics cards; makes me wonder if they aren't all caught up in a bunch of hype. Like more cores? What's funny is, GPUs are the parallel processors heheh
July 14, 2008 3:32:40 PM

ausch30 said:
It was only at 3.6, though; I bet you could spend half the money on a Q9550 and get the same performance.



Or buy a C2D and clock it to 4.0 GHz for even less and get the same performance in these titles :p 

But really, there is a pretty marked difference between 3.2 and 3.6 on a quad core as far as eliminating CPU issues goes, especially in titles like Crysis and AoC that will actually use the helper threads when possible.
July 14, 2008 3:42:33 PM

ovaltineplease said:
Or buy a C2D and clock it to 4.0 GHz for even less and get the same performance in these titles :p 


That's why I decided to go with the E8400 when everyone was saying Q6600. I'm going to build a new system when Nehalem comes out, though; I don't think I'll be able to resist.
July 14, 2008 3:45:00 PM

Me either... posting to test my new sig!

Nehalem, unless Propus is the sh^^... but I doubt it.
July 14, 2008 3:45:16 PM

ausch30 said:
That's why I decided to go with the E8400 when everyone was saying Q6600. I'm going to build a new system when Nehalem comes out, though; I don't think I'll be able to resist.



hehe, yeah; I use an E8400 too, at 4.05 GHz.

I think when Nehalem hits I'll drop in a last-gen Penryn quad core at that point, whatever is highest on the clock list with a locked multiplier; it should be pretty cheap by then.

I think I'll wait to adopt Nehalem; it's prolly gonna be too expensive, and I like to tweak for my dollar value :p 
July 14, 2008 3:46:40 PM

re-test ! stupid forum !
July 14, 2008 3:50:26 PM

royalcrown said:
re-test ! stupid forum !


Who me?
:pt1cable: 
July 14, 2008 3:52:23 PM

royalcrown said:
re-test ! stupid forum !

What I do is have two windows open: one with the configuration menu and one on a thread I've posted in. After you save your changes, reload the other one and look at your post. When you change anything, it changes on all the posts you've made, not just from that point on.
July 14, 2008 4:09:26 PM

The 4870x2 is two put together, so Crossfire is really four, right? But it's compared to the 280 in SLI, so only two of them? I assume this is a price thing? Because can't you SLI three of the 280's? But I guess if you can get 4870x2's in Crossfire for the price of two 280's, then it is a valid comparison. For people who are trying to eke out every bit of performance, it would be interesting to see a comparison of the best available. So max 280's vs max 4870's. Just a thought.
July 14, 2008 4:14:04 PM

Wondering just how fast a processor we would need in order for the CPU not to be the bottleneck in demanding titles?
July 14, 2008 4:16:25 PM

Very nice performance with Age of Conan. My 4870 at 780/1065 gets about 55 fps with all shadows on high, bloom, 8xMSAA, 16xAF, all settings maxed at 1680 x 1050 in that same zone (Old Tarantia), 200+ indoors :D . I can't complain one bit. Although my Q6600 MIIIIIGHT be holding it back; it's OC'ed to 3.0, and I'm gonna push it higher once I put my AC7 on it and ditch this faulty Xigmatek. I'm idling at 50C right now...

The 4870 X2 was obviously built for max settings at extremely high resolutions, something nothing else on the market can do for under $1000, basically. I say it wins despite the poor drivers, massive heat, and eardrum-blowing noise.
July 14, 2008 4:16:59 PM

bdollar said:
The 4870x2 is two put together, so Crossfire is really four, right? But it's compared to the 280 in SLI, so only two of them? I assume this is a price thing? Because can't you SLI three of the 280's? But I guess if you can get 4870x2's in Crossfire for the price of two 280's, then it is a valid comparison. For people who are trying to eke out every bit of performance, it would be interesting to see a comparison of the best available. So max 280's vs max 4870's. Just a thought.

You are correct, it is 2 GPUs against 4 GPUs, which is part of the reason for the poor scaling. One 4870 is about 10-15% behind a GTX280, and you can see that the X2, which is 2 GPUs, doesn't produce double the performance. As the number of GPUs increases, the performance gained decreases, so don't expect 3x GTX280 to give 3x the performance.
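
(A minimal sketch of that diminishing-returns point; the per-GPU contribution factors below are illustrative assumptions, not measured figures:)

    # Illustrative diminishing-returns model for 1-4 GPUs.
    # The marginal-gain factors are assumptions made up for this example.
    base_fps = 40.0
    marginal_gain = [1.00, 0.70, 0.45, 0.25]   # GPU 1 through GPU 4
    total_fps = 0.0
    for n, gain in enumerate(marginal_gain, start=1):
        total_fps += base_fps * gain
        print(n, "GPU(s):", round(total_fps, 1), "fps")
    # 4 GPUs land around 2.4x one GPU here, nowhere near 4x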
July 14, 2008 4:20:34 PM

Right. I get that they don't scale, which is really too bad; if they did, we would see some crazy numbers. It just would be nice to see best available vs. best available. Not that I don't want to see comparably priced items compared (assuming they are??); that is probably more valuable to the average person, since you typically buy based on price. But also keep in mind that lots of people considering buying 2 x 280's or 2 x 4870x2's could probably afford to throw another 280 into the mix if it was worth it. So a comparison would be nice.
July 14, 2008 4:31:06 PM

Considering the 4870 X2 uses about the same amount of power as one GTX 280, you're going to have to take a lot more things into consideration when putting two 4870 X2's against three GTX 280's, like more than likely needing a new power supply, among other things.

At that point you are at an estimated 500W for the ATI setup and 750W for the Nvidia setup, and an estimated $1000 for the ATI setup and $1500 for the Nvidia setup, plus another $300 for a 1200W power supply, while the ATI setup will only need an 800W or so, which would be around $100 cheaper. So...

ATI = ~$1200
Nvidia = ~$1800

...with probably a 5% performance difference separating them, if that. And if you want to REALLY nitpick, you're gonna need a full tower case to house three dual-slot cards, so add another $250 for a CM Stacker, oh, and another $300 for a 790i Ultra board, which you'll need for that full-speed Tri-SLI. Meanwhile the two 4870 X2's would fit inside my $80 Raidmax Smilodon mid tower, or any mid tower with no bottom drive cage (or a removable one), while needing only a $150-$250 AMD 790FX or X38/X48 motherboard.

Total...
ATI = ~$1400
Nvidia = ~$2400

This can go on all day, as you can see. Basically the idea here is that Tri-SLI is pretty much pointless, pure eye candy, at least until scaling is MUCH MUCH better. You could really say the same about Quad-Crossfire. There is a reason Tri-SLI and Quad-Crossfire are never benchmarked, and it's not for financial reasons. The dual-card setups are going to give you like 90% of the performance a 3/4-way setup would.
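
(For what it's worth, here's the same tally as a quick script, using only the rough dollar estimates from the post above; every figure is the poster's guess, not a quoted price:)

    # Back-of-the-envelope totals from the estimates above.
    ati_parts = {"two 4870 X2s": 1000, "~800W PSU": 200,
                 "790FX/X48 board": 200}
    nvidia_parts = {"three GTX 280s": 1500, "1200W PSU": 300,
                    "CM Stacker case": 250, "790i Ultra board": 300}
    print("ATI total:    ~$", sum(ati_parts.values()))      # ~1400
    print("Nvidia total: ~$", sum(nvidia_parts.values()))   # ~2350, call it ~2400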
July 14, 2008 5:02:31 PM

ovaltineplease said:
Nah, by the time Nehalem is out we'll have a whole new suite of games to bench on...

Far Cry 2 and STALKER: Clear Sky being the two most anticipated for next-gen graphics.


What? No Warhammer Online: Age of Reckoning?
July 14, 2008 5:06:06 PM

spathotan said:
I'm gonna push it higher once I put my AC7 on it and ditch this faulty Xigmatek. I'm idling at 50C right now...


Faulty Xigmatek? Which one do you have? I can most likely help you.
July 14, 2008 5:14:47 PM

The_Blood_Raven said:
What? No Warhammer Online: Age of Reckoning?



I'm not that into MMOs really, but from what I saw (last year) of Warhammer Online - the graphics didn't seem to be that groundbreaking...

Quote:
Wondering just how fast a processor we would need in order for the CPU not to be the bottleneck in demanding titles?


In titles with next-gen graphics, the GPU will more than likely become the bottleneck again, as there will be more demand on it; for instance, if you take, let's say, a 3-way SLI GTX280 or a quadfire 4870x2 setup and put it at maximum forcible details at 2560*1600 on a 30" display with the highest possible AA, it's reasonable that the GPUs could become the bottlenecking factor even in today's titles. In real-world terms, though, even if you pressure the hell out of a GPU or pair of GPUs, you still need a good processor to back them up.
July 14, 2008 6:04:11 PM

Since GPUs handle (obviously) the making of each frame of the game, and will soon handle physics (once drivers and developers are up to speed), that pretty much leaves the non-GPU calculations, sending the frames off to the monitor, and driver overhead to the CPU. So as time goes on and the GPU gets utilized more, the CPU should be less and less of the bottleneck.

Am I right in thinking that?
July 14, 2008 6:16:55 PM

^^ You are quite right, though the CPU still needs to tell the GPU what to do (while not doing the work itself), which is why the bottleneck will still grow, just at a slower rate, though I can't be sure.

BTW, weren't there benchmarks done recently showing that AMD vs Intel processors didn't make that much of a difference in games (except the ones that were hardcore CPU intensive)? I think there was a thread about it like two weeks ago... don't remember
July 14, 2008 6:17:26 PM

Depends if ATI wants to work with Nvidia and PhysX; otherwise there will be games using Havok (CPU-hosted physics) and games using PhysX (GPU physics). If the GPU takes over the physics chore, who's to say FPS won't go down, since it has to do more work, and who's to say the lower CPU load will increase FPS?

Don't forget, the CPU is responsible for artificial intelligence.
July 14, 2008 6:21:28 PM

Well, AMD/ATI has both Havok and PhysX open to them; I wonder if you could have a game doing both? :p 

Also, are you sure that Havok and PhysX are stuck as CPU and GPU implementations?
July 14, 2008 6:32:17 PM

Just found this
http://en.wikipedia.org/wiki/Havok_%28software%29
"The company was developing a specialized kit called Havok FX that made use of the GPUs in ATI and NVIDIA videocards for physics simulations."

(I wasn't looking for it specifically, I was seeing if the two were better at one thing than the other and if they could both be used at the same time)

*Edit* Wasn't there a post just above mine a second ago?
July 14, 2008 7:33:03 PM

jonyb222 said:
^^ You are quite right, though the CPU still needs to tell the GPU what to do (while not doing the work itself), which is why the bottleneck will still grow, just at a slower rate, though I can't be sure.

BTW, weren't there benchmarks done recently showing that AMD vs Intel processors didn't make that much of a difference in games (except the ones that were hardcore CPU intensive)? I think there was a thread about it like two weeks ago... don't remember



This is somewhat true; it's largely that it doesn't matter what kind of processor you have as long as your GPUs are the limiting factor.

For instance, Crysis with 16x Quality AA on an Nvidia setup is going to be GPU bottlenecked; likewise, at 24xCSAA on an AMD setup it will be bottlenecked.

However, that's about where it ends right now for ultra-high-end hardware. More to the point, if you have, let's say, a single-GPU 8800 GT card, you might see some small improvements between AMD and Intel processors running at the same clock speeds, but it won't be astronomical.

But once you start talking about dual-GPU setups, that's when the CPU bottleneck becomes more of a priority; typically the GPUs will be able to handle a lot between them, but they will get held back by a slow CPU. This is all app dependent, of course.
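
(One way to picture that app-dependence is a toy frame-time model; nothing below comes from a real benchmark, and all the millisecond figures and the scaling factor are assumptions. The frame rate is capped by whichever of the CPU or the GPU pool finishes last.)

    # Toy bottleneck model: fps is limited by the slower of CPU and GPU work.
    def fps(cpu_ms, gpu_ms, n_gpus, scaling=0.8):
        # Each extra GPU cuts per-frame GPU time by an assumed 80% share.
        effective_gpu_ms = gpu_ms / (1 + scaling * (n_gpus - 1))
        return 1000.0 / max(cpu_ms, effective_gpu_ms)

    print(fps(10.0, 20.0, 1))   # 50.0  -- one GPU, GPU-bound
    print(fps(10.0, 20.0, 2))   # ~90.0 -- a second GPU still helps
    print(fps(10.0, 20.0, 3))   # 100.0 -- now CPU-bound; only a faster CPU helps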
July 14, 2008 7:43:12 PM

I think we need to see just how great a bottleneck a CPU can be. Someone should compare results for the following:

low-end vs high-end dual core AMD processor
low-end vs high-end quad core AMD processor
low-end vs high-end dual core Intel processor
low-end vs high-end quad core Intel processor

Give each processor from that list the following setups:

dual 4870x2
GTX 280 Tri-SLI

Then we can see how much dual vs quad core, AMD vs Intel, and low-end vs high-end CPUs will really hold back the uber setups.
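
(Just to spell out the size of that test matrix, a quick sketch; the labels are placeholders:)

    # Enumerate the proposed benchmark matrix: 2x2x2 CPU variants x 2 GPU setups.
    from itertools import product

    vendors = ["AMD", "Intel"]
    cores = ["dual core", "quad core"]
    tiers = ["low-end", "high-end"]
    gpu_setups = ["dual 4870x2", "GTX 280 Tri-SLI"]

    combos = list(product(tiers, vendors, cores, gpu_setups))
    for tier, vendor, core, gpu in combos:
        print(tier, vendor, core, "+", gpu)
    print(len(combos), "runs per game")   # 16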
July 14, 2008 8:04:00 PM

Doesn't DX10.1 reduce the processing the GPU has to do? But the CPU vs GPU debate is very program dependent. Reach for the Stars uses very intensive AI that requires a lot of CPU power... its graphics could be handled even by Intel's integrated graphics... And then there is Crysis...
The problem is balancing the whole thing. Many game developers are really annoyed by the difference between a normal home PC and an enthusiast PC. How do you make a game that can run on a single-core Celeron with a "blindingly fast IGP", and scale it for a quad core and SLI or CF...

Summa summarum... the bottlenecking situation is very much situation dependent.
July 14, 2008 8:43:09 PM

^^ So pretty much your beef is that we're calling it a bottleneck when you think it should be called a limiting factor? (Your post isn't very clear on that subject.)

Aren't a bottleneck and a limiting factor the same thing? It's the component/program that is the slowest and limits the flow of information through the system (like a bottle's neck).

I'm sorry if I misinterpreted your post, but it just doesn't seem to make sense.

*Edit* "but no one will listen" :p  lol, might want to take a look at your sig then
July 14, 2008 8:44:17 PM

Speaking of 4870x2 vs GTX280, has anyone seen this article?

http://www.engadget.com/2008/07/14/lucid-logix-hydra-te...

What are the chances this works as advertised? And if it does, and could really cater to each GPU's strengths, it would be pretty cool to combine different ones.

Does anyone have more scoop on this? Probably just a pipe dream.
July 14, 2008 9:05:44 PM

bdollar said:
Speaking of 4870x2 vs GTX280, has anyone seen this article?

http://www.engadget.com/2008/07/14/lucid-logix-hydra-te...

What are the chances this works as advertised? And if it does, and could really cater to each GPU's strengths, it would be pretty cool to combine different ones.

Does anyone have more scoop on this? Probably just a pipe dream.


Hmmm... it's no easy task, if this is true at all...
What this Hydra is supposed to do is split the rendered area across different graphics cards and then combine these "fragments" somehow... Not an easy task for similar GPUs, and extremely difficult for different kinds of GPUs...
Everybody knows that Nvidia and ATI render the same picture a little bit differently. So we would end up having a puzzle with parts that don't completely fit each other... maybe with some clever blending...

I am expecting more when we see some GPU maker with a solution with a shared frame buffer and identical GPU cores...
July 14, 2008 9:22:20 PM

I'll cross my fingers. It sure would be nice to find a way to get better performance when combining, even if it has to be with like GPUs. Right now what you get when doubling, tripling, etc., as talked about above, is just poor, especially since you pay full price for the additional one. (Somehow I have a feeling we'll never see "prove you have a GTX280 and the next one will only cost you 15% of retail, because that is all the benefit you will get" ads.) :)  :)  :) 
July 14, 2008 11:49:27 PM

ovaltineplease said:
Finally there is a review that will show what the R700 is capable of, in a good light and in a non-CPU-bottlenecked state.

http://enthusiast.hardocp.com/article.html?art=MTUzMSwx...

I direct you specifically to the 24xCSAA with ADAA enabled to show you what this card is truly capable of:

http://enthusiast.hardocp.com/article.html?art=MTUzMSw4...

Age of Conan 2560*1600, 8xADAA - 4870x2 Crossfired:

http://enthusiast.hardocp.com/image.html?image=MTIxNTk3...


The comparison is between the 4870X2 and GTX280 SLI; of course the 280s would win, but consider that you can have four 4870s for the same price as two GTX280s.

1 GTX280 = 2 4850s
July 15, 2008 12:05:46 AM

pogsnet said:
The comparison is between the 4870X2 and GTX280 SLI; of course the 280s would win, but consider that you can have four 4870s for the same price as two GTX280s.

1 GTX280 = 2 4850s



Were you trying to make a point? Because I'm not understanding exactly what you were hinting at.
July 15, 2008 9:58:26 AM

pogsnet said:
The comparison is between the 4870X2 and GTX280 SLI; of course the 280s would win, but consider that you can have four 4870s for the same price as two GTX280s.

1 GTX280 = 2 4850s


If you say so:

http://techreport.com/articles.x/15105/3

The 4870X2 clearly beat 2 GTX280s in SLI in most tests here, except maybe the 25x16 resolution and Crysis, which always favoured NV.
July 15, 2008 1:28:57 PM

Yeah rodney, but that's why I don't buy into BS fluff reviews anymore: again, in that TechReport article they aren't pushing either card nearly hard enough.

1: They are using too low-clocked a processor (3.0 GHz) for EITHER GPU array.

2: They aren't forcing high AA modes to really show what either configuration can do.

That's why HardOCP's article is superior to any of this trash: they are actually pushing the limits of the hardware in extremely intensive scenarios to show what it is truly capable of.

I'm honest to god really surprised people are still using blasted Half-Life 2 benchmarks... who cares. The game runs on anything you throw at it, and it's very CPU limited in their testing.

I didn't make a "custom timedemo", but on GTX260 SLI I can run a Lost Coast timedemo and get 156 avg fps on an E8400 @ 4.05 GHz at 1680*1050, 16xAF, 6xAA, max possible details; their testing shows GTX280 SLI getting 118 fps, which is a total joke and shows that the testing platform was bogus. Furthermore, the 4870 CF should perform a lot higher as well at that resolution and at the higher resolutions.

This is really all beside the point. The bottom line is that the 4870x2 is largely superior to the GTX280 because it has better anti-aliasing hardware, and while the 4870 doesn't really show this as a solo card or in Crossfire, the 4870x2 has a more efficient design that allows it to really flex its anti-aliasing muscle, especially in quadfire mode. But don't use articles like TechReport's, because they don't really show the true story at all, and three-year-old games just don't push these cards enough.
July 15, 2008 1:45:05 PM

Quote:
hardocp, superior, hahaha.

you're kidding, right?

Also, HL2 and Source-based games are very popular, and many people like to know how they perform.

Seriously, I wouldn't trust that site as far as I could throw them.



Fine, but you're just blinding yourself to the issue that all of these previews have, which is making the CPU the limiting factor and not pushing either GPU hard enough. I really don't give a flying crap if people can get 120 fps at 4xAA, 1920*1200 in Half-Life 2 on these GPUs - why? Because you could probably get the exact same FPS with 16x Quality AA on an Nvidia array, or likewise the same FPS at 24xCSAA on the AMD array, because both configurations are held back by the CPU in this old game.

I don't really care if you like HardOCP's articles or not, because they have given unbiased and highly positive reviews of the 4850, the 4870, and now the 4870x2; so if you "don't trust them as far as you could throw them" when they have this much data to back up their claims, then you might as well write off Tomsadvertisingguide, because THG has been doing trash reviews and has shown how they sell themselves out (ibuypower, system builder marathon, tri-sli vs quad sli).
July 15, 2008 1:46:12 PM

Does anyone use Skulltrail for benchmarking purposes? I would think that dual Q9775 processors overclocked to at least 4.0 GHz each should alleviate the whole CPU limitation thing.
July 15, 2008 1:55:35 PM

[H] is OK, but my problem is you have to trust them too much; other than that, they're OK. Looking at TechPowerUp, when the 4xxx series came out, they had several resolutions in the review. BUT at the lowest res, 12x10, they only used 2xAA, which really put these cards in a bad light, and is totally backwards to my thinking, that being: the lower the res, the higher the eye candy. @OTP, remember this: the 3xxx series totally sucked at AA, but the X2 showed it held its own using AA. The 4xxx series shows a better ability with AA than nVidia's cards in most games, thus the X2 of course will own.
July 15, 2008 2:31:10 PM

AMD/ATI's bet on CFX configs for the high end is doomed now.

4, yes FOUR, 4870s in 4870X2 CFX couldn't beat two GTX280s in SLI!

Another failed design, just like the 38xx!