Radeon HD 4850 Vs. GeForce GTS 250: Non-Reference Battle

April 20, 2009 7:02:23 AM

In the second picture of the 4850, the card can be seen bent due to the weight.
Score
0
April 20, 2009 7:44:24 AM

The Gigabyte would be more effective with 2 fans.
Score
0
April 20, 2009 8:02:49 AM

rags_20 said: In the second picture of the 4850, the card can be seen bent due to the weight.


Hi rags_20 -

Actually, the appearance of the card in that picture is caused by barrel or pincushion distortion of the lens used to take the photo. The card itself isn't bent.

/ Tuan
Score
3
April 20, 2009 11:00:51 AM

demonhorde665... try not to triple post.
It looks bad and erratic, and it makes the forums/comments system more cluttered than it needs to be.

P.S. You're not running the same benchmarks as Tom's, so your results aren't really comparable.
Yes, same game and engine, but in Crysis, for example, the frame rates are completely different from the start through to the snowy bit at the end.

P.P.S. Are you comparing your card to their card at the same resolution?
Score
5
April 20, 2009 11:37:34 AM

Hi,

I've been looking for a comparison like this for several weeks. Thank you, although it didn't help me much with my decision. I also missed some comments regarding the PhysX, CUDA, DirectX 10/10.1 and Havok discussion.

I would be very happy to read a review of the Gainward HD4850 Golden Sample "Goes Like Hell" with the faster GDDR5 memory. If it then CLEARLY takes the lead over the GTS 250 and gets even closer to the HD4870, then my decision will be easy. Less heat, less consumption and almost the same performance as a stock 4870. Enough for me.

BTW, the resolutions I'm most interested in: 1440x900 and 1680x1050 for a 20" monitor.

Thank you
Score
2
April 20, 2009 1:55:44 PM

Under the test setup section, the CPU is listed as a Core 2 Duo Q6600; shouldn't it be listed as a quad? Feel free to delete this comment if it is wrong or once you fix the erratum.
Score
-1
April 20, 2009 3:11:36 PM

Why a Q6600/750i setup? That is certainly less than ideal. A Q9550/P45 or 920/X58 would have been a better choice in my opinion (and may have exhibited a greater difference between the cards).
Score
4
April 20, 2009 3:21:37 PM

zipzoomflyhigh said: ...and no, the Q6600 is classified as a C2D. It's two E6600's crammed onto one die.


No, it's classified as a C2Q. The E6600 is classified as a C2D.
Score
6
April 20, 2009 3:34:44 PM

ZZFhigh,

Directly from the article on page 11:
Quote:
Game Benchmarks: Left 4 Dead
Let’s move on to a game where we can crank up the eye candy, even at 1920x1200. At maximum detail, can we see any advantage to either card?

Nothing to see here, though given the results in our original GeForce GTS 250 review, this is likely a result of our Core 2 Quad processor holding back performance.

Clearly this is not an ideal setup for eliminating the processor's influence on the benchmark results of the two cards. Most games are not multithreaded, so the 2.4 GHz clock of the Q6600 will undoubtedly hold back a lot of games, since they will not be able to utilize all four cores.

To all,

Stop triple posting!

Score
3
April 20, 2009 3:36:13 PM

Quote:
The default clock speeds for the Gigabyte GV-N250ZL-1GI are 738 MHz on the GPU, 1,836 MHz on the shaders, and 2,200 MHz on the memory. Once again, these are exactly the same as the reference GeForce GTS 250 speeds.


Later in the article you write,
Quote:
For the sake of argument, let’s say most cards can make it to 800 MHz, which is a 62 MHz overclock. So, for Gigabyte’s claim of a 10% overclocking increase, we’ll say that most GV-N250ZL-1GI cards should be able to get to at least 806.2 MHz on the GPU. Hey, let’s round it up to 807 MHz to keep things clean. Did the GV-N250ZL-1GI beat the spread? It sure did. With absolutely no modifications except to raw clock speeds, our sample GV-N250ZL-1GI made it to 815 MHz rock-solid stable. That’s a 20% increase over an "expected" overclock according to our unscientific calculation.


Your math is wrong. A claim of a 20% overclock on the GV-N250ZL-1GI would equal 885.6 MHz. 10% of 738 MHz = 73.8 MHz, so a 10% overclock would equal 811.8 MHz. 815 MHz is nowhere near 20%. In fact, according to your numbers, the GV-N250ZL-1GI barely lives up to its 10% minimal capability.
Score
3
April 20, 2009 4:53:09 PM

This whole article is completely invalid and the results are skewed because, as was documented on TweakTown, Catalyst 9.3 performance is much lower than 9.2's. Catalyst 9.4 reclaims some of those performance losses, but 9.2 is still a bit better if you compare the two analyses. Redo these tests with the 9.2 drivers.
Score
1
April 20, 2009 5:07:00 PM

weakerthans4 said: Later in the article you write... Your math is wrong. A claim of a 20% overclock on the GV-N250ZL-1GI would equal 885.6 MHz. 10% of 738 MHz = 73.8 MHz, so a 10% overclock would equal 811.8 MHz. 815 MHz is nowhere near 20%. In fact, according to your numbers, the GV-N250ZL-1GI barely lives up to its 10% minimal capability.


No, what he is saying is this: Gigabyte claims that the extra copper in the PCB will allow for a 10%-30% further increase compared to how much a standard card's speed can be raised by overclocking. So, assuming a standard card OCs to 800 MHz, which is a 62 MHz increase, Gigabyte is claiming a 6.2 MHz (10%) to 18.6 MHz (30%) further increase on top of that. So "technically" a 20% better overclock would have put it at about 812.4 MHz, and the 815 MHz he achieved slightly beats that.
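For anyone who wants to check the arithmetic, here is a quick sketch of the two readings of Gigabyte's claim, using the 738 MHz stock clock and the 800 MHz "typical" overclock the article assumes (that 800 MHz baseline is the article's own working assumption, not a measured average):

# Rough sketch of the two ways to read Gigabyte's 10%-30% claim.
stock = 738.0          # reference GTS 250 core clock, MHz
typical_oc = 800.0     # article's assumed "typical" GTS 250 overclock, MHz
achieved = 815.0       # clock reached by the review sample, MHz

# Reading 1: "10% overclock over stock" (weakerthans4's interpretation)
print(stock * 1.10)                                  # 811.8 MHz

# Reading 2: "10%-30% more overclocking headroom than a typical card"
baseline_gain = typical_oc - stock                   # 62 MHz
achieved_gain = achieved - stock                     # 77 MHz
print(stock + baseline_gain * 1.10)                  # 806.2 MHz (the article's 10% figure)
print(stock + baseline_gain * 1.20)                  # 812.4 MHz (a 20% better overclock)
print(100 * (achieved_gain - baseline_gain) / baseline_gain)  # ~24.2% better than the baseline gain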
Score
1
April 20, 2009 5:12:22 PM

Time to ban DemonHorde 665; the abuse of the English language is making all the dead spelling teachers spin in their graves.
Score
17
April 20, 2009 5:12:34 PM

Personally, I think it's the Zalman accounting for the bulk of the 20% extra, not the couple of ounces of copper. That cooler rocks.
Score
3
April 20, 2009 5:44:36 PM

To the reviewer: Good article, but you forgot two things:

The GTS 250 is a 9800GTX+, is a 9800GTX, and is -also- an 8800GTS 512. So this... three-year-old card is still running strong.

Also, Gigabyte's Ultra Durable is for two functions: overclocking and, obviously, durability. Yes, it will overclock better. But it also will probably never stop functioning.

From someone who's gone through numerous motherboards and graphics cards with minimal overclocking on either, that means a lot more than performance.
Score
-5
April 20, 2009 7:06:33 PM

It is known that Nvidia cards tax the CPU less, so if a title is CPU-bound then the Nvidia card will usually come out on top. That's why you see them performing similarly when resolutions increase and you move away from CPU dependency.
Score
-6
April 20, 2009 8:56:18 PM

KyleSTL said: Why a Q6600/750i setup? That is certainly less than ideal. A Q9550/P45 or 920/X58 would have been a better choice in my opinion (and may have exhibited a greater difference between the cards).


It's in the specs but I should have stressed the point: I overclocked the Q6600 to 2.7 GHz, it was plenty quick for these cards.
Score
0
April 20, 2009 8:59:02 PM

Ramar said: To the reviewer: Good article, but you forgot two things: The GTS 250 is a 9800GTX+, is a 9800GTX, and is -also- an 8800GTS 512.


Not exactly. The 8800 GTS at least sported different clock speeds. I also believe it was on a larger die, if memory serves.
Score
0
April 20, 2009 9:00:26 PM

tacoslave said: It is known that Nvidia cards tax the CPU less.


Is it? If so, please provide some proof of that statement as I haven't seen evidence of that.
Score
1
April 20, 2009 9:01:52 PM

weakerthans4 said: Later in the article you write... Your math is wrong. A claim of a 20% overclock on the GV-N250ZL-1GI would equal 885.6 MHz. 10% of 738 MHz = 73.8 MHz, so a 10% overclock would equal 811.8 MHz. 815 MHz is nowhere near 20%. In fact, according to your numbers, the GV-N250ZL-1GI barely lives up to its 10% minimal capability.


You misunderstand Gigabyte's claim. As universalremonster points out, they're claiming a 10% increase in overclocks over other GTS 250s, not claiming that all of their cards will overclock 10% over stock clocks.
Score
0
April 20, 2009 9:43:47 PM

Cleeve said: Not exactly. The 8800 GTS at least sported different clock speeds. I also believe it was on a larger die, if memory serves.


It's the same G92 chip with the same number of processors, albeit before the die shrink. The original 9800GTX was the same process; the GTX+ was the die shrink. My only point was that this is still basically the same chip from three generations [around two years and four months] ago. The fact that it's running at 815 MHz, compared to even the 738 MHz of the die-shrunk part, is impressive and makes my vanilla 9800GTX look pitiful.
Score
0
April 20, 2009 10:44:07 PM

dimaf1985 said: This whole article is completely invalid and the results are skewed because, as was documented on TweakTown, Catalyst 9.3 performance is much lower than 9.2's. Catalyst 9.4 reclaims some of those performance losses, but 9.2 is still a bit better if you compare the two analyses. Redo these tests with the 9.2 drivers.

And while you're at it, take them both off an nVIDIA chipset motherboard and throw them on an AMD one for comparison's sake. It certainly never strikes me as odd when ATI and nVIDIA cards are compared on an nVIDIA motherboard and the nVIDIA card holds at least a slight advantage throughout the test.

Wanna call your tests fair? Use more than one manufacturer's architecture as a testbed. Or is THG still afraid to do a true Dragon Platform series of tests?
Score
0
April 20, 2009 11:24:19 PM

Wish you had included a stock speed 4850, 4870, 250 and 260v2 for reference. My 4870 1GB is barely 20% faster than 4850 1GB.
Score
0
April 21, 2009 12:21:17 AM

RazberyBandit said: Wanna call your tests fair? Use more than one manufacturer's architecture as a testbed. Or is THG still afraid to do a true Dragon Platform series of tests?


Dude, I couldn't disagree more. For fairness, you use the SAME motherboard; you don't pick and choose platforms for each card. If you use a different platform for each test, you're not comparing the graphics cards, you're benching the systems.

I can't recall seeing hard evidence that the 750i chipset slows down Radeon cards; if you have access to that, please share it with me. If not, you're getting into conspiracy theory territory.

Score
0
April 21, 2009 12:57:19 AM

Pei-chen said: Wish you had included a stock speed 4850, 4870, 250 and 260v2 for reference. My 4870 1GB is barely 20% faster than 4850 1GB.


The stock speeds of the cards *ARE* the reference speeds. A 4870 and GTX 260 would have been nice to add, but I didn't have them on hand.
Score
0
Anonymous
April 21, 2009 2:28:34 AM

Cleeve, I think Pei-chen meant to test both cards on both platforms.
Score
0
April 21, 2009 3:40:58 AM

Cleeve said: Dude, I couldn't disagree more. For fairness, you use the SAME motherboard; you don't pick and choose platforms for each card. If you use a different platform for each test, you're not comparing the graphics cards, you're benching the systems. I can't recall seeing hard evidence that the 750i chipset slows down Radeon cards; if you have access to that, please share it with me. If not, you're getting into conspiracy theory territory.


I know you have to use the same motherboard for both/all cards involved in the comparison... I was suggesting you use an AMD board for the test instead of the nVIDIA one, or test both cards on both boards.

The bigger issue here is that THG always seems to use either an Intel or nVIDIA chipset motherboard paired with an Intel CPU in all the testbeds. My question is why? You had the chance to test a complete Dragon Platform when it debuted, but what did you do? You stuck an nVIDIA card in it to bench the system... Now what kind of "Dragon Platform" test is that? Incomplete? Yeah, I'll say. Very incomplete indeed.

Not everyone in this world is using Intel and nVIDIA hardware, and it'd be nice to see, plain as day, that the setups you use do not favor one flavor or the other, by including an identical test on the "other" platform option, even if for no other reason than to prove they don't. The alternative reason: to give people who have the "other guy's" hardware a clearer indication of what to expect these cards to perform like in the actual systems they use.
Score
-1
April 21, 2009 5:13:33 AM

Hmm, I think the GTS 250 cards get a lower core voltage in the BIOS than the 9800 GTX+; if memory serves, when the GTX+ was released it consumed a lot more power than the 4850, but now with the GTS 250 the power usage is about equal.
I can hit 835-850 on the core (depending on the benchmark; some run fine at higher clocks than others) on my GTX+, and review sites get up to 860.
Gigabyte should have used a higher voltage; that cooler is more than enough to handle it.

And on the Q6600 comments:
If you're considering $140 cards like me, chances are you're not going to have the highest-end CPU/motherboard combo.
The Q6600 is more in line with what the people who buy these budget cards use.
Score
1
April 21, 2009 1:59:36 PM

cleeve said:
It's in the specs but I should have stressed the point: I overclocked the Q6600 to 2.7 GHz, it was plenty quick for these cards.

But what about L4D, and how can you be sure a faster CPU wouldn't affect the results in less obvious benchmarks?
cleeve said:
Quote:
To the reviewer: Good article, but you forgot two things: The GTS 250 is a 9800GTX+, is a 9800GTX, and is -also- an 8800GTS 512.
Not exactly. The 8800 GTS at least sported different clock speeds. I also believe it was on a larger die, if memory serves.

And there's the obvious omission that the GTS 250 comes with 1GB standard, instead of the 512MB that was standard on all the other cards.
8800 GTS ----- 650 MHz, 65nm
9800 GTX ----- 675 MHz, 65nm
9800 GTX+ ---- 738 MHz, 55nm
GTS 250 ------ 738 MHz, 55nm
weakerthans4 said:
Your math is wrong. A claim of a 20% overclock on the GV-N250ZL-1GI would equal 885.6 MHz. 10% of 738 MHz = 73.8 MHz, so a 10% overclock would equal 811.8 MHz. 815 MHz is nowhere near 20%. In fact, according to your numbers, the GV-N250ZL-1GI barely lives up to its 10% minimal capability.

Your interpretation is incorrect. He's not claiming a 20% overclock, but that his overclock is 20% better than "average":
'Avg': 62 MHz OC (assumed)
GB card: 77 MHz OC (815 - 738)
77 is 24.2% higher than 62
Score
0
April 21, 2009 4:09:44 PM

RazberyBandit said: I know you have to use the same motherboard for both/all cards involved in the comparison... I was suggesting you use an AMD board for the test instead of the nVIDIA one...


Once again, you're assuming the results would have been different on an AMD chipset. But you still haven't provided any evidence.

There's no AMD-hating going on here, dude. I write the 'Best Graphics Cards for the Money' article as well, and AMD has been dominating it lately. I value price/performance, not a brand flag.

Why did I use the 750i chipset? Two reasons: 1. it doesn't make any difference in performance, and 2. it's the fastest system I had available for testing. My fastest AMD CPU on hand is a Phenom 9550, and I didn't think that'd cut it. I'm working on getting an i7 system that can handle both CrossFire and SLI in the near future.

But all of this is kind of beside the point, because the root of the argument is that you're suggesting the results would be different on a different chipset, and you still haven't provided any evidence of that.

Maybe I should have done a second set of tests in an airplane, because the results might have been different there, too. But without evidence I'm not going to test every possible permutation. That doesn't make sense.
Score
1
April 21, 2009 4:46:12 PM

KyleSTL said: But what about L4D, and how can you be sure a faster CPU wouldn't affect the results in less obvious benchmarks?


Hey Kyle,

I suppose I couldn't be absolutely sure of that without testing every CPU available. And I never claimed that it wouldn't affect the results in less obvious benchmarks.

Having said that, a 2.7 GHz Core 2 Quad is no slouch. I'm satisfied the results are a pretty good representation of what we can expect from these two cards.

I don't like using only the fastest CPUs available to bench stuff, either, because I'd rather err on the realistic side of the spectrum, you know? The worst-case scenario is that an i7 owner gets much better performance. That's better than testing with an i7 and letting folks with slower CPUs assume it'll work the same on their rigs.

But yeah, like I said, I'm moving over to an i7 testbed in the near future. But it'll be a 920, not a 965. :D 

Score
1
April 21, 2009 7:52:58 PM

Hey Cleeve, I like the % score comparisons. Well done. Personally, it looks like the nVidia wins "best overall," but the ATI wins on some important exceptions.

Funny, that.
Score
-1
April 21, 2009 8:38:10 PM

Cleeve,

I'm glad you've thought the process through. Being an engineer, I see so many people taking unscientific approaches to testing and comparing and coming to bad conclusions (you are not one of them). I guess the two schools of thought are:
1) Make the rest of the platform from the absolute best parts available to eliminate any influence slower parts might have, or
2) Make a realistic platform and see what kind of advantage/disadvantage one component might have over another.
You are a great reviewer and I enjoy all your articles. Keep them coming, and congrats and good luck on the new testing platform.
Score
0
April 22, 2009 2:43:03 PM

KyleSTL said: Cleeve, I'm glad you've thought the process through.


Thanks Kyle, coming from you that means something.

It looks like option 1) is being forced down my throat, as the i7 they're sending me is a 3.2 GHz part. Personally, I prefer option 2); it's still a valid way of going about things, just not my personal choice. I prefer a more grassroots, "what can this realistically do for the average Joe" kind of approach.
Score
-1
April 22, 2009 5:41:43 PM

To me, it is a good approach, if not the best.
I don't imagine these cards in expensive i7 gaming systems, unless the owner intends to upgrade these midrange cards once DX11 gets popular.
Score
-1
April 23, 2009 12:13:53 PM

Could you guys do an evaluation of why the 4850 performs so poorly (compared to the GTS 250) in your tests, please? I am quite curious about it.
Score
0
April 23, 2009 12:14:52 PM

Could you guys please do an evaluation of why the 4850 performs so poorly in World in Conflict?
Score
0
April 23, 2009 1:35:28 PM

Yes Nice Job 4850 :)  AMD - ATI Radeon Best Choice
Score
0
April 23, 2009 7:34:56 PM

"...the GV-N250ZL-1GI can’t handle two 30” 2650 x 1600 monitors at the same time (unless one of them has an HDMI input)"

That's not correct - HDMI is only single-link DVI-D combined with audio, so there's no way you can get a 2560 x 1600 video signal out of HDMI, even if you adapt it to DVI-D or have a 30" 2560 x 1600 LCD with an HDMI input.
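As a rough back-of-the-envelope check of the single-link limit nray is describing (the 2720x1646 blanking totals below are an assumed CVT reduced-blanking timing, not figures from the article):

# Rough pixel-clock check for 2560x1600 @ 60 Hz over one TMDS link.
h_total, v_total = 2720, 1646   # assumed CVT reduced-blanking totals for 2560x1600
refresh_hz = 60
pixel_clock_mhz = h_total * v_total * refresh_hz / 1e6
print(round(pixel_clock_mhz, 1))                  # ~268.6 MHz

single_link_limit_mhz = 165                       # single-link DVI / early HDMI TMDS clock limit
dual_link_limit_mhz = 330                         # dual-link DVI
print(pixel_clock_mhz <= single_link_limit_mhz)   # False: won't fit on a single link
print(pixel_clock_mhz <= dual_link_limit_mhz)     # True: needs dual-link DVI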
Score
0
April 23, 2009 8:18:59 PM

nray"HDMI is only single-link DVI-D combined with audio, so there's no way you can get a 2560 x 1600 video signal out of HDMI...


Quite right! In fact, I was surprised to see that there; the stuff in the brackets was edited in after the article left my hands. I've taken it out now, thanks for reporting it.
Score
0
April 24, 2009 7:22:55 AM

The test system was a joke. A Core2Duo Q6600 coupled with an nForce 750?!! How many people are using this crap? DDR2 800 MHz?!! Is this guy living in the Jurassic age? 4 GB of RAM + 32-bit Windows? Oh God, give me strength... A test system should be as fast as it can be so that it won't bottleneck the GPUs, but the author of the article made every effort to make it bizarre... A 1000W PSU!! Do you intend to do everything you shouldn't do?
Score
-2
April 24, 2009 2:37:15 PM

avatar_raq said: The test system was a joke. A Core2Duo Q6600 coupled with an nForce 750?!! How many people are using this crap? DDR2 800 MHz?!! Is this guy living in the Jurassic age? 4 GB of RAM + 32-bit Windows? Oh God, give me strength...


I think you'd find that most folks are running something *less* than a Q6600 at 2.7 GHz. I also think you'll find that this is typical of the kind of system that would be paired with a 4850 or GTS 250.

I'd also like to see some evidence that 32-bit Vista will game slower than 64-bit Vista. Which will be hard, because it really isn't the case. For someone so loudly claiming a huge mistake has been made, I would have assumed you'd have some knowledge to back yourself up.

But instead of rehashing the justifications, I'll let you read through the forum comments. I think you might find they have a lot less sensationalism than yours does while managing to debate the point in an intelligent manner. That is of course assuming you're looking for an intelligent exchange, and aren't just here to blow your horn and run away.
Score
-1
April 24, 2009 4:36:17 PM

avatar_raq said:
The test system was a joke. A Core2Duo Q6600 coupled with an nForce 750?!! How many people are using this crap? DDR2 800 MHz?!! Is this guy living in the Jurassic age? 4 GB of RAM + 32-bit Windows? Oh God, give me strength... A test system should be as fast as it can be so that it won't bottleneck the GPUs, but the author of the article made every effort to make it bizarre... A 1000W PSU!! Do you intend to do everything you shouldn't do?

I'd say Cleeve knows considerably more than you do when it comes to testing hardware. Check out this article and tell me that games will be significantly impacted by a 2.7 GHz C2Q instead of a Core i7.
Score
0
April 25, 2009 12:04:28 AM

Cleeve said: I think you'd find that most folks are running something *less* than a Q6600 at 2.7 GHz.

I apologize if I was rude, but those folks will most likely not use the high resolutions you tested; they most probably have a 17" or 19" monitor with a resolution of 1280x1024 or 1440x-something, thus this article is exactly not for them.
As for the processor, I've read many articles stating that the similarly priced E8400/E8500 would perform -and OC- better than the Q6600 because many games are still not multithreaded. It consumes less power as well. And I do remember the THG guys could easily squeeze 3.4 GHz out of the Q6600 that you OCed from 2.4 to 2.7!! Disappointment at every turn!
Besides, nForce chipsets have little in the way of sales nowadays due to their substandard OC ability and their issues... Why didn't you use a P45, for example? The folks you are talking about will find so many cheap -but good- P45 mobos, and your tests would've been more realistic.
And when you operate a PSU at a small % of its full power you'll lose a good % of its efficiency... I read this in a THG article (GFX power consumption)!
Cleeve said: I'd also like to see some evidence that 32-bit Vista will game slower than 64-bit Vista. Which will be hard, because it really isn't the case.

I agree. But a writer like you shouldn't have put in 4 GB of RAM when he knows the OS won't use or see it all. Might as well use only 2 GB, with no big difference in performance in most titles (except Crysis), and it would've been more "realistic" for the "folks" you pretend to target.
The name of this article is "Radeon HD 4850 Vs. GeForce GTS 250," and I expected you to throw a fast system at them to show the real difference between them, but you threw in whatever you had on your workbench and even failed to do a "realistic" budget system... At least pretend to use the stuff THG recommends for us!!
Anyway, thanks for the effort put into this article and for replying to my posts.
Score
-1
April 25, 2009 2:11:50 AM

avatar_raq said: I apologize if I was rude...


Fair enough. Accepted!

avatar_raq said: ...but those folks will most likely not use the high resolutions you tested; they most probably have a 17" or 19" monitor with a resolution of 1280x1024 or 1440x-something, thus this article is exactly not for them.


I have to disagree on that one. It's been my experience that folks will out-monitor their system more than they'll out-graphics card their systems. Most of my friends have larger monitors than their cards can drive to the best of their ability.


avatar_raq said: As for the processor... Besides, nForce chipsets... Why didn't you use a P45, for example?... And when you operate a PSU at a small % of its full power you'll lose a good % of its efficiency...


Wow! I think you got the notion somehow that I was advocating building a system exactly like this; I never said that. This article has nothing to do with the system. This article is about the 4850 vs. the GTS 250, and the system is a footnote; all it has to do is perform on par with contemporary Core 2 Quads like the Q9400. The chipset/amount of RAM/exact processor/PSU is pretty much irrelevant unless you can prove that it's causing a notable performance decrease compared to a P45/Q9400 or something similar. All that matters is that it performs like a contemporary midrange quad-core system. That's it. Nothing more to see here! PSUs don't provide FPS. :)


avatar_raq said: A writer like you shouldn't have put in 4 GB of RAM when he knows the OS won't use or see it all. Might as well use only 2 GB, with no big difference in performance in most titles (except Crysis), and it would've been more "realistic" for the "folks" you pretend to target.


You're forgetting a few things about 'realism' as it refers to the 'folks' 'I' 'pretend' 'to' 'target': lots of people have 4GB of RAM. Why? Dual-channel kits, lad. Not sure if you've heard of dual-channel RAM, but it's been around for a while, and it's cheap... even if the third GB is useless, the second channel provides a little performance boost. Not much, mind you, but I'm not going to build a single-channel system just to be different from everyone. :D


avatar_raq said: The name of this article is "Radeon HD 4850 Vs. GeForce GTS 250," and I expected you to throw a fast system at them to show the real difference between them, but you threw in whatever you had on your workbench and even failed to do a "realistic" budget system... At least pretend to use the stuff THG recommends for us!! Anyway, thanks for the effort put into this article and for replying to my posts.


Interesting closing remarks, as they betray your assumption that there should be a larger difference between the performance of these cards. However, time and experience have shown that the GTS 250 and 4850 perform very similarly on average; this article was meant to explore whether that held true with these non-reference models, and indeed it seems to.

You've complained about the system, but if that was a problem, how can you explain the differences? There are large variances in performance you can't blame on the system. Marked differences, despite the CPU, which I maintain would perform similarly to a brand-new system. And no crazy-clocked Core i7 is going to change that.

Well, it looks like we're going to have to agree to disagree. We both have our reasons that seem valid to us. And you're quite welcome for the reply; just try to keep the criticism a little more constructive. I invite criticism; it makes me examine my motives and re-evaluate my methods. But posting about how everything sucks doesn't help anyone, and it doesn't promote any useful discussion.

Peace out!
Score
0
April 25, 2009 1:41:13 PM

Cleeve said: Wow! I think you got the notion somehow that I was advocating building a system exactly like this; I never said that.

I know this is not an SBM article, but I'm used to the THG test system having the ideal combination of hardware, or as close as possible. I was surprised by your choices.
Cleeve said: I have to disagree on that one. It's been my experience that folks will out-monitor their system more than they'll out-graphics card their systems. Most of my friends have larger monitors...

Well, it seems my friends are different from yours ;P
Cleeve said: ...but I'm not going to build a single-channel system just to be different from everyone.

Oh my! You made sure you're different from everyone else with every single choice of the test system's hardware! This is what I complained about in the first place, remember? BTW, there still are many dual-channel 2 GB kits out there, but in all honesty the price difference is small!
I admit the CPU's effect on performance is minor in many games, but there are exceptions... An important exception is the Unreal engine -which I found very CPU-bound, and the slightest CPU OC results in a comparable FPS boost- but hey, you didn't include any of those games in the benchmarks...
Cleeve said: ...which I maintain would perform similarly to a brand-new system. And no crazy-clocked Core i7 is going to change that.

Well, the OC you applied to both graphics cards seems to have had little or no effect on the games' FPS. This suggests one of two things: either OCing the GFX cards has no real effect on gaming performance whatsoever (only in synthetics), and thus the whole article about non-reference designs and their OCing potential proves them non-special, or there is a bottleneck somewhere in your system. Or perhaps a combination of both. Which is true? I wonder.
To be 'constructive': next time, please pick hardware that most of us would pick, if for no better reason than to shut the complaints up! :)
Cheers!
Score
-1
April 25, 2009 4:50:30 PM

avatar_raq said: I know this is not an SBM article, but I'm used to the THG test system having the ideal combination of hardware, or as close as possible.


It *is* ideal; it represents a contemporary quad-core system and performs like a Q9400/P45. For all your complaining about it, you still haven't explained how a 2.7 GHz Core 2 Quad paired with a 750i will perform slower than a shiny new Q9400/P45. That's because you know it won't, especially in games when comparing graphics cards. I challenge you to prove otherwise; since it's not possible, I suspect you'll ignore that challenge, despite it being the crux of your argument.


avatar_raq said: You made sure you're different from everyone else with every single choice of the test system's hardware! This is what I complained about in the first place, remember?


So your argument is that the majority of gamers are running a 64-bit OS and something faster than a 2.7 GHz Core 2 Quad, and that 4GB of RAM is unheard of? Dude, I think you are in a little denial here. But instead of throwing opinions at each other, let's look at hard evidence:

http://store.steampowered.com/hwsurvey/

32-bit Windows: 86%
64-bit Windows: 12%

Intel 2.3 to 2.69 GHz CPUs: 23%
2.7 to 2.99 GHz: 9%
3.0 to 3.2 GHz: 10%

2 CPUs: 56%
4 CPUs: 15%

RAM: 2 GB: 37%
3 GB: 24%
4 GB: 7%

So - I used the right OS by a landslide, nailed the clock speed, and overkilled on the RAM and number of CPU cores. The choices look solid to me, son. I'm glad I used a quad-core instead of a dual because games have become core-dependent in the last little while, and I'm pleased to have the 4 GB of RAM in there because it's not going to affect the bottom-line results much at all while making the load times quicker.

In any case, this evidence alone would seem to point out that your original complaint is bogus:
"How many people are using this crap? Is this guy living in the jurassic age?"

Actually, it looks like most people are using even worse 'crap', heheh. But regardless, I'm quite satisfied the system did its job about as well as possible, and the results are about as close to ideal as I could have hoped for.


avatar_raq said: An important exception is the Unreal engine -which I found very CPU-bound, and the slightest CPU OC results in a comparable FPS boost- but hey, you didn't include any of those games in the benchmarks.


Heheh. Seriously? The Unreal engine is a joke as far as benchmarking is concerned, unless you're looking for meaningless 150+ fps benchmarks. I'm trying to supply meaningful performance expectations, not a contest to see who can produce the highest numbers for show.


avatar_raq said: Well, the OC you applied to both graphics cards seems to have had little or no effect on the games' FPS. This suggests one of two things: either OCing the GFX cards has no real effect on gaming performance whatsoever (only in synthetics), and thus the whole article about non-reference designs and their OCing potential proves them non-special, or there is a bottleneck somewhere in your system. Or perhaps a combination of both. Which is true? I wonder.


And as far as overclocking results go, you sound like you should be knowledgeable enough to know that graphics card overclocking rarely accomplishes all that much in a real-world scenario, and that's what I'm demonstrating. Sure, you can get a small boost, but it's almost never going to give enough headroom to run at a higher resolution.

I guess I could have used Unreal to show that the overclock would get 220 fps instead of 180 fps for a sensational 40 fps increase! But to me, that's useless. And if you're looking for useless, trumped-up results, then my reviews aren't for you, lad. :D


avatar_raq said: To be 'constructive': next time, please pick hardware that most of us would pick, if for no better reason than to shut the complaints up! Cheers!


Looks like I did, according to hard evidence as discussed above! With a little overkill on CPU cores and RAM.

Shut the complaints up? How would I get feedback then? Nah, I enjoy well-thought-out complaints. Like I said, it helps me examine my choices and hear what the readers want.

The knee-jerk rude comments though, shutting up those would be nice... but hey, it's the internet. Part of the package. ;) 
Score
0
April 25, 2009 7:27:07 PM

Not a bad article, but it could have been better. The graphs were a little detail-packed and hard to understand; just keep the average of the high, mid, and lowest framerates. And have the Q6600 OCed to at least 3 GHz to keep the cards from being bottlenecked.
Just a thought.
Score
0
April 25, 2009 9:34:57 PM

I include minimum framerates so you aren't misled by the average. It's quite possible to have a 60 fps average framerate which looks great, but it's not worth much if it dips to 2 fps when an enemy fires at you. That's why minimum framerates are important, if the benchmark allows us to capture them.

As for the clock speed, I'd be real surprised if there was more than a 2 fps difference between a 2.7 GHz or 3.0 GHz core 2 quad in the games we used. The video cards will be the limiting factor for the most part, not the platform... and when the platform is the limiting factor, 300 MHz on a CPU ain't going to matter squat.
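To illustrate why the minimum matters, here is a toy sketch with made-up frame times (hypothetical numbers, not data from this review):

# Toy example: a great-looking average FPS can hide a nasty dip.
frame_times_ms = [16.7] * 300 + [500.0]   # hypothetical run: steady ~60 fps, then one 0.5-second stall
fps_per_frame = [1000.0 / t for t in frame_times_ms]

avg_fps = len(frame_times_ms) / (sum(frame_times_ms) / 1000.0)  # frames rendered divided by total seconds
min_fps = min(fps_per_frame)

print(round(avg_fps, 1))   # ~54.6 fps - still looks smooth on paper
print(round(min_fps, 1))   # 2.0 fps at the stall, which is what you actually feel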
Score
0