Half-Life 2 numbers pointless

giant47

Distinguished
Oct 29, 2003
4
0
18,510
Someone please, please, please explain to me the relevance of benchmark numbers revealed in September for a game which is to be released in April. A 7-month gap in a world of 6-month card cycles. Buying a card to ensure a title runs well on your machine 7 months in advance is foolish, to say the least. Both ATI and Nvidia have their new lines slated for January, a full 3 months before Valve is planning to release their masterpiece.
 

cleeve

Illustrious
Probably because HL2 was slated to be released in November at the time.

But the most important reason is the same reason the Doom3 benches were released months ago: people are interested in seeing them.

People want an indication of how their current video cards will perform in the future.

------------------
Radeon 9500 (hardmodded to PRO, o/c to 322/322)
AMD AthlonXP 2400+ (o/c to 2600+ with 143 fsb)
3dMark03: 4055
 

giant47

Distinguished
Oct 29, 2003
4
0
18,510
I guess what I'm really getting at here deals with Valve. When they dropped those numbers they were less than 3 weeks away from announcing even further delays. They knew at Shader Day that the game was not shipping for the holiday season. So the holiday-release argument just doesn't hold water.

I also don't believe for a second that the source code theft is the reason for the further delay to April. No content was stolen, so there is no reason to change the code. By claiming that they have to rewrite half the code, Newell is effectively claiming that the only copy of the source was in his e-mail and was deleted during the theft. It's a scapegoat for their continuing problems developing this game.

So far Valve has spent $68 million developing HL2. I'd say they've been hit hard, and their bundle deal with ATI was a way to bring in revenue before the game's release. Aiding those sales is Tom's Hardware, which has been on an "avoid NVIDIA like the PLAGUE" kick ever since the benchmark announcement.

Then again I could just be horribly wrong.
 
Then again I could just be horribly wrong.
Yeah, probably.
Obviously the talk of performance in Tomb Raider, 3DMark03 PS2.0 and ShaderMark didn't pique your interest either. The HL2 benchies were simply another indication of the FX series having trouble with DX9-standard PS2.0 performance. Also HLSL versus CG, standardized vs. card-specific. Since it's one of the first benchmarks to take advantage of more DX9 aspects, it's a bigger deal than how the FXs perform in Quake 3 or Comanche 4. Also, I doubt we would've gotten Carmack's comment out of him about D]|[ if it weren't for these.

Buying a card to ensure a title runs well on your machine months in advance is foolish to say the least
Only if that's your ONLY motivation for buying one. If you want a card that will play, let's say, Morrowind, which your current card or integrated graphics struggles with, you'd be able to get by pretty well with just an R9000 or FX5200 and get most of the effects. However, if you want to buy a card that will play the games you are anticipating (or one of them), then you would be much better off buying a card that is more adept for the future than an architecture that has shown some issues in that area.

While you may not understand why these numbers were both shocking and an affirmation of what was already known, it definitely doesn't make them pointless.

I'll just leave your conspiracy theories to those willing to endure them.


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font color=green> <font color=red>GREEN</font color=red></A> GA to SK :evil:
 

giant47

Distinguished
Oct 29, 2003
4
0
18,510
Confirmation of what was already known? Read the articles. They are conveniently dated for you. Before Shader Day, Tom's Hardware referred to the 5900 as the fastest card on the market.

Also HLSL versus CG, standardized vs. card-specific. Since it's one of the first benchmarks to take advantage of more DX9 aspects, it's a bigger deal than how the FXs perform in Quake 3 or Comanche 4.
You also seem not to acknowledge DX9 games that show the FX ahead of or neck and neck with the Radeons. Doom 3 and X2 jump to mind. Not to mention Aquamark 3. The Tomb Raider numbers are not extreme.

So, excluding Half-Life 2's numbers since the game is so far off and we all know that by then Nvidia will have their cards running it smoother than Patrick Stewart's head, the only case for poor DX9 performance on FX cards is... Tomb Raider AOD, where ATI's lead is about 30%?

The lead for Nvidia in Doom 3 is 30%.

And I'm not even going to go into OpenGL, where Nvidia beats ATI like a red-headed stepchild.
 

cleeve

Illustrious
A couple of FYIs:

1. Doom3 is not a DirectX 9 game. It is OpenGL.

Carmack has said that the engine does a lot with lower 16-bit precision in the Nvidia-specific codepath, as well.

But using the standard ARB codepath for both ATI and Nvidia cards, ATI cards are faster in Doom3 (there's a rough sketch of this kind of path selection at the end of this post).


2. As for X2, almost every reviewer mentions that the Nvidia cards are faster by the numbers, but that they experience major hiccups during the benchmark that aren't reflected in the scores.
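To make the codepath point in item 1 a bit more concrete, here's a rough C++ sketch of the kind of backend selection an OpenGL engine of this era might do. The enum and function names are made up for illustration, not taken from any real engine; the only real API call here is glGetString.

```cpp
#include <cstring>
#include <GL/gl.h>

// Hypothetical backend names, for illustration only.
enum class FragmentBackend { Arb2, Nv30, FixedFunction };

// Prefer the vendor-specific fragment-program extension if the driver
// exposes it, otherwise fall back to the standard ARB path. Assumes a
// GL context is already current on this thread.
FragmentBackend ChooseFragmentBackend()
{
    const char* ext = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (ext && std::strstr(ext, "GL_NV_fragment_program"))
        return FragmentBackend::Nv30;   // vendor path: free to mix fp16/fp32 register use
    if (ext && std::strstr(ext, "GL_ARB_fragment_program"))
        return FragmentBackend::Arb2;   // standard path: same precision rules on every card
    return FragmentBackend::FixedFunction;
}
```

Only the standard ARB path applies the same precision rules to every card, which is why forcing it on both vendors is the fairer basis for comparison.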

------------------
Radeon 9500 (hardmodded to PRO, o/c to 322/322)
AMD AthlonXP 2400+ (o/c to 2600+ with 143 fsb)
3dMark03: 4055
 

giant47

Distinguished
Oct 29, 2003
4
0
18,510
A couple of FYIs

The X2 "hiccups" are noted in Nvidia's latest driver release notes. They are a bug and do not affect performance.

The 16-bit precision argument always cracks me up, because when Nvidia uses 16-bit precision against ATI's 24-bit, everyone cries foul because the Radeons are rendering at higher quality. But no one has a problem calling it fair when Nvidia is at 32-bit and ATI is at 24-bit, even though the FX cards are then rendering superior images.

Let's also remember that the Nvidia-specific code paths mean a mix of 32-bit and 16-bit, so developers can decide where they require higher or lower precision. It's not a universal drop to 16-bit lower quality as a way of cheating.
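For a rough sense of what those per-operation precision choices mean, here's a small C++ sketch, purely my own illustration of the bit widths rather than anything from either vendor's hardware or drivers. It rounds a value to the mantissa widths of fp16, fp24 and fp32 and prints the rounding error:

```cpp
#include <cmath>
#include <cstdio>

// Round x to a float format with 'storedMantissaBits' stored mantissa bits
// (plus the implicit leading 1). Exponent range is ignored; this only
// compares rounding granularity, it does not emulate real hardware.
double Quantize(double x, int storedMantissaBits)
{
    if (x == 0.0) return 0.0;
    int exp;
    double m = std::frexp(x, &exp);                          // x = m * 2^exp, m in [0.5, 1)
    double scale = std::ldexp(1.0, storedMantissaBits + 1);  // mantissa grid resolution
    return std::ldexp(std::round(m * scale) / scale, exp);
}

int main()
{
    const double x = 0.337;  // an arbitrary color/normal component
    struct { const char* name; int bits; } formats[] = {
        { "fp16 (FX partial precision)",     10 },
        { "fp24 (DX9 full precision, R3x0)", 16 },
        { "fp32 (FX full precision)",        23 },
    };
    for (const auto& f : formats) {
        double q = Quantize(x, f.bits);
        std::printf("%-34s %.9f  (rounding error %.2e)\n", f.name, q, std::fabs(q - x));
    }
    return 0;
}
```

fp16 gives you about 11 significant bits, fp24 about 17 and fp32 about 24; the jump from 11 to 17 bits is the one that crosses the range where rounding can start to show up in an 8-bit framebuffer, which is why where you drop to 16-bit matters so much more than the 24-versus-32 question.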
 

ChipDeath

Splendid
May 16, 2002
4,307
0
22,790
But no one has a problem calling it fair when Nvidia is at 32-bit and ATI is at 24-bit, even though the FX cards are then rendering superior images
I would prefer a game to be <i>playable</i>, yeah? And Carmack and Newell both state it's simply not feasible to run their engines at 32-bit on Nvidia H/W, as the performance is bloody awful.

Why can't you accept that the Radeon is simply a better gamer's option?

It's not a universal drop to 16-bit lower quality as a way of cheating.
If it's too slow when running 32-bit, it drops into 16-bit! Fantastic! So the bits of my game that <i>would</i> look amazing are always done in 16-bit, entirely because they are more complex? And the 'easier to draw' bits, which won't benefit from being drawn in 32-bit (over 24-bit) because of their very simplicity, will use a more power-intensive rendering process than they need? I see! What a great solution.

---
<font color=red>The preceding text is assembled from information stored in an unreliable organic storage medium. As such it may be inaccurate, incomplete, or completely wrong</font color=red> :wink:
 

TKS

Distinguished
Mar 4, 2003
747
0
18,980
Well, considering that the DX9 <b>standard</b> states that it is supposed to be 24-bit or above <b>at all times</b>, I'd cry foul. Of course, Nvidia realized this and asked Microsoft if they could use their mixed-mode bag of tricks.

Standards are in place so that no one dips below them. For instance...I have a standard that women should be 1) Hot 2) Intelligent 3) Really Hot. GeneticWeapon has no standards :p

Anyway, as I was saying, standards are in place so that everyone is on the same page, hence ARB paths in rendering AND DX9 standards. Nvidia dips below standards more than they rise above them. It's kind of like breaking the law... ya know, these things are in place to keep order? But then you've got the one bad apple of the bunch who thinks the law doesn't apply to them... hence the mixed-mode apparatus that is the Nvidia graphics machine. I myself want my graphics card to have awesome image quality <b>on every single frame at any given moment</b>. If I had an FX, that wouldn't be the case... I'd be dipping below the standard a majority of the time, and that's something I don't want. I want a gfx card that complies with graphics standards... not one that bends them like Beckham.

----------
<b>Got any of that beer that has candy floating in it? You know, Skittlebrau? </b> <i>Homer Simpson</i>

TKS
 

cleeve

Illustrious
FYI, you missed the point.

I was pointing out that the FX cards use 16-bit in Doom3 to show you that Doom3 is not a DirectX 9 title, as you professed, but an OpenGL title. Carmack has been the OpenGL crusader since its inception; I don't think he's made a non-OpenGL game since Quake.

Remember that DirectX 9 = 24-bit precision.

And as far as X2 goes, whether or not it's a bug is irrelevant. It's an invalid benchmark until it at least runs properly. The FX cards may very well have excellent framerates in X2 when the driver is fixed, but until then, what good is it as a benchmark?

Or to put it in simpler terms, what twit is going to buy an FX as an upgrade so his system can play X2 before being sure the thing can run it properly?

------------------
Radeon 9500 (hardmodded to PRO, o/c to 322/322)
AMD AthlonXP 2400+ (o/c to 2600+ with 143 fsb)
3dMark03: 4055
 
Confirmation of what was already known? Read the articles. They are conveniently dated for you
Out of pretty much EVERYONE here, I AM the one who reads pretty much all of the articles. Perhaps you should read them. Sure, the REVIEWERS (especially THG) didn't focus on the poor PS2.0 performance and ShaderMark results, but the reality is MOST of us here already knew it. HL2/Shader Day was simply a confirmation that it would prove an issue in ACTUAL games and not just synthetic benchmarks (the excuse of people wishing to try and promote nV).

You also seem not to acknowledge DX9 games that show the FX ahead of or neck and neck with the Radeons
You show me one and then I'd agree. But I doubt you could show me one that involves PS 2.0.

Doom 3 and X2 jump to mind.
Well, first, DOOM ]|[ uses OpenGL for its graphics and DirectX only for AUDIO and other peripheral control (perhaps you should do some more DATED research); and second, that was running the nV-centric path in those benchmarks. In fact, Carmack (you may have heard of him) says:
<i>"Unfortunately, it will probably be representative of most DX9 games. Doom has a custom back end that uses the lower precisions on the GF-FX, but when you run it with standard fragment programs just like ATI, it is a lot slower. The precision doesn't really matter to Doom, but that won't be a reasonable option in future games designed around DX9 level hardware as a minimum spec."</i>
But then again I guess that passed you by as well.
You can see his comment about the different paths for nV and ATI in D]|[ in <A HREF="http://www.gamersdepot.com/hardware/video_cards/ati_vs_nvidia/dx9_desktop/001.htm" target="_new">THIS article</A>, which may also help you understand a little more of the implications.

As for X2 read Lars' comment in HIS review here at THG: <i>'This demo, which includes a benchmark mode, gives you a pretty good feel for what Egosoft's next game "X2 - The Threat" will look like. <b>Although the engine doesn't use any pixel shaders</b>, the graphics are nonetheless impressive.'</i>
So there's your answer right there. Nothing there to stress the shaders, therefore the FXs do as well as they do in any DX7/8 game.

Not to mention Aquamark 3.

Why mention it? It's barely got DX9 components, the FXs lose to the Radeons under normal conditions, and turn on AA/AF and the FXs get slaughtered. The only time the FXs pull a SLIGHT lead is when the benchmark is run without AA/AF, which is NOT the default. Can't play fair yet again.

The Tomb Raider numbers are not extreme.
No, they aren't extreme, but neither is the level of shader effects and such. Turn on LOD and such and you'll see the FX5900U perform worse than the R9600P under the standard path; the FXs need to use either CG or run-time recompiling into CG in order to come close to the Radeons' performance.

.. schtoopid analogy edited... the only case for poor DX9 performance on FX cards is... Tomb Raider AOD, where ATI's lead is about 30%?
That's not the only case, just the only one YOU know about.

The lead for Nvidia in Doom 3 is 30%.


According to what? Oh yeah, that's right, your DATED nV-centric benchmarks; yes, I forgot you're not very current with your references.
Simply put, see above.

And I'm not even going to go into OpenGL, where...
You know even less? Your ignorance above makes me think we should thank you for keeping it to one subject you know nothing about. And when it comes to even OLD OGL games there is barely a difference between the two, and as for newer games, well, I guess you already read what Carmack had to say about one of the most anticipated OGL titles, so I don't see a reason to recap that.
EDIT: And go look at <A HREF="http://www.hardocp.com/article.html?art=NTQwLDY=" target="_new">[H]'s Call of Duty benchmark</A> in today's FX5700 review. Guess the Radeons can handle OGL just fine! Guess nV needs to do some more optimizing for that game too.

The NEW DET 52.16 drivers help the FXs reach nearly the same framerates, but only once the cards are 'optimized', and even then they don't always perform (a drawback of on-the-fly recompiling, IMO); just look at the Max Payne 2 benchies so far.

I think that about covers it. Go and buy/keep your FX card, no one's stopping you; they are pretty good cards in many areas. Just don't try to use your ignorance to convince others that the graphics card world is flat and all cards are created equal.

Do some more research and try to actually READ the articles, and not just look at the pretty pictures.

EDIT: I thought that covered it, but I didn't read on to your other inane rambling (or see others' equal dismissal of your arguments) before replying to your first post, so I'll comment on this other bit of ignorance you spew;

The 16-bit precision argument always cracks me up, because when Nvidia uses 16-bit precision against ATI's 24-bit, everyone cries foul because the Radeons are rendering at higher quality. But no one has a problem calling it fair when Nvidia is at 32-bit and ATI is at 24-bit, even though the FX cards are then rendering superior images.
Beyond the DX9 standard that others mentioned, here's the other reason it matters. MOST people (almost everyone) can see the difference between 16-bit and 24-bit precision, whereas the difference between 32-bit and 24-bit precision is almost imperceptible. While that isn't the case for workstation cards doing DIFFERENT work, in most gaming situations it is. So ATI's solution, while not the optimal image quality, is the perfect balance of IQ and speed. The interesting thing is that even when the nV cards run with 32-bit precision, most reviewers say the ATIs look better. So what benefit is there at all in nV's method? None that I can see. You can harp on it all you want, but unless the IQ is the same (which it isn't when running 16-bit), then there's no comparison. To reach parity the nVs would have to run at 32-bit, and then the framerates suffer greatly; to increase FPS they use partial precision, which causes a lot of missing effects. So you choose which one you want: lesser framerates or lesser quality effects.
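To put some rough numbers behind that, here's a back-of-the-envelope C++ sketch. The assumptions are entirely mine (each dependent shader operation contributes up to half a ULP of relative rounding error, and the errors accumulate linearly), so treat it as an illustration, not a measurement of any real shader:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // One step of an 8-bit framebuffer channel: the smallest difference
    // that can ever show up on screen.
    const double displayStep = 1.0 / 255.0;

    struct { const char* name; int significantBits; } formats[] = {
        { "fp16 (s10e5, FX partial precision)", 11 },
        { "fp24 (s16e7, R3x0 full precision)",  17 },
        { "fp32 (s23e8, FX full precision)",    24 },
    };

    for (const auto& f : formats) {
        // Relative spacing between representable values near 1.0.
        double ulp = std::ldexp(1.0, 1 - f.significantBits);
        // Worst case: half a ULP of error per operation, accumulating linearly.
        double opsUntilVisible = displayStep / (0.5 * ulp);
        std::printf("%-38s ~%.0f dependent ops before the error can reach one display step\n",
                    f.name, opsUntilVisible);
    }
    return 0;
}
```

Under those crude assumptions, fp16 error can reach a visible step after only about eight dependent operations, fp24 takes roughly five hundred, and fp32 tens of thousands. Since a PS2.0 shader is capped at around 64 arithmetic instructions, the 24-versus-32 gap never gets a chance to show up on screen, while the 16-versus-24 gap easily can.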


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font color=green> <font color=red>GREEN</font color=red></A> GA to SK :evil:
 
Just to add some other info, here's what CPU magazine wrote in their R360/9800XT review, in the little box at the bottom right of pg 25;

DX 9 Pixel-Shader Capabilities

<b>ATI R360:</b> Max. operations per pipe = 5, max. floating-point operations per clock = 40, max. operations per clock = 40, pixel shader operations fill rate = 16.5 GOps/sec

<b>nV NV35:</b> Max. operations per pipe = 3, max. floating-point operations per clock = 8, max. operations per clock = 12, pixel shader operations fill rate = 5.4 GOps/sec

Another segment is also interesting;

<i>"The fixed-point advanatge of NVIDIA isn't really DX9 applicable in the real world. NVIDIA's precision is more DX8-like, and image quality could turn out to be a little worse that way. NVIDIA needs to change hardware here."</i>

followed by;

<i>"In the meatime, developers will actually have to do some extra ""rework-ing/optimizing"" to make games such as HL2 and Doom 3 look and run better on NVIDIA hardware."</i>

Sounds pretty much like some of what we are talking about, no? :wink:


- You need a licence to buy a gun, but they'll sell anyone a stamp <i>(or internet account)</i> ! <A HREF="http://www.redgreen.com" target="_new"><font color=green>RED</font color=green> <font color=red>GREEN</font color=red></A> GA to SK :evil:
 

Ion

Distinguished
Feb 18, 2003
379
0
18,780
Typical troll+nvidiot post by someone that can't accept their card is the inferior one on the market. :evil:

PS: Why target HL2 so specifically? D3's scores were released ages ago and I didn't hear you complain.
 

cleeve

Illustrious
Heheh, you're on a roll this week Ape...

------------------
Radeon 9500 (hardmodded to PRO, o/c to 322/322)
AMD AthlonXP 2400+ (o/c to 2600+ with 143 fsb)
3dMark03: 4055
 

phial

Splendid
Oct 29, 2002
6,757
0
25,780
A couple of FYIs

The X2 "hiccups" are noted in Nvidia's latest driver release notes. They are a bug and do not affect performance.

So it isn't working properly. Why are you defending it?
The 16-bit precision argument always cracks me up, because when Nvidia uses 16-bit precision against ATI's 24-bit, everyone cries foul because the Radeons are rendering at higher quality. But no one has a problem calling it fair when Nvidia is at 32-bit and ATI is at 24-bit, even though the FX cards are then rendering superior images.

It's been accepted by just about everyone on every site that 32-bit doesn't offer any IQ improvements over 24-bit, the way ATI does it.

Actually, ATI is 32-bit for most of the path... there are just a few rendering stages that are 24-bit... but like I said, images comparing 24-bit and 32-bit have shown no noticeable difference in precision.

Also, you might as well say that the GFFXs don't have 32-bit because they don't do it fast enough to be usable... hence the 16/12-bit mixed mode.

LOL @ 12-bit

Any real DX9 app is faster on ATI hardware... some games may claim to be DX9 but only run one or two PS2.0 shaders...
-------


<A HREF="http://www.albinoblacksheep.com/flash/you.html" target="_new">please dont click here! </A>
 

GeneticWeapon

Splendid
Jan 13, 2003
5,795
0
25,780
Actually, ATI is 32-bit for most of the path... there are just a few rendering stages that are 24-bit...
Wrong, Phial.

some games may claim to be DX9 but only run one or two PS2.0 shaders...
They may only use a couple of PS2.0 shaders... but those shaders are the ones used the most throughout the game. Aquamark is like that: it only has two or three DX9 shaders, but they get used a lot.

<b>I help because you suck</b>