The real deal w/HL2

September 12, 2003 11:08:31 PM

ATI utilizes an 8x1 pixel shader path, with one path at 8 bits. Nvidia, on the other hand, uses a 4x2 path with two paths each 4 bits wide. Currently, any game using PS 2.0 with the FX cards is only accessing shaders at 4x1, due to driver and DX9b limitations (we will see DX9c soon, mark my words), and so the DX9 games and the 45.23 driver are effectively ignoring the second PS 2.0 path.

The preview 51.75 driver alleviates this problem, enabling the full second path for use in games as much as possible before any update to DX9 is implemented to allow true dual channels as intended by the design.
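
To put some rough numbers on the 8x1 vs 4x2 idea, here's a back-of-the-envelope sketch, reading the figures simply as pipelines times texture units per pipeline, one pixel per pipe per clock. It's purely illustrative (the function and the numbers are made up for the example), not a simulation of either chip:

# Back-of-the-envelope throughput sketch (illustrative only):
# pipelines x texture units per pipeline, and how many texture
# lookups each pixel needs for the effect being drawn.

def pixels_per_clock(pipes, tmus_per_pipe, textures_per_pixel):
    # A pipe outputs at most one pixel per clock; a pixel needing more
    # textures than the pipe has TMUs takes extra loopback clocks.
    clocks_per_pixel = -(-textures_per_pixel // tmus_per_pipe)  # ceiling division
    return pipes / max(1, clocks_per_pixel)

for textures in (1, 2, 4):
    eight_by_one = pixels_per_clock(8, 1, textures)  # "8x1" layout
    four_by_two = pixels_per_clock(4, 2, textures)   # "4x2" layout
    print(f"{textures} texture(s)/pixel: 8x1 -> {eight_by_one:.0f} px/clk, "
          f"4x2 -> {four_by_two:.0f} px/clk")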

We see these HL2 benchmark results now because HL2, in its current form, is seriously dependent on pixel shaders, and that is singly responsible for the framerate discrepancies.

The fix coming with the Det.50 should bring the numbers in line with ATI's, and additionally, the updated DX9c from Microsoft will likely make the FX cards the winner once true dual channel shaders are implemented and dual channel benefits can be accessed.

The next incarnation of DX9 should include the ability to use simultaneous wait states for PS 2.0 textures in DX9 applications. This will greatly reduce the 'problem' shown in these 'benchmarks.' The DX9 SDK was built (without any hardware available, mind you) to favor one long pipe (and thus currently favors the ATI 8x1 design), since each texture has to go through a myriad of callback and wait/check states and there is a definite FIFO for all textures in the pipe; the nV (4x2) pipe is crippled during these operations. With the next version of DX9 you'll see paired texture waits included in the shader process, allowing the nV hardware to actually utilize the 4x2 pipe simultaneously instead of a defined FIFO for each.
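
And here's an equally rough toy sketch of that FIFO point: texture fetches issued strictly one after another versus issued in pairs. Again, the 4-clock latency and the numbers are invented purely for illustration; this isn't a claim about how the DX9 runtime actually schedules anything:

import math

def clocks_strict_fifo(num_textures, fetch_latency=4):
    # each fetch must complete before the next one is issued
    return num_textures * fetch_latency

def clocks_paired(num_textures, fetch_latency=4):
    # two fetches can be in flight at once (the "paired waits" idea)
    return math.ceil(num_textures / 2) * fetch_latency

for n in (2, 4, 8):
    print(f"{n} textures: {clocks_strict_fifo(n)} clocks serial, "
          f"{clocks_paired(n)} clocks paired")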

EDITED for spelling and clarity...


September 12, 2003 11:39:19 PM

let's just hope you're right (sigh)

Proud Owner the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
September 12, 2003 11:51:21 PM

That's the best piece of news I have heard today...
I was thinking of sticking my 5900 Ultra on eBay and getting some ATI action...
It's OK Nvidia, I still love you, you run 3DSMax just fine... :)

............................................
Render times? You'll find me down the pub...
September 13, 2003 12:33:37 AM

the thing is, nVidia still wastes all with its Quadros and nforce2/3s

Proud Owner the Block Heater
120% nVidia Fanboy
PROUD OWNER OF THE GEFORCE FX 5900ULTRA <-- I wish this was me
I'd get a nVidia GeForce FX 5900Ultra... if THEY WOULD CHANGE THAT #()#@ HSF
September 13, 2003 12:52:39 AM

Mobo-wise for the Nforce stuff of course :smile: ;-) :wink:

My OS features preemptive multitasking, a fully interactive command line, & support for 640K of RAM!
September 13, 2003 1:00:45 AM

Hmm, that's an interesting viewpoint, or a possible fact if it's true... The thing is, why didn't nvidia create drivers that would support dual channel in the first place? And it doesn't change the fact that the original DirectX specifications did not list a dual channel configuration. I don't like the fact that Microsoft and game companies have to cater to nvidia because they can't read a specification. If they want to deviate from it, then they should make drivers that can support the deviation from day 1, not 1 year later.
September 13, 2003 3:23:38 AM

You know, if nVidia gets such sweet treatment, ATi deserves it too, since they have not gotten a single damn optimization.

Wow, looks like we have someone Dave will actually debate with! :eek: 

Get yer popcorns out you freaks!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 13, 2003 8:42:41 AM

Quote:
Hmm, that's an interesting viewpoint, or a possible fact if it's true... The thing is, why didn't nvidia create drivers that would support dual channel in the first place? And it doesn't change the fact that the original DirectX specifications did not list a dual channel configuration. I don't like the fact that Microsoft and game companies have to cater to nvidia because they can't read a specification. If they want to deviate from it, then they should make drivers that can support the deviation from day 1, not 1 year later.

I'm not going to defend nvidia's decision to implement a 4x2 architecture over a more standard single channel mode; it seems reasonable to think that their dual channel mobo chipsets were probably the motivation for trying it. But with that in mind, wouldn't it be a sad state of affairs if nobody ever tried to push the envelope with technological advances and only stuck with the tried and true? We would still be using 486s and EGA graphics cards if that were the case.

I think nvidia's driver programmers have a chance of resolving (or at least softening the blow of) this particular shortcoming before HL2 goes to market (or shortly thereafter). We'll see, one way or the other.
September 13, 2003 8:54:06 AM

Quote:
You know, if nVidia gets such sweet treatment, ATi deserves it too, since they have not gotten a single damn optimization.
<snip>

You know as well as I do that that statement is really not true. All hardware manufacturers get help from Microsoft when an already released and widely distributed product ends up with a glitch or bug that can be resolved with a simple driver, OS, or DX update. How many games have ATI cards had problems with that weren't eventually resolved?

And this is one of the main reasons that ATI has traditionally had problems with OpenGL and Linux applications: ATI had to deal with those issues without a big conglomerate corporation like Microsoft to help them out...
September 13, 2003 9:02:55 AM

Quote:
where did you get this information?

Paying real close attention to white-papers and tech notes, you can find out a lot of things. Having a few friends tell you their problems is another :) 


p.s., anandtech is a very good place to read about 'whys' and 'why nots,' and he's much more eloquent as well.
September 13, 2003 9:05:07 AM

I'm not sure if this is true or not, but I'll have to see some proof. If something this big and obvious was holding nVidia back, I'm sure nVidia or someone important (not you) would've stated it by now. We've already seen 50.xx beta drivers in action and performance was not increased much. I have no idea where you got 51.75, as I have heard nothing about them.
We'll see.

"Mice eat cheese." - Modest Mouse

"Every Day is the Right Day." -Pink Floyd
September 13, 2003 9:18:36 AM

Just curious, are you the Balderdash from the HardOCP forum?

Just wondering why you show up here...maybe your posts got bashed badly there :tongue: .
September 13, 2003 9:36:49 AM

There's only one Balderdash...

I go all over the place. Sometimes I get bashed; sometimes I deserve it and sometimes I don't. My words fend for themselves. If they turn out to be right, nobody remembers where they heard it first; if they turn out to be wrong, then they are remembered! :) 
September 13, 2003 9:45:13 AM

Quote:

If something this big and obvious was holding nVidia back I'm sure nVidia or someone important (not you) would've stated it by now

I second that!

I didn't just make this shader business up. It's been out for weeks (if not months), all the way back to the rumors (on http://www.theinquirer.net) that some unidentified nvidia employee admitted to having problems with PS 2.0 back in July. It's the same thing now.

The driver to effectively run the dual 4x2 architecture is still giving them problems today. I'm NOT trying to pretend that it's an easy thing to fix, I'm only saying that it is fixable with the current cards.

It hasn't been a secret, just not something nvidia likes to make public either. And I'm not the first person to reveal it; it's just silly to say that I am.
September 13, 2003 10:01:13 AM

A quick look into this reveals this is the case. The new drivers will fix a problem relating to the 4x2 mode. If you read the statements by nvidia, they drop hints about this problem, but don't exactly say what the problem is.
This will increase HL2 performance and PS 2.0 performance in all cases, and when they have worked on it some more, it should get better....

<A HREF="http://service.futuremark.com/compare?2k1=6988331" target="_new"> 3D-2001 </A>
<A HREF="http://service.futuremark.com/compare?2k3=1284380" target="_new"> 3D-03 </A>
<font color=red> 120% overclocker </font color=red> (cheapskate)
September 13, 2003 10:11:23 AM

Thank you speeduk,

I appreciate the fact that you actually checked into it, thanks again. :D 
September 13, 2003 10:35:06 AM

"ATI utilizes a 8x1 pixel shader path, with one path at 8 bits. Nvidia, on the other hand, uses a 4x2 path with two paths each 4 bits wide. Currently, any game using PS 2.0 with the FX cards is only accessing shaders at 4x1, due to driver and DX9b limitations (we will see DX9c soon, mark my words) and so, the DX9 games and 45:23 driver is effectively ignoring the second PS 2.0 path.

The preview 51:75 driver alleviates this problem, enabling the full second path for use in the game as much as possible before any update to DX9 is implemented to allow true dual channels as intended by its design."



I'm sorry, but (and I'm no expert in the matter) what you said doesn't make much sense. "Dual channel... 4 bits wide"... I mean, I've never heard of that..


Read this article about NVFX and CineFX architecture..
http://www.3dcenter.org/artikel/cinefx/index_e.php

I really don't think your comment made any sense, in regards to the architecture aspects..


And benchmarks with those newer drivers are already floating around, without any noteworthy increase in speed... some increase, but nothing special.
September 13, 2003 10:56:43 AM

Quote:
<snip>
Read this article about NVFX and CineFX architecture..
http://www.3dcenter.org/artikel/cinefx/index_e.php

I really don't think your comment made any sense, in regards to the architecture aspects..

And benchmarks with those newer drivers are already floating around, without any noteworthy increase in speed... some increase, but nothing special.

From the very nice website you linked to (I've read it before, it's well done), I will quote:

Quote:
nVidia mentions at multiple instances that they doubled the floating point performance compared to its predecessor. The "NVIDIA GEFORCE FX 5900 PRODUCT OVERVIEW" also claims that the chip is able to execute 12 instructions per clock. This leads to the conclusion that nVidia added a <b>second FPU that can perform four instructions per clock</b>. Together with the eight texture instructions, this adds up to the claimed 12 operations.

The additional FPU was most probably placed at the combiner stage. The five million additional transistors do not suffice for an FPU of this complexity. nVidia had to remove something to fit the FPU on the chip. Because the new FPU is able to handle the tasks of the integer ALUs, we can assume that those units were removed. Tests with NV35 show only minimal performance losses in PS1.1 to 1.3, so the FPU can perform almost all operations at the same rate as the integer ALUs do.

In some exceptional cases this doesn't hold true and the FPU needs more cycles for the same task. Then there is the question of which data formats the new FPU can handle. We can be sure that the FPU is able to handle fp32 (s23e8) numbers. The mantissa of 23 bits requires the FPU to have 23 bit adders and multipliers. Extending those to 24 bits and allowing them to split yields two 12 bit adders and multipliers. Exactly what is necessary to replace the integer ALUs of NV30. This seems more logical than replacing only one of the integer ALUs because it needs far fewer transistors in total.

The question of whether the FPU can be split into two fp16 units will be left unanswered. Tests have not definitively shown whether performance increases can be attributed to higher calculating power or to the smaller register usage footprint. The marketing department surely would have mentioned it if fp16 allowed for 16 instructions per clock.
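
As a small aside on the fp32 (s23e8) format the article mentions, this little snippet (an arbitrary example of mine, nothing from the article itself) pulls a 32-bit float apart into its 1 sign, 8 exponent and 23 mantissa bits, which is where the 23/24-bit adder and multiplier widths in the quote come from:

import struct

def decompose_fp32(x):
    # Split a 32-bit float into the s23e8 fields the article refers to:
    # 1 sign bit, 8 exponent bits, 23 mantissa bits.
    bits = struct.unpack(">I", struct.pack(">f", x))[0]
    sign = bits >> 31
    exponent = (bits >> 23) & 0xFF
    mantissa = bits & 0x7FFFFF
    return sign, exponent, mantissa

s, e, m = decompose_fp32(-6.25)  # arbitrary example value
print(f"sign={s} exponent={e:08b} mantissa={m:023b}")
# The 23-bit mantissa is what drives the 23/24-bit adder and multiplier
# widths (and the two 12-bit halves) discussed in the quote.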

Also,

Quote:
But nVidia can't expect an application to always deliver such code. At this point we can only preach that nVidia has to put instruction reordering capabilities into their drivers, but without changing the function of the code. The method the current driver uses is nothing more than a stop-gap solution that is acceptable as long as the number of applications containing unfavorable shader code is small. But with a growing number of titles using shader technology, nVidia can't expect GeForceFX users to wait for new drivers to enjoy higher performance through replacement shaders.

And that, sir, is exactly what I was trying to say...
September 13, 2003 11:05:51 AM

Balder, I think it's best to let time do the talking, so to speak. When HL2 is benched with the new Dets and checked for cheats etc., it will show that the performance is more what we expected in the first place. But I don't expect miracles from them either. Probably a 10-20% perf increase at most.

<A HREF="http://service.futuremark.com/compare?2k1=6988331" target="_new"> 3D-2001 </A>
<A HREF="http://service.futuremark.com/compare?2k3=1284380" target="_new"> 3D-03 </A>
<font color=red> 120% overclocker </font color=red> (cheapskate)
September 13, 2003 12:16:23 PM

Thanks for the reply, now I know what you were trying to say... although I still think the dual channel "stuff" you mentioned doesn't make sense..

Still, it's clearer now what you were trying to say.

I think your last quote says it all with respect to what we can expect from new drivers. Let's just hope, for the sake of all nvidia users, they get it to work without "changing the function of the code". I don't think the results will be amazing, because the hardware is inferior (well, different), but they can still give it a boost.
September 13, 2003 12:20:19 PM

I promise not to use the dual channel / PS 2.0 4x2 analogy anymore. I thought it would help people visualize it better... but maybe not :) 
September 13, 2003 12:26:49 PM

Quote:
Balder, I think it's best to let time do the talking, so to speak. When HL2 is benched with the new Dets and checked for cheats etc., it will show that the performance is more what we expected in the first place. But I don't expect miracles from them either. Probably a 10-20% perf increase at most.


That sounds about right. I've seen some results that show a 20% gain in PS 2.0 already, but then they didn't contribute anything to the 3DMark03 GT4 score at all... so go figure?!?

But yes, we will have to wait and see, and that is a point I have been trying to make all along... the Valve presentation and the HL2 benchies now out will not be the last word in regards to the nvidia and ATI cards currently available (not to mention the unknowns of the soon-to-be-released cards)...
September 13, 2003 1:56:37 PM

Nice to know someone still believes in the FX. Thx dude, your kind and gentle words have soothed me. I stopped making the pipe bomb I was going to send to Nvidia and decided to buy a new motherboard and some memory to make me feel better.

BTW, I thought the 4x2 aspect was fishy as well, since in most cases the FX was 50% slower than the R3xx series. I don't personally think my FX will surpass the R3xx's, but it sure makes me feel a lot better about my purchase if I'm only 10% to 15% slower, if this is all true.

-Jeremy

:evil:  <A HREF="http://service.futuremark.com/compare?2k1=6940439" target="_new">Busting Sh@t Up!!!</A> :evil: 
:evil:  <A HREF="http://service.futuremark.com/compare?2k3=1228088" target="_new">Busting More Sh@t Up!!!</A> :evil: 
September 13, 2003 2:03:52 PM

I agree! I have been really happy with my card; image quality and frame rates rule. If this shader information turns out to be true, and the 5900 Ultra ends up with decent FPS, then I will be a very happy dude.
Still, 18090 3DMark2001SE points (with no overclocking!!! :D ) can't be ALL cheats and optimisations, can it?!

............................................
Render times? You'll find me down the pub...
September 13, 2003 3:10:45 PM

Sorry, gonna have to call a big BS on this one.

The fact that the NV3x is a 4x2 architecture is no secret, and neither is the fact that it changes to an 8x1 under certain situations; the problem for your theory is that Nvidia themselves have stated when it changes and when it doesn't. Under the situations where it changes to an 8x1, there is no use of pixel shaders at all, and the entire point of changing to an 8x1 isn't to improve shading power, it's to improve maximum fillrate, because the situations where it changes depend on that the most. This also tells us that the extra 4 pipelines do NOT contain functional shading units, because if they did, why would they not be activated when using shader ops, and why would they not be activated already, or used in OTHER situations where they would definitely help? The solution you are talking about isn't about making the FX a permanent 8x1 architecture; it's about making the shading units dedicated under the 4x2 architecture, getting rid of the need to force the pipeline to wait before the shared shading units can do their other main work, which is texture processing.

In the article you are basing this on, they give no indication that the pipeline arrangement will change at all with the run-time shader compiling from Microsoft's new accessory in the 50.xx drivers, just that the compiler will make the units dedicated.
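
To make the shared-vs-dedicated point concrete, here is a deliberately over-simplified clock-count sketch (one op per unit per clock, invented numbers; it isn't a model of the real NV35 scheduler, just of the general idea that a unit doing double duty serializes the two kinds of work):

def clocks_shared(texture_ops, shader_ops):
    # one shared unit handles both kinds of work, one op per clock,
    # so the two workloads serialize
    return texture_ops + shader_ops

def clocks_dedicated(texture_ops, shader_ops):
    # a dedicated texture unit and a dedicated shader unit run side by side
    return max(texture_ops, shader_ops)

for tex, sh in ((4, 4), (2, 8), (8, 2)):
    print(f"tex={tex} shader={sh}: shared {clocks_shared(tex, sh)} clocks, "
          f"dedicated {clocks_dedicated(tex, sh)} clocks")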
September 13, 2003 3:26:29 PM

If this is the case, then Nvidia lies yet again. <A HREF="http://nvidia.com/object/LO_20030509_7110.html" target="_new">Their 5900</A> product specs page states:
Quote:
Comprehensive Microsoft DirectX 9.0 (and lower) support

Ok, then comprehensive must mean only half the power... at least until they can make vendor-specific code work well in a few games. Then everyone will believe everything is just fine for a while... then BOOM, another bomb is dropped in a new game and no one with an FX will play with good FPS <i>UNTIL NVIDIA COMES OUT WITH A NEW SET OF DRIVERS</i>... then it's OK for a while... then BOOM, another game comes out and, WHAT? Oh yeah, we have to wait for <i>MORE FX DRIVERS</i> to come out so our performance goes up yet again. Meanwhile, ATI cards will perform with no problems whatsoever. Maybe when Nvidia supposedly gets their 4x2 going they'll have to change the statement to "ultra-comprehensive DX9.0 support." If they do that, they can claim they weren't lying again.

Listen to yourselves, folks... you're grasping at straws. Nvidia has made this bed to sleep in... not you all. You are a consumer and can change your mind anytime you want to. Nvidia is going to try and persuade you not to with its 'optimizations' and low performance. If you're not careful, you'll be right in bed beside them with a big $h!t-eating grin on your face and everyone will know what a friggen dumbarse you are. No offense all, but when someone lies to me, I interact with them as little as possible in the future. The same should hold true for your vid card. They'll keep feeding you stories/excuses/lies, and if you keep believing them, they'll stay alive... ONLY YOU can change Nvidia... it's obvious that they can't and won't change themselves. It's time to quit sitting on the TV and watching the couch... pull your head out and use common sense. And if you don't have any... go to eBay and buy some, stat! Snap out of it people... times change for the better... you want to be part of the change or left in the dust?

<font color=blue>other people's lives
seem more interesting
cuz they aint mine
</font color=blue>
<font color=green>Modest Mouse</font color=green>

TKS
September 13, 2003 3:28:33 PM

PS: I don't believe Nvidia is operating at 4x1... in fact, they operate in 8x1 at times. Plus, if you are talking about FPUs, then you are NOT talking about increased framerates... you're talking about DECREASED. The more FPUs, the harder the scene is to render. They'll shoot themselves in the foot if they're doing what you say they are. Show me some undeniable proof in a link and then I'll change my tune.

<font color=blue>other people's lives
seem more interesting
cuz they aint mine
</font color=blue>
<font color=green>Modest Mouse</font color=green>

TKS
September 13, 2003 3:48:07 PM

Well, in theory it's 50% slower than the equivalent R3xx's in most cases running DX9.0 code. It is feasible that this is the case in regards to the core's performance. But then we look back to the DX8.0 and 8.1 stuff, where it's very strong, as well as OpenGL 1.3 stuff. But due to lack of information in regards to the core, it's very hard to say. I honestly don't know a great deal about GPUs and their features; P7s, on the other hand, I do.

But no, you could be right, we could be grasping at straws, but we might not be either. The R3xx's have way more horsepower under the hood, so it's a pretty safe bet the FX's will never be able to overtake them in DX9.0 stuff. But to flat out deny that there is even a slight possibility the core has some more to give isn't justified. That's not to say that later revisions, the nV36 and nV38, won't be able to close the gap, but again, I'm speculating.

Whether he's right or wrong isn't the issue here; it's the theory that there is more to the DX9.0 problem than poor forethought from nVidia.

-Jeremy

:evil:  <A HREF="http://service.futuremark.com/compare?2k1=6940439" target="_new">Busting Sh@t Up!!!</A> :evil: 
:evil:  <A HREF="http://service.futuremark.com/compare?2k3=1228088" target="_new">Busting More Sh@t Up!!!</A> :evil: 
September 13, 2003 4:22:36 PM

I suggest you all give TKS's argument some consideration: every one of US can change nVidia. Don't buy their products until they prove, without a shadow of a doubt, that they have changed for the better. Me? I don't plan on buying nvidia for 2 or more generations of cards. They've just been so dishonest lately that I've completely lost any faith or hope in their line of products for the time being.

My OS features preemptive multitasking, a fully interactive command line, & support for 640K of RAM!
September 13, 2003 5:23:55 PM

For people that didn't understand the 8x1 path vs. the 4x2 path:
Imagine a highway 8 lanes wide and 100 miles long, where only 1 car per lane can travel down each lane, and until each car reaches its destination no other can travel that lane.

Now imagine 2 highways, each 4 lanes wide and 100 miles long, with only 1 car per lane, same principle. It's kind of like the principle of dual DDR: on paper it looks good, but in reality there's not enough support to make a big difference.

I have 3 machines; my cards are a GF4 4200 64MB MSI, a GF4 4400 128MB VisionTek, and an FX 5900 128MB Asus, and if Nvidia does not fix the problem when the time comes, I will switch my cards to ATI for sure. For now my FX runs any game I have at max res and quality just like glass, but if it stops so soon after I bought it, then screw Nvidia. Plus I'm upgrading my 2600 to an XP64 when they come out, so that should help as well.

LIFE IS LIKE A BOLT OF LIGHTNING...UNLESS YOU MAKE AN IMPACT YOU WON'T BE NOTICED
September 13, 2003 9:40:14 PM

One thing we are all forgetting, though, is the performance boost ATi is also promising for the Catalyst 3.8s.
So if the Det 50s allow equal performance, the 3.8s will make 'em rule again. It will probably stop the 9600 PRO from being the big value, sadly.

Spuddy, the only thing I could still say without sounding disrespectful is that, as a fellow Canadian who sees $700 as insanely expensive, paying that much for a card that still ends up 10% behind, when the competition offered theirs for $200 less (money that could've been spent on a new motherboard, and I know you have had nothing but torture with mainboards lately LOL! :wink: ), isn't my kind of economic management. Canadian pricing sucks, and our economy and revenues suck as well. So you better hope the Det 50s make the FX 5900 Ultra worth that $700 tag. Or Futureshop will be grinning a lot that night!

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 13, 2003 11:01:25 PM

Good car analogy, but you're forgetting about depth. With Nvidia's high clock speeds, they can pack the cars together bumper to bumper... or pixels together. ATI has lower clock speeds (the cars are spaced farther apart). So here's my point on this one... more cars = accidents... or in our case... more heat. Nvidia won't be able to crank the clocks up that much anymore... but ATI has room to grow. Makes you think of what would happen if you had a 9800 Pro running at Nvidia clock speeds.

<font color=blue>other people's lives
seem more interesting
cuz they aint mine
</font color=blue>
<font color=green>Modest Mouse</font color=green>

TKS
September 14, 2003 2:42:43 AM

Quote:
Good car analogy, but you're forgetting about depth. With Nvidia's high clock speeds, they can pack the cars together bumper to bumper... or pixels together. ATI has lower clock speeds (the cars are spaced farther apart). So here's my point on this one... more cars = accidents... or in our case... more heat. Nvidia won't be able to crank the clocks up that much anymore... but ATI has room to grow. Makes you think of what would happen if you had a 9800 Pro running at Nvidia clock speeds.

That though was a main part of my point. If you will allow me to continue your traffic analogy, I will explain.

nVidia needs a police department (the police officer directing traffic = the card driver); they need to be able to direct traffic better and more efficiently around bottlenecks. And they could use some help via driver education for the cars (that's what I mean by an MS DX9 revision, something to make the traffic itself more willing to follow direction); this will help make the 'cars' better able to follow directions and avoid foul-ups and traffic jams. :) 
September 14, 2003 3:06:01 AM

We should leave the cars to Kinney & Scamtron :wink:

My OS features preemptive multitasking, a fully interactive command line, & support for 640K of RAM!
September 14, 2003 5:03:28 AM

Not the same deal; this isn't an issue with the game, but with DX9. The only similar circumstance with ATI I can think of was when the original Radeon was made to be DX8.0 compliant, but then MS changed everything with DX8.0b, after the chips were already in production!

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
September 14, 2003 5:20:16 AM

Quote:
original Radeon was made to be DX8.0 compliant,

Okay Crash, you've lost me this time - I'm as confused as a duck. I thought the original Radeon was made to be DirectX 7 compliant. Are you sure you're not talking about the R200?

My OS features preemptive multitasking, a fully interactive command line, & support for 640K of RAM!
September 14, 2003 6:35:23 AM

Eden, why would the 3.8s stop the 9600 Pro from "being the big value"?

Are they meant to increase 9700/9800 scores mainly?

Either way can't wait, I hope they live up to their expectations, unlike the 3.4s!

"Mice eat cheese." - Modest Mouse

"Every Day is the Right Day." -Pink Floyd
September 14, 2003 6:46:45 AM

I was saying that if the 5900 benefits greatly, and overtrumps the 9600PRO, the initial value it had goes down. It can't stay the same as it is now, which is already humongously high.

--
<A HREF="http://www.lochel.com/THGC/album.html" target="_new"><font color=blue><b>Are you ugly and looking into showing your mug? Then the THGC Album is the right place for you!</b></font color=blue></A>
September 14, 2003 7:20:23 AM

My Radeon supports 2 DX8 features not present in DX7. I can't remember what those features are. MS added features to the specification too late for ATI to make design changes.

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
September 14, 2003 7:54:09 AM

So the Radeons theoretically were better technology than the GeForce2 and had partial DX8 compliance. Kind of how the Parhelia is with DX9?

My OS features preemptive multitasking, a fully interactive command line, & support for 640K of RAM!
September 14, 2003 10:05:44 PM

Don't know much about the Parhelia, but I think you're on the right track. I sold my GTS and kept my Radeon because my Radeon performed better in some DX8 games/applications.

<font color=blue>Only a place as big as the internet could be home to a hero as big as Crashman!</font color=blue>
<font color=red>Only a place as big as the internet could be home to an ego as large as Crashman's!</font color=red>
September 15, 2003 5:13:37 AM

Quote:
So the Radeons theoretically were better technology than the GeForce2 and had partial DX8 compliance. Kind of how the Parhelia is with DX9?



Yes, the first Radeon had hardware support for simple pixel and vertex shaders... DX8 compliant, but M$ released 8.1 instead.

The first Radeon was a work of art; it supported many bump mapping functions that the GF2 didn't...


If only ATI had worked out those driver problems, the Radeon series would have been seen for what it really was far sooner than now. The same goes for the 8500, which was plagued by driver problems at its release, but had much more raw power and features compared to the GF3.

-------

<A HREF="http://www.teenirc.net/chat/tomshardware.html" target="_new"> come to THGC chat~ NOW. dont make me get up ffs @#@$</A>
!