I saw the link for GPU-Z and decided to give it a whirl, since CCC in Vista doesn't have Overdrive and can't show the clock speeds.

As you can see, the HD2900Pro is just an underclocked R600 XT (HD2900XT), and my version is about $100 less, which makes it an awesome buy.

The only weird thing I can see is that it shows both the current clock speeds and the default clock speeds, and the current clocks read much lower than they should, even though I'm running an 800 W quad-rail PSU, so it shouldn't be a power issue. Maybe it's the 2D clock, as Sapphire states, but it still seems weird. Still, I was able to play BioShock at 1280x1024 in DX10 with everything maxed out to look pretty, and it runs very smooth. Well, except until my PC decides to restart, which I believe is due to my mobo for some reason.

Anywho, here's the GPU-Z screenshot:
[GPU-Z screenshot]
 

justinmcg67:
Well, I already ordered mine; it'll be here on Wednesday, so I'm going to bench that thing for you all. But I did notice something in the GPU-Z shot of it: my 7900GS has more ROPs and more pixel and texture fillrate than it does. I find that very weird. Here's the side-by-side comparison of them in GPU-Z.

 
That is strange, especially since your pixel and texture fillrates are equal to a G80 (8800 series) GPU's. A G80 has 24 ROPs though, I believe.

If someone has an HD2900Pro/XT, upload GPU-Z screenshots so we can compare the clock speeds. Hopefully they are the same.
 

Gravemind123:
I think the fillrate is determined using only the traditional fixed-function hardware and not the stream processors, or something like that. GPU-Z is still new, so it hasn't had time to be perfected.
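For what it's worth, that matches how these tools usually report it: pixel fillrate is just ROPs times core clock, and texture fillrate is texture units times core clock, with the stream processors not counted at all. A minimal Python sketch of that arithmetic (the unit counts and clock below are illustrative assumptions, not read off either of our cards):

# Fillrate the way GPU-Z-style tools report it: fixed-function units x core clock.
# The ROP/TMU counts and clock here are illustrative, not exact specs.
def fillrates(rops, tmus, core_mhz):
    pixel_gpixel_s = rops * core_mhz / 1000.0    # GPixels/s
    texture_gtexel_s = tmus * core_mhz / 1000.0  # GTexels/s
    return pixel_gpixel_s, texture_gtexel_s

print(fillrates(16, 16, 600))  # (9.6, 9.6) for a hypothetical 16-ROP, 16-TMU card at 600 MHz

So a card can show lower fillrates on paper and still win in games on shader power.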
 
I would believe so too, since I'm sure the HD2900Pro gets much better frame rates than a 7900GS in most games.

I know it averaged 70 FPS in the HL2: Lost Coast benchmark at 1280x1024 with everything maxed and full HDR enabled, whereas my old X850XT would get 40-50 and couldn't support HDR.

The best way to test is to bench your old system in a game versus the new system in the same game. Then you would see the difference.

And I just got out of playing BioShock and looked at GPU-Z: my clocks are at the normal speed (601 MHz GPU / 925 MHz memory, 1850 MHz effective) and my bandwidth almost doubled. It's 118.4 GB/s compared to 65.4 GB/s.
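That bandwidth figure checks out, by the way: memory bandwidth is just the effective transfer rate times the bus width. A quick sanity check in Python, assuming the 512-bit bus that number implies:

# Memory bandwidth = effective transfer rate x bus width in bytes.
bus_width_bits = 512        # assumed HD2900-series memory bus
effective_mts = 925 * 2     # DDR memory moves data twice per clock -> 1850 MT/s
bytes_per_transfer = bus_width_bits / 8

bandwidth_gb_s = effective_mts * bytes_per_transfer / 1000.0
print(bandwidth_gb_s)  # 118.4, matching the GPU-Z reading

Run the same formula backwards on the 65.4 GB/s reading and you get a memory clock of roughly 511 MHz, which fits the 2D-clock explanation from earlier.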

So far this card just purely rocks, and I haven't even tried to OC it yet. I know the 512MB version can OC to the XT's GPU speed of 743 MHz and 900 MHz memory. I wonder what mine would do... maybe 1000 MHz memory?
 

justinmcg67:
Don't just add the amperage of the 12V rails together. The PSU label should list all four rails, and under that it will show the combined wattage available on them; take that number and divide by twelve. For example, on the new 600 W PSU I bought, with two +12V rails at 22 A each, the combined total should be 44 A on the 12V rails, right? Wrong... You take the combined wattage they can deliver, in my case 480 W, and divide that by 12, which gave me 40 A. =) Hope this helped.
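Put as a formula: usable 12V current = combined 12V wattage / 12 V, not the sum of the per-rail amp ratings, because the rails share one wattage budget. A minimal sketch using the numbers from this thread:

# Usable 12V amperage comes from the combined 12V wattage on the PSU label,
# not from adding up each rail's rated amps (the rails share a wattage budget).
def combined_12v_amps(combined_12v_watts):
    return combined_12v_watts / 12.0

print(combined_12v_amps(480))  # 40.0 A for the 600 W PSU example above
print(combined_12v_amps(648))  # 54.0 A for the quad-rail PSU discussed below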
 
My PSU states the amps; that's where I got the combined amperage of 54. But just to make sure, I'll do the calculation.

The total wattage of all four rails is 648 W, and 648 W / 12 V = 54 A.

This is pretty basic stuff. I'm going for a network admin degree, and the community college I attend requires basic-to-intermediate technology classes, so I know all about watts/volts/amps/resistors/capacitors. You name it, I've learned it, and I've also done soldering.

I know it's crazy that a degree for a PC tech/admin needs these classes, but that's what they say.

Thanks for the input anyway; always helpful. And I just tested my PC in a few DX10 games: without OC'ing, I can run BioShock at 1280x1024 maxed and it's smooth as butter, and Lost Planet at 1152x825 (I think) maxed is smooth too. So far I love this card and plan on getting a second one for XFire.