
Radeon 3800 series pixelized.

Last response: in Graphics & Displays
October 21, 2007 8:05:39 PM

Radeon 3800 series



Article from Fudzilla


"Radeon HD 3870 and HD 3850



According to a post at Tomshardware.com, the rumors about RV670 being the Radeon HD 3000 series are correct. Tom's Hardware even posted a picture, and various other web sites reported specifications for the upcoming Radeon HD 3850 and HD 3870 cards, based on the RV670Pro and RV670XT GPUs.

According to the info, the RV670Pro, or HD 3850, will end up clocked at 700MHz for the GPU and 1800MHz for the memory. It will support DirectX 10.1 and PCI Express 2.0. The higher-performance HD 3870, based on the RV670XT, will end up clocked at 825MHz for the GPU and 2400MHz for the memory. According to the picture, the HD 3870 will need a dual-slot cooling solution. Like the HD 3850, it supports DirectX 10.1 and PCI Express 2.0. Both GPUs are made on a 55nm manufacturing process and come with UVD. "





http://www.fudzilla.com/index.php?option=com_content&ta...

October 21, 2007 8:07:45 PM

Hopefully manufacturers will come out with a single-slot RV670XT.
October 21, 2007 8:13:41 PM

There is a single-slot version of the 670XT: it's called the 670Pro.

It's just downclocked.
October 21, 2007 8:21:19 PM

I know that. I was talking about the RV670XT.

This card had better beat the 8800GT. :non: 
October 21, 2007 10:42:24 PM

Well, looking at that, I must say I had much higher hopes. 256-bit memory, that's, uh, not too good. The only things I like are the quad-GPU option, the physics processing capabilities, and, possibly, the die shrink.

This card, to me, seems targeted more at the HTPC market than anything else, similar to the HD2400 and HD2600 series of cards. I had higher hopes.

EDIT: I really hope they use GDDR4 as the standard for these cards as well.
October 21, 2007 11:25:11 PM

I just bought an HD2900, so if this makes the price drop then CrossFire will be mine!
October 21, 2007 11:35:28 PM

The XT will use GDDR4 at 2400MHz.

Seriously though, will 256-bit limit it? 512-bit didn't do the 2900XT any good, and I bet it cost a fortune over 256-bit as well.

In return for what will probably be a TINY percentage decrease at high resolutions that most of us won't notice, it comes 10 or 20 quid cheaper, which is great.



October 21, 2007 11:38:06 PM

256-bit vs. 512-bit is a big difference. It scales with memory speed and type, but assuming all things add up, it can make a noticeable difference. The problem is that NVIDIA and ATI aren't really using the full potential of memory bandwidth and GDDR4, which is very, very sad.
October 22, 2007 12:12:14 AM

There won't be too much of a difference between 256-bit and 512-bit because this is a PCI Express 2.0 card, which means 2x faster data transport.
Also, 256-bit is a good strategy, since a 512-bit bus on a 12-layer PCB costs way more than 256-bit. Hence one of the reasons why the 2900XT is $400 and this card will be around $200-250.
UVD, DX10.1, and SM4.1 will help.
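For reference, the raw per-direction bandwidth of the two slot generations can be worked out from their signaling rates and the 8b/10b line encoding both use. A quick sketch of that arithmetic (Python, purely for illustration):

```python
# Raw PCI Express link bandwidth per direction.
# PCIe 1.x signals at 2.5 GT/s per lane, PCIe 2.0 at 5.0 GT/s; both
# use 8b/10b encoding, so only 8 of every 10 bits carry data.
def pcie_bandwidth_gbs(gen, lanes=16):
    gt_per_s = {1: 2.5, 2: 5.0}[gen]   # gigatransfers/s per lane
    data_gbits = gt_per_s * 8 / 10     # usable gigabits/s per lane
    return data_gbits / 8 * lanes      # gigabytes/s for the whole slot

print(pcie_bandwidth_gbs(1))  # 4.0 GB/s for a PCIe 1.x x16 slot
print(pcie_bandwidth_gbs(2))  # 8.0 GB/s for a PCIe 2.0 x16 slot
```

So a 2.0 x16 slot can indeed move twice the data of a 1.x slot, but only if something upstream actually produces that much data.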
October 22, 2007 12:55:51 AM

aznstriker92, I'm 99% sure you're wrong. 90% of us (if not more) will be using these cards on PCIe 1.0, therefore if you're right, then 90% of the people that buy the thing will likely have a bottleneck, thus shrinking the possible market for the card and cutting into their earnings.
On another note, I'm pretty sure the bus width has absolutely nothing to do with how fast the card can transfer information to the FSB from the GPU. If I have the right understanding of how the bus works... I don't know how to put this into words, so:
Imagine the data is a water stream. It is very wide, let's say 100 feet, then it shrinks to 10 feet, only to widen right back to 100 feet. Where the stream is 10 feet wide there wouldn't be as much water getting through (the current is consistent all the way through), thus preventing water from getting from A to B.
Basically, what I'm saying is that even though PCIe 2.0 has greater bandwidth, it will not speed the 256-bit interface up to 512-bit speeds.
Meaning it will be saturated and hinder performance.
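The water-stream analogy amounts to saying that end-to-end throughput is set by the narrowest stage in the chain. A toy sketch of that idea (Python; the stage names and capacities are made up for illustration, not measured figures):

```python
# Throughput of a pipeline is limited by its narrowest stage, so
# widening one link (e.g. a PCIe 2.0 slot) does nothing if the
# bottleneck sits elsewhere (e.g. the card's memory interface).
def bottleneck(stages):
    """stages: dict mapping stage name -> capacity in GB/s."""
    name = min(stages, key=stages.get)
    return name, stages[name]

# Hypothetical numbers only.
pipeline = {"cpu_feed": 3.0, "pcie_x16": 4.0, "gpu_mem_bus": 60.0}
name, rate = bottleneck(pipeline)
print(name, rate)  # cpu_feed 3.0, so doubling pcie_x16 changes nothing
```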

At this point I'm thinking this is all BS, based on previous rumours all rolled into one big one, with modified pictures. While all the money is in the midrange, nothing can define the midrange if there aren't any upper-range cards, meaning graphics cards will be brought to a standstill.
Something doesn't seem right, and I think this is BS, again.
Can't wait for the next over-hyped card.
James
October 22, 2007 1:24:58 AM

Keep in mind this is the RV670 and not the R600. The reason it probably has a 256-bit bus is that they improved on the faults in the R600 architecture. Don't get me wrong, the R600 is a great card, but it does have problems. The only reason I can see for them going to a smaller bus is that they have solved most of the issues. Heck, the 8800GTX has a 384-bit bus; the bus width doesn't mean everything. I'm curious how the R680 will be. That will definitely be interesting in Q1 2008.
October 22, 2007 1:55:00 AM

Well James, I understand your explanation, but I meant that even though 512-bit is faster than 256-bit, PCI Express 2.0 will help speed things up. Not necessarily to 512-bit speed.
Why is this BS? What do you mean there aren't any upper-range cards? There are plenty, like the GTX and Ultra. If this card is BS, then is the 8800GT BS?

How could I be 99% wrong? I'm just listing information from other sources. PCI Express 2.0 is twice the bandwidth of PCI Express 1.0.
October 22, 2007 2:23:30 AM

Your information is coming from the Inquirer and Fudzilla; rumors are exactly that, rumors. Remember when we heard the HD2900XT was a GTX killer? After that you should all have learned your lesson: it's not really worth speculating till the thing is released.
Who knows, the 8800GT might be BS as well, but considering we have already seen a screenshot from a manufacturer of the card, it's probably true.
We only have a high-end offering from one company; we need it from both, we all know this.
If they are midrange cards then yes, the 256-bit interface won't be a bottleneck; if it's higher end then it could be, but it'd be hard for us to verify.
No, it doesn't mean everything. In fact, the 512-bit bus is more than likely overkill and just another marketing tool. But if it's an upper-range card, say around GTX performance (I'm not saying that it will be), then 256-bit could be a potential bottleneck and a 384-bit (or greater?) interface may be needed.
Again, the bandwidth of PCIe 2.0 or 1.0 is irrelevant to the 256-bit bus.
I personally cannot see faults causing a need for a 512-bit bus.
James
October 22, 2007 2:51:11 AM

When I say PCIe 2.0 is 2x 1.0, I mean that it CAN compensate for the performance loss between 256-bit and 512-bit. I'm not talking about how relevant they are to each other.
My information came from secondary sources (Fudzilla), but they got it from Tom's Hardware. If you look at websites such as GeCube, they already have the 3800 listed in their product lists.
A screenshot of the 8800GT doesn't really mean we have the exact specs, unless NVIDIA officially released them.
Even if we speculated that the 2900XT was a GTX killer, the price difference that was released a bit earlier confirmed that it wasn't.

These cards right now are known as performance cards, not mainstream or midrange.
Some benchmarks show that the 8800GT and RV670 are as powerful as the 8800GTS 640 and 2900XT.
October 22, 2007 3:12:23 AM

- But it can't compensate; that's what I'm saying.
- I never said a picture proves specs.
- It was still speculated as a GTX killer, which is just one example of many that rumors are not always true, and the GTX-killer rumors were released long before the posting of the price. AMD themselves wouldn't have known what to price it at till they re-evaluated the market prior to launch.
- The 256-bit bus is telling me that these cards are not performance cards and will not beat the GTX, but it also depends what the term performance means to you. It may mean something totally different from what I view as performance cards.
James
October 22, 2007 3:14:49 AM

While it is true that PCI-Express 2.0 runs at twice the data transfer rate of PCI-Express 1.0, it can be noted that when comparing cards across the AGP and PCI-Express bus types, there was little to no difference in performance. I do recall a Tom's Hardware article on this some time ago. An X800-series card they tested at the time showed almost no difference between 2x/4x/8x AGP and 16x PCI-Express buses.

With that said, I don't think PCI-Express 2.0 is going to be that significant in closing the gap between 256-bit and 512-bit memory bandwidths. The change will come from a new architecture, not a new bus. At least this is my opinion, which is based on articles about buses and bandwidth.
October 22, 2007 3:16:23 AM

Completely agreed, which is what I'm trying to get across.
James
October 22, 2007 3:20:09 AM

RV670 to get a name at 11th hour
Marchitecture Wars 007 RV670 late decision to decide R700 fate (name)

By Theo Valich: Thursday, 18 October 2007, 2:35 AM


IT SEEMS THAT AMD is just about to overtake Nvidia in the battle of higher numbers. Since Nvidia is so high on extending the life-line of 8800 brand with the G92_200 series being called 8800GT, AMD saw the golden opportunity.

With G92 supporting the DirectX 10.0 API and RV670 supporting DirectX 10.1, the marketing war was set to be quite interesting. On one side, calling a mainstream part that can beat the high-end part 8800GT instead of 8900GT was a safe call for Nvidia, riding the wave of the brilliant success that the 8800 is - but it seems that people like Pat, Captain Hook, Jon, and Ian are pulling things in a different direction.

The RV670 is more than a die-shrink of R600. It fixes a lot of inefficiency issues that ATI faced with a long-delayed child named R600, and now with 55nm process, there was enough room on the die to go large, both with precision of units, data formats, cache sizes and of course, API support.

Not a lot of people know that main target of RV670 is to establish CTM as a viable alternative to Nvidia's Tesla, thus GPGPU and professional 3D were very high on priority list. We already know that R600 variants in FireGL versions are demolishing Quadros (for the very first time in history of professional 3D, ATI has a real contender), so FireGL and FireStream guys are awaiting their RV670 chips with great expectations.

Radeon HD3700/3800 gets ready for a launch...

So, what to do with a product that has a huge challenge instead? Not burn it with a brand name that is somewhat tamed, and that was Radeon HD 2900 series. 2950 was a stillborn from day one, and now the marketing team is deciding between Radeon HD 3600, 3700, 3800. Taiwan just got the nod about HD3000 series, and we're just about to see the new chapter in the whole Marchitecture wars.

Greet Radeon HD3000 PCIe series with its member HD3800... or is HD3800 another deliberately leaked name in order to get leaky suspects?

The name is not decided yet, and don't expect it to be announced to partners up till the point of printing retail boxes, which is still some time ahead (but not a whole lot time left).

The move to HD3000 has to leave enough room for upcoming Q1'08 monster called R680 and of course, the mega-daddy MCM chippey named R700. R680 will be branded as Radeon HD3800 or HD3900, thus leaving very little amount of marketing space for the R700.

Realistically speaking, the only logical move for AMD would be to brand the RV670 "Radeon HD 3700", since this would leave enough room for R680, R700 and of course, R(V)710 and R(V)730, the value variants (they would probably take the usual x400 and x600).

Unless of course, HD3000 series is the final "HDsomething" coming from AMD, with completely new branding that may or may not wait in the halls of Austin and Markham. Radeon 700HD just may not seem all that far fetched, just take a look at the world of AMD chipsets.

One thing is certain: when it comes to the number of sudden turns and unexpected situations, Mexican soap operas might want to take a page from the AMD/ATI/Nvidia book. µ
October 22, 2007 9:35:31 PM

james_8970 said:
- But it can't compensate; that's what I'm saying.
- I never said a picture proves specs.
- It was still speculated as a GTX killer, which is just one example of many that rumors are not always true, and the GTX-killer rumors were released long before the posting of the price. AMD themselves wouldn't have known what to price it at till they re-evaluated the market prior to launch.
- The 256-bit bus is telling me that these cards are not performance cards and will not beat the GTX, but it also depends what the term performance means to you. It may mean something totally different from what I view as performance cards.
James

Of course this card won't beat the GTX. It's designed to be a performance card, not enthusiast. I see performance cards as cards that play the newest games well without breaking the bank, enthusiast cards as the best money can buy, and mainstream as casual gaming and multimedia. I'm sure that even though these cards have a 256-bit bus they will still perform, since, as you and somebody else said earlier, it's the architecture that speeds things up. Some benchmarks that I've seen say that the 8800GT/RV670 is around the 2900XT and 8800GTS, remember. So these cards have to perform or few people will buy them.
October 22, 2007 9:51:41 PM

The 2400/2500/2600/8400/8500/8600s don't really, and people still buy them :p 
If the price is right people will buy anything.
Again, like I said, it all depends on what you view as performance.
With all this being said, didn't Intel announce in the spring that they were going to release some high-end discrete cards for the 2007 Christmas season? I'm pretty sure I remember reading that on Tom's a while back.
James
October 22, 2007 9:58:42 PM

OK James, I am going to tell you what my definition of performance is.
"Performance": the ability to play 3D-accelerated games WELL.
Yes, people buy the mainstream products because most of the time they don't even have a clue how good they are. And lots of people buy from big companies like Dell, who don't inform you about computer parts at all; they just overprice them. But I understand what you mean.
Yes, Intel said that they would release cards in Q1 of 2008. I don't think it was 2007, or there would have been many articles about them by now.
Personally I don't think they will be able to challenge the cards AMD and NVIDIA have right now, but they may be OK. They have sucked really badly in the past, so...
October 22, 2007 10:17:38 PM

With Intel's income, I'm sure they can come up with something good, though I won't be a first-timer with their cards, as I'm like you, wary of bugs they might encounter. But I think they should be a decent contender, at least I hope, so that video cards can improve at a quick rate; the 8800s being king for a year without a refresh is just ridiculous.
Sigh, I never realized it was Q1 of 2008. I was hoping for the beginning of Q4 2007.
James
October 22, 2007 10:55:12 PM

aznstriker92 said:
There won't be too much of a difference b/w 256 bit and 512 bit because this is a PCI express 2.0 card which means 2X faster data transport.


LOL, yeah right. All PCIe 2.0 will do is give you the ABILITY to move more data. You'll still need a fast enough CPU to generate the data. If you have an Athlon 64 X2 4800+ and move it from your old PCIe 1.0 board to your new PCIe 2.0 board, you shouldn't expect your scores to go through the roof. It's just like with SATA: running SATA 300 instead of SATA 150 doesn't give you faster hard drive transfers. The hard drives themselves are still limited to ~60-85MB/s depending on the model. It doesn't matter if you hook it up to an ATA100, SATA 150, SATA 300, or even a mythical GB bus; you will still only be moving 60-85MB/s (whatever the drive mechanics can actually output).

On the same note, doubling the interface to the card doesn't mean the card will run faster. So you now have twice as much data/water hitting the card. Having more data/water hit something isn't going to help at all if the card/pipes have a narrow spot. Take it to the extreme: do you think moving to PCIe 2.0 will help a card with a 64-bit-wide memory bus? If the card can't keep up with PCIe 1.0 because of its slow memory bandwidth, doubling the bus isn't going to help. Moving to PCIe 2.0 at the moment is mostly marketing, just like SATA 300. As quad cores become faster and more powerful, you will then need the bandwidth (or as you start to use 3-card setups like CrossFire with physics, or quad SLI/CF). Doubling the interface to the card isn't going to help the card run any faster, unless the card is so fast that it would exceed the older interface. (I don't think either of these cards has that problem, or else the GTX/Ultra would have it too.)

Don't forget that the "bitness" of the card is only one factor in the equation. Memory speed also plays an important part. The 6800GT has a 256-bit bus but slower-clocked memory than the 7600GT. The 7600GT has a 128-bit bus but can match or beat the 6800GT. 1GHz of memory on a 128-bit bus = 500MHz of memory on a 256-bit bus; they both have the same memory bandwidth. Who cares if these cards have "only" a 256-bit bus? If they can use a thinner PCB (allowing for a cheaper card) and faster-clocked GDDR4, then I say thanks for the cheaper card.
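The bandwidth equality in that last paragraph is just bus width times effective memory clock. A quick check of the arithmetic (Python, purely illustrative):

```python
# Memory bandwidth = bus width in bytes x effective memory clock.
def mem_bandwidth_gbs(bus_bits, effective_mhz):
    # (bits / 8) bytes per transfer, times MHz, gives MB/s; /1000 -> GB/s
    return (bus_bits / 8) * effective_mhz / 1000

# 1GHz effective on a 128-bit bus really does equal 500MHz on 256-bit:
print(mem_bandwidth_gbs(128, 1000))  # 16.0 GB/s
print(mem_bandwidth_gbs(256, 500))   # 16.0 GB/s
```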
October 22, 2007 11:02:45 PM

Ehh, OK, I take back what I said earlier, but the architecture of the card is what really drives the performance. The 6800GT and the 7600GT are totally different generations, so you should see some improvement even with lower specs.
October 22, 2007 11:44:30 PM

Uhhhmmm, why? The 7600GS is also from the newer generation, and it isn't faster than the 6800GT. Neither is the 8500GT, which is two generations newer. The 8600GT is newer than the 7600GT, and they perform about the same. Which generation a card is in has little to do with its performance.

The 6 and 7 series are very similar. The 6600GT and the 7600GT even share the same pinout; if you have a board that runs a 6600GT, in theory you can remove it and run a 7600GT instead. The seven series is a die shrink of the six series at faster speeds. The 6800GT vs. 7600GT example is the one I commonly use to show people that bitness doesn't really matter; it's only one factor in the equation (just like memory bandwidth is only one factor in the equation of which card is faster). You shouldn't slap a 256-bit memory interface with fast memory on a 7300GS; the chip is so weak that it can barely handle the 128-bit interface it is sometimes given.
October 23, 2007 12:21:00 AM

They do share the same pinout.

http://www.dailytech.com/article.aspx?newsid=908

Quote:
Earlier today we reported about NVIDIA's new GeForce 7900GT -- the 90nm die shrink from GeForce 7800GT...NVIDIA deserves some well deserved credit for the foresight to keep GeForce 6600GT and 7600GT pin compatible.


I'm sure Nvidia did more than just shrink the die and bump up the clock speed, but the 6600GT and 7600GT are similar (similar, not exact).

Look back at what I wrote: I never said the 7600GS was faster than the 6800GT, I said the 7600GT was. Striker was the one who said newer-generation cards are faster than older ones. I said if that were true, then the 7600GS would be faster, as would the 8500GT, neither of which is.

While I don't agree with your first two points, the other two are true. The GS is a GT core, just a little slower. Last-gen high end does tend to be about as fast as new-gen midrange, for the most part. That isn't really true for the GF8 series: the last-gen high-end cards (7950GT, 7950GX2, or 7900GTX) are all faster than the "midrange" 8600GT/GTS. (I consider the 8800GTS cards to be low high-end cards, not midrange.)
October 23, 2007 1:09:46 AM

4745454b said:
Uhhhmmm, why? The 7600GS is also from the newer generation, and it isn't faster than the 6800GT. Neither is the 8500GT, which is two generations newer. The 8600GT is newer than the 7600GT, and they perform about the same. Which generation a card is in has little to do with its performance.

The 6 and 7 series are very similar. The 6600GT and the 7600GT even share the same pinout; if you have a board that runs a 6600GT, in theory you can remove it and run a 7600GT instead. The seven series is a die shrink of the six series at faster speeds. The 6800GT vs. 7600GT example is the one I commonly use to show people that bitness doesn't really matter; it's only one factor in the equation (just like memory bandwidth is only one factor in the equation of which card is faster). You shouldn't slap a 256-bit memory interface with fast memory on a 7300GS; the chip is so weak that it can barely handle the 128-bit interface it is sometimes given.

You are right about the 256-bit bus. And I also meant that the architecture is what really speeds up the cards, along with the clocks, shaders, and ROPs. But you can't compare the 6800GT to the 7900 or 7950. My conclusion is that the overall increase in performance depends on the architecture and clock speeds. But the 8600GT is more powerful than the 7600GT.
http://www23.tomshardware.com/graphics_2007.html?modelx...
October 23, 2007 1:12:12 AM

4745454b said:
They do share the same pinout.

http://www.dailytech.com/article.aspx?newsid=908

Quote:
Earlier today we reported about NVIDIA's new GeForce 7900GT -- the 90nm die shrink from GeForce 7800GT...NVIDIA deserves some well deserved credit for the foresight to keep GeForce 6600GT and 7600GT pin compatible.


I'm sure Nvidia did more than just shrink the die and bump up the clock speed, but the 6600GT and 7600GT are similar (similar, not exact).

Look back at what I wrote: I never said the 7600GS was faster than the 6800GT, I said the 7600GT was. Striker was the one who said newer-generation cards are faster than older ones. I said if that were true, then the 7600GS would be faster, as would the 8500GT, neither of which is.

While I don't agree with your first two points, the other two are true. The GS is a GT core, just a little slower. Last-gen high end does tend to be about as fast as new-gen midrange, for the most part. That isn't really true for the GF8 series: the last-gen high-end cards (7950GT, 7950GX2, or 7900GTX) are all faster than the "midrange" 8600GT/GTS. (I consider the 8800GTS cards to be low high-end cards, not midrange.)

Hmm, I never said that the newer gen is always faster. I said that the architecture is what helps make the card faster, obviously. When the RV670 and 8800GT come out, we'll have a midrange and we can compare the 7900s with them :non: 
October 23, 2007 1:29:54 AM

I just hope it runs DX10 games worth a crap. Tim Absath from cad-comic.com has a dual-8800GTX system with an X6800 processor clocked at 2.93GHz and 2GB of low-latency RAM, and he's been reporting horrible frame rates on the latest DX10 game demos.

If the 38xx series from Radeon isn't powerful enough to run this year's killer apps on all "high" settings, then forget about it. I'll just buy something cheaper and run it on low.
October 23, 2007 1:38:19 AM

Quote:
My conclusion is that the overall increase in performance depends on the architecture and clock speeds.


This is correct. I posted because you seemed to be saying that moving to PCIe 2.0 was going to make all our cards run faster, 2x as fast. Looking at the link you provided, I would argue that everything between the X850XT and the X800XL is "equal". Some will be faster here or there, but overall, if you ran any of these cards in identical machines, you wouldn't be able to tell the difference.

I said "overall" because it's possible some of these cards will be faster than others at different settings/resolutions. Look at the Doom 3 results at 1280x1024: the 8600/7600s are "identical" at 44FPS, while the X850XT that is "overall" the same speed is just over 50FPS. I'm sure there are other examples. But thanks for the link; I'll have to remember that the 8600GT is equal to the 7600GT, and the 7600GT isn't the faster card.
October 23, 2007 2:13:50 AM

scryer_360 said:
I just hope it runs DX10 games worth a crap. Tim Absath from cad-comic.com has a dual-8800GTX system with an X6800 processor clocked at 2.93GHz and 2GB of low-latency RAM, and he's been reporting horrible frame rates on the latest DX10 game demos.

If the 38xx series from Radeon isn't powerful enough to run this year's killer apps on all "high" settings, then forget about it. I'll just buy something cheaper and run it on low.

If the GTX can't run things well, then the 3800s probably can't be too much better. But remember that the 3800 and 8800GT will give a lot of performance for the money, so they will probably be a better deal than the 8800GTX.
October 23, 2007 2:15:10 AM

4745454b said:
Quote:
My conclusion is that the overall increase in performance depends on the architecture and clock speeds.


This is correct. I posted because you seemed to be saying that moving to PCIe 2.0 was going to make all our cards run faster, 2x as fast. Looking at the link you provided, I would argue that everything between the X850XT and the X800XL is "equal". Some will be faster here or there, but overall, if you ran any of these cards in identical machines, you wouldn't be able to tell the difference.

I said "overall" because it's possible some of these cards will be faster than others at different settings/resolutions. Look at the Doom 3 results at 1280x1024: the 8600/7600s are "identical" at 44FPS, while the X850XT that is "overall" the same speed is just over 50FPS. I'm sure there are other examples. But thanks for the link; I'll have to remember that the 8600GT is equal to the 7600GT, and the 7600GT isn't the faster card.

LOL, did I say that? My bad. :lol: 
Yeah, I just can't wait for the new cards to come out!!
October 23, 2007 10:12:00 PM

My only complaint: WHY IS IT RED AGAIN AND AGAIN AND AGAIN!!! I know ATI is just always red, but it gets tiring. NVIDIA at least offers different colors... well, usually.

Now my question on PCIe 1.0: is it already being limited by an 8800GTX? If not, how much more performance, relative to the 8800GTX, before it becomes limiting? My guess is 1.5 times the GTX...
October 23, 2007 10:14:32 PM

Huh? What do you mean that PCIe 1.0 is limited?
There are other colors of ATI cards. Look at the HIS cards.
October 23, 2007 11:29:53 PM

I think he is wondering if the GTX is being held back by the bandwidth of PCIe 1.0 on 16x slots.
The answer: nope. I doubt anyone could give you an exact number; maybe an estimate, but it'd be difficult to come up with one.
James
November 27, 2007 10:05:17 AM

Reading this, I'm trying to figure out what the PCI interface and the memory data bus have to do with each other. A 512-bit data bus from GPU to video memory is nice, but I don't see how PCIe 1.0 or 2.0 is going to affect that at all, since it's an internal function.
November 27, 2007 3:43:56 PM

There are two ways a video card can be starved for information. The first is if you have a narrow memory bus (64-bit, for instance). This prevents the GPU from getting information from the memory fast enough, causing the GPU to sit and wait. With the card sitting and waiting for information from the card's memory, no work is being done and the card is slower than it should be.

The other way to slow a card down is to put it on a slow bus. (The following can also be the case if your CPU is "bottlenecking" the GPU.) If the bus is too slow for the GPU, then the GPU is again waiting for information. This can be caused by either a slow bus or a slow CPU. The more the card has to wait, the slower it will be.

In most cases, you don't really need to worry about either of these. As long as you buy at least a midrange chip, the memory bus will be fast enough. AGP 8X is fast enough for most uses, only now becoming too slow for some things. If you have a chip on the PCIe bus, you don't really need to worry about whether it's version 1 or 2.
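The two starvation cases above can be put in one line: the GPU only runs at full speed if every data supplier can keep up with its demand. A hypothetical sketch (Python; the names and numbers are invented for illustration, not real card figures):

```python
# A GPU is starved when the slowest of its data suppliers (the card's
# own memory bus, or the host bus/CPU feeding it) delivers less data
# than the GPU demands.
def starved_by(gpu_demand_gbs, suppliers):
    """suppliers: dict mapping supplier name -> capacity in GB/s."""
    slowest = min(suppliers, key=suppliers.get)
    if suppliers[slowest] >= gpu_demand_gbs:
        return None  # nothing is holding the GPU back
    return slowest

# Hypothetical midrange card: neither supplier is the limit.
print(starved_by(2.5, {"memory bus": 30.0, "host bus/CPU": 4.0}))  # None
# Narrow 64-bit memory bus: now the card's own memory starves it.
print(starved_by(2.5, {"memory bus": 1.5, "host bus/CPU": 4.0}))   # memory bus
```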
November 27, 2007 4:58:00 PM

Number 1, guys: who gives a crap what color the card is? I only see it when I install it or clean any dust from my system. Number 2: the ATIs are running cooler with less power draw. Number 3: the NVIDIAs are already behind by NOT implementing DX10.1 on any new card. Number 4: when games are written for DX10.1, you'll already have the capabilities with the ATI line, not with any NVIDIA; the ATIs will smoke the NVIDIAs on graphics detail then. And as we all know, NVIDIA still has that lower-quality video bleeding, as they always have. Both the NVIDIA G92 and ATI 3870 series are well beyond the first-gen NVIDIA cards in design, power, and cooling. "Sorry Charlie," as the tuna commercial says, and as Dragnet says, "just the facts, ma'am." The ATIs have a much superior cooling system, face it, and are already a generation ahead at a lower price. That crap of faster mini framerate scores at worse video quality doesn't cut it.
November 27, 2007 5:37:29 PM

Because... DX10 is so mainstream right now that we really need to worry about DX10.1, right?? By the time either one of these APIs is mainstream, the current cards from either manufacturer will be old news.
November 28, 2007 3:41:55 AM

trooper1947 said:
Number 1, guys: who gives a crap what color the card is? I only see it when I install it or clean any dust from my system. Number 2: the ATIs are running cooler with less power draw. Number 3: the NVIDIAs are already behind by NOT implementing DX10.1 on any new card. Number 4: when games are written for DX10.1, you'll already have the capabilities with the ATI line, not with any NVIDIA; the ATIs will smoke the NVIDIAs on graphics detail then. And as we all know, NVIDIA still has that lower-quality video bleeding, as they always have. Both the NVIDIA G92 and ATI 3870 series are well beyond the first-gen NVIDIA cards in design, power, and cooling. "Sorry Charlie," as the tuna commercial says, and as Dragnet says, "just the facts, ma'am." The ATIs have a much superior cooling system, face it, and are already a generation ahead at a lower price. That crap of faster mini framerate scores at worse video quality doesn't cut it.


As much as I like new tech in my personal pick of video cards... forget about DX10.1, I want to see a DX10 title at least. Well, one that's any good and fully DX10. I play WiC, but it only has what the Xbox has as far as DX10 goes: some DX10 options, but not full DX10. Also, please don't tell me Crysis is full DX10, because I couldn't care less about that game :/ 
November 28, 2007 4:46:30 AM

Not really, trooper1947. DirectX 10.1 isn't going to be the second coming, and really, given current DirectX 10 performance, there is no reason to think that tessellation is going to give ATI better performance than its rival. PCIe 2.0 won't change a thing performance-wise, as PCIe 1.0 hasn't been saturated yet. And it's not the effing cooling that's superior, it's the die shrink! ATI isn't "a generation ahead", as the market hasn't even begun to use tessellation; when that time comes is when you'll be able to draw conclusions about ATI being ahead of the game. Please stick to facts and stop spreading this crap disinformation to the forum members.

I really wish ATI was ahead of the game, because then maybe they would be able to improve their drivers so you don't need to restart 20 effing times every time you update them.