November 15, 2006 8:52:00 PM

Heh, you ATI fanbois make me laugh... just trying to find anything whatsoever bad to say about the nVidia card.

Who cares if dynamic branches take 0.95 ms on the 8800 instead of 0.5 on the 1950? The 8800 pounds the 1950 into the ground and dances on its grave in overall performance.
November 15, 2006 9:01:49 PM

Quote:
Heh, you ATI fanbois make me laugh... just trying to find anything whatsoever bad to say about the nVidia card.


NIZ, I've seen your posts, and you're the same as the guy above, only you do it for nV.
(I'll edit just to say "maybe", as I haven't noted Raystonn's posts, so maybe I'm mischaracterizing him in relation to you; that may be insulting to him, because we know you shill for sure.)

Quote:
Who cares if dynamic branches take 0.95 ms on the 8800 instead of 0.5 on the 1950? The 8800 pounds the 1950 into the ground and dances on its grave in overall performance.


Actually, re-read his post: the GF8800 doesn't do it in 0.95 ms, that's the X1800. The GF8800 does it in 11 ms, no decimal.

Dynamic branching is actually quite important, but from the review @ behardware:
http://www.behardware.com/articles/644-6/nvidia-geforce...

the impact won't be as detrimental in games, thanks to the very fast speed of the shaders, but it will likely cause issues for things like GPGPU and maybe even physics.
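
For anyone wondering what the fuss is about, here's a minimal CUDA-style sketch of a data-dependent branch (purely illustrative; not code from the behardware review or from anyone in this thread). On hardware that evaluates dynamic branches over large groups of pixels or threads, a group whose threads disagree on the condition ends up executing both sides:

    // Illustrative kernel with a data-dependent branch (hypothetical, for discussion only).
    __global__ void shade(const float* depth, float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i >= n) return;

        if (depth[i] < 0.5f) {
            out[i] = 0.0f;                     // cheap path: most threads would like to early-out here
        } else {
            float acc = 0.0f;                  // expensive path: if even one thread in the
            for (int k = 0; k < 64; ++k)       // group needs it, the whole group pays for it
                acc += __sinf(depth[i] * k);
            out[i] = acc;
        }
    }

The coarser that grouping is, the more often mixed groups pay for both paths, which is why branch-heavy GPGPU or physics code can behave very differently from a typical game shader even when the raw shader speed is very high.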
November 15, 2006 9:03:34 PM

You're right about vector versus scalar, but it's not 128-bit, and the impact is made up for by the speed of the general stream-processing shaders, which run at almost 3 times the speed of the GF7/X1K ALUs.
November 15, 2006 9:20:13 PM

Quote:
The fact is, these fast shaders are taking 11ms to do a single branch...


I understand; I was only pointing to your second post, actually, with regard to calculations, not branching.

I'm not disagreeing with your first part, but the second part is negated by the speed.

Two different items.

Scalar vs. vector is compensated for by speed, but the branch prediction is still a major issue.

i.e. I agree with your first post, but don't think your second matters as much for the equality; but just like the Xbox 360, the unified design doesn't mean 1:1.

Anywhoo, gotta leave work, so I'll have to reply to you after I get my car from Dodge.
November 15, 2006 10:24:42 PM

If it's true, maybe it isn't such a big breakthrough, but I have seen benchmarks in a lot of games, and they show improved image quality and improved framerates, especially at high resolution. From the end-user's point of view, this is what we all want!
Fanboyism or not, the G80 is scoring high, and ultimately this is the card to go with for now, until ATI throws something into the DX10 war.

take care :) 
November 15, 2006 10:43:11 PM

WOW hello stranger. Welcome back.
November 15, 2006 10:47:12 PM

Howdy, I've been an NVIDIA user for as long as they have been around. About three years ago I decided to get into game development, because I saw room for improvement in games in general and wasn't satisfied with the current standards.

I'm glad to see someone else is running into the same results I've been getting with a G80.

I've been searching all over looking for information on dynamic branching with a G80, thinking I must have somehow missed something. At first I was quite excited when they announced improved dynamic branching for the G80, because it would allow me to do so much more; I already have an engine I've worked on that makes use of dynamic branching. I finally got my 8800 GTX recently, and was quite shocked to see how my app was performing after reading the tech briefs on the G80 and looking at the graphs they showed comparing it to ATI's latest DX9 card.

To me, when I think dynamic branching + DX10, I'm seeing new ways of doing things: ways to stop using scanline rasterizers and go to a new technique that doesn't have all the weaknesses of scanline, have virtually unlimited viewing distances, and overall allow deeper levels of interactivity in games. Remove the dynamic branching from that, and it's a HUGE bottleneck to all of these new things I'd like to support; I'd basically be stuck doing things the same way DX9 does, which is far too limiting for my project.
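
(To make that concrete, here's a rough, purely hypothetical CUDA-style sketch of the kind of non-scanline technique I mean - not my actual engine code - a per-pixel ray march whose loop length depends on the scene, so the early exits are dynamic branches:

    // Hypothetical sketch only. Placeholder signed-distance function: a unit sphere at the origin.
    __device__ float sceneDistance(float x, float y, float z)
    {
        return sqrtf(x * x + y * y + z * z) - 1.0f;
    }

    __global__ void rayMarch(float* image, int width, int height)
    {
        int px = blockIdx.x * blockDim.x + threadIdx.x;
        int py = blockIdx.y * blockDim.y + threadIdx.y;
        if (px >= width || py >= height) return;

        // simple orthographic camera: rays start at z = -3 and step toward +z
        float ox = (px - 0.5f * width)  / (0.5f * width);
        float oy = (py - 0.5f * height) / (0.5f * height);
        float t = 0.0f;
        float shade = 0.0f;

        for (int step = 0; step < 128; ++step) {        // data-dependent loop length
            float d = sceneDistance(ox, oy, -3.0f + t);
            if (d < 1e-3f) {                            // dynamic branch: surface hit
                shade = 1.0f - step / 128.0f;
                break;
            }
            t += d;
            if (t > 10.0f) break;                       // dynamic branch: ray escaped
        }
        image[py * width + px] = shade;
    }

If branching is slow or coarse-grained, every group of pixels marches as long as its slowest ray, and the whole approach bogs down.)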

I actually do consider myself a fanboi... an NVIDIA fanboi for a long time now, which is why I hope this information gets out and NVIDIA is made clearly aware, because I want them to remain competitive. What it all comes down to is who can help make projects like mine a reality. If I have no choice, I'm willing to take a look at ATI for the sake of remaining objective.

I'm tired of seeing games that have nearly identical gameplay with a different theme or setting, or just the same game with improved visuals. End, the clone wars must.

-Regards,
CorDox
November 16, 2006 12:04:51 AM

How much do those extra couple of milliseconds matter? What is dynamic branching, or whatever?

Enlightenment, someone, please. I'm too lazy to Wiki it.
-cm
November 16, 2006 12:37:30 AM

Hmmm, that's very interesting.
Thanks for the heads up!
Yeek! Wonder how they'll perform when some of the more complex-shader DX10 games come out.
Hopefully not as bad as you predict, or lots of people are going to be disappointed!
November 16, 2006 1:07:04 AM

Quote:
Heh, you ATI fanbois make me laugh... just trying to find anything whatsoever bad to say about the nVidia card.

Who cares if dynamic branches take 0.95 ms on the 8800 instead of 0.5 on the 1950? The 8800 pounds the 1950 into the ground and dances on its grave in overall performance.


I think the purpose of the post by the "fanboi" was to point out the possibility that future titles will really be hampered by the 8800's hardware. No one is disputing its present-tense domination... they're just questioning its position when ATI has a DX10 card.

I've got an Nvidia in my desktop and an ATI in my laptop... so I'm not sure what you can accuse me of being.
November 16, 2006 1:38:00 AM

If true, I don't think it directly affects the G80 as of now; however, this may be an issue once DirectX 10 is really in use. Good thing I'm waiting until March for my video card upgrade.
November 16, 2006 7:37:24 AM

Quote:
This is interesting, as IMO it adds weight to the "see what R600 has to offer" argument. If it is like the G80 in that its unified shaders are very fast, and it also has the branch prediction abilities of the X1000 series cards (is that possible?), then in future games that make use of complex shaders, will it be a lot more able to maintain performance in DX10 games with high settings?


What I'm wondering is how long into the future we will have to wait for the use of complex shaders, if developers back nVidia.
November 16, 2006 8:08:50 AM

Quote:
I don't think it directly affects the G80 as of now,


Well, it does affect the G80 now, and likely forever (unless some driver could fix what in every way appears to be a hardware limitation). Whether it will matter in the near term or the long term is another story.

It may be like the FX, in that the weaknesses aren't exposed as a 'current' concern in some game until long after people have moved on to the G90/R700, but who knows for sure at this point.

One thing it would affect now is the G80's potential usefulness for things like Folding@Home.
November 16, 2006 8:32:57 AM

Might as well wait for the R600 to compare, in that case. However, I do want to play FEAR with top eye candy during the uni summer break. Might get a GTS.
November 16, 2006 10:19:47 AM

Incredible fanboy BS. Many investigations of shader performance show the G80 as superior. Not even worth a reply, except for flame-bait extinguishing.
November 16, 2006 12:14:56 PM

Quote:
Might as well wait for the R600 to compare, in that case. However, I do want to play FEAR with top eye candy during the uni summer break. Might get a GTS.

Or Crysis :) 
November 16, 2006 12:39:15 PM

Quote:
For our project, the X1950XTX is about 1,000 times faster for the current scene complexity. As the scene complexity increases (which it definitely will), projections are that the G80 will be outperformed by a factor of approximately 1,000,000. We use lots of dynamic branching to manage large sets of data in the shader. The G80 just isn't up to the task. I'm sure the R600 will only make this worse.

Performance in your DX9 games is nice, but you should be looking to the future.

-Raystonn


>> X1950XTX is about 1,000 times faster
>> the G80 will be outperformed by a factor of approximately 1,000,000

Dude, you're just sounding ridiculous when you imply that an X1950XTX is ever gonna be faster than a G80 in overall performance, let alone by the ridiculous thousand or million times you state.

>> DX9 games is nice, but you should be looking to the future.
Yeah, it's a shame the G80 doesn't do DX10. Maybe I should buy the currently available ATI/AMD card instead </sarcasm>
November 16, 2006 1:03:18 PM

Quote:
but, I have seen benchmarks in a lot of games, and they show improved image quality and improved framerates, especially at high resolution. From the end-user's point of view, this is what we all want!


The problem with these benchmarks is that they are running DX9 games, hence using SM3.0, so it is a lot faster, which is still good.

What I, and probably most other people, am interested in is how well these cards will perform in SM4.0. The OP mentioned the branching is roughly 20x slower (11 ms versus the 0.5 ms quoted above), which could have a major impact when writing complex shaders.
November 16, 2006 1:09:16 PM

Quote:
For our project, the X1950XTX is about 1,000 times faster for the current scene complexity. As the scene complexity increases (which it definitely will), projections are that the G80 will be outperformed by a factor of approximately 1,000,000. We use lots of dynamic branching to manage large sets of data in the shader. The G80 just isn't up to the task. I'm sure the R600 will only make this worse.

Performance in your DX9 games is nice, but you should be looking to the future.

-Raystonn


>> X1950XTX is about 1,000 times faster
>> the G80 will be outperformed by a factor of approximately 1,000,000

Dude, you're just sounding ridiculous when you imply that an X1950XTX is ever gonna be faster than a G80 in overall performance, let alone by the ridiculous thousand or million times you state.

>> DX9 games is nice, but you should be looking to the future.
Yeah, it's a shame the G80 doesn't do DX10. Maybe I should buy the currently available ATI/AMD card instead </sarcasm>
LOL
November 16, 2006 1:14:02 PM

Try reading a professional analysis.

http://www.digit-life.com/articles2/...g80-part2.html


Quote:
That's another proof of the evident fact - G80 architecture is the architecture of the future. The harder a task, the more flexible shaders, the better this chip performs, breaking further away from competitors of the previous generation. As usual, branching is a weak spot of ATI's vertex unit. Let's hope that R600 will be a truly unified chip in this respect and the situation will change. As in case of G7X, G80 prefers dynamic branches to static ones.

Conclusions on geometry tests: G80 is an evident leader. Burdened with no SLI overheads and capable of directing all its 128 ALUs (operating at doubled frequency) to solve geometric tasks, this chip demonstrates excellent flexibility of the unified architecture and excellent capacities for working with complex dynamic code of vertex shaders. More than two-fold advantage - bravo! Let's see what awaits us in real applications. And we are looking forward to the release of DX10 that will help reveal the full potential of this chip.



Quote:
Aha, here is food for thought. Firstly, the unified architecture of G80 DOES NOT depend on precision of calculations and storage of intermediate results. At last you don't have to save on quality - you can always use 32-bit floating point calculations that guarantee excellent results without any rounding artifacts. Like in the case of ATI, the results are absolutely identical for any precision. Besides, GX2 slightly outperforms G80 in the texturing-dependent Water test (48 versus 32 units, and the total of 512-bit buses in SLI mode versus 384), so we can even speak of parity. But G80 takes the lead in a more computation-intensive lighting test. Excellent computational capacity, ALU, ALU, and again ALU :-).



Quote:
As we can see, G80 is always victorious (it's especially good at Frozen-Glass). GX2 noticeably lags behind due to SLI overheads and less flexible architecture. G80 performance does not depend on precision again. Now the same tests modified for texture sampling:

G80's advantage is less pronounced here, including absolute results. This chip certainly likes computations more than texturing. 32 texture units are necessary here. If the bus could be wider, there might have been 48 of them. But now GX2 leads in some tests. Too much depends on context and developers' preferences here. In order to reveal full potential of the G80, they will evidently have to choose (create) computation-intensive variants of their algorithms - in this case G80 will be able to gain 50% of performance.

And now the most flexible test - PS3. The test contains intensive dynamic branches in pixel shaders:

We have no doubts as to what architecture is the most advanced now and the best at working with dynamic branches in pixel shaders. It's G80. The second place is taken by unified RADEON, followed by GX2 (even SLI is of little help here for the old non-unified architecture).

Conclusions: Out of doubt, G80 is a new powerful computational architecture, well suited for executing the most complex pixel shaders. The more complex a task, the more computations it has, the larger is the gap between G80 and its competitors. In some cases programmers can get noticeable performance gains by optimizing their algorithms for computations instead of texture sampling. We can predict that there are some games, where the chip can gain advantage due to its 48 texture units and 512-bit memory bus. But the company makes a compromise here - it chooses flexibility and computational capacities for future applications.

G80 is the model platform for shaders with dynamic branching. We'll see what DX10 will bring us. We'll also see how it will change the layout of forces in real applications, especially in modern and outdated ones.

The last quote was for a shader with dynamic branching, and it shows where the G70s fail compared to the X1950XTX, while the G80 outperforms all the other cards.
November 16, 2006 1:23:17 PM

I understand what is being said. Everything sounds reasonable as to the simple nature of the shaders employed by the G80. But isn't that what most game developers are using? They have not taken advantage of the more complex shaders of the X1900 series of cards, because if they had, I think we would have already seen the X1900 series dominating everything. But that is not the case, is it? Once developers start utilizing complex shaders, then we will see the G80 fall on its face. But I think by the time that actually happens, it will be 2008, and Nvidia will have already realized they need to utilize a different technology. Plus, do you think ATI cards are going to be developed after next year? AMD wants nothing more than to be like Intel, and last I remember Intel does not make performance graphics cards. AMD has already axed the All-in-Wonder line of cards. So since there will be no more ATI in the coming years, I think game developers will have to utilize the technology that Nvidia wants to employ. Oh, by the way, I think this link has a video of Crysis being presented on the G80 line of cards. Looks to be running in DirectX 10 pretty well... don't you think?
November 16, 2006 1:41:49 PM

Quote:
I understand what is being said. Everything sounds reasonable as to the simple nature of the shaders employed by the G80. But isn't that what most game developers are using? They have not taken advantage of the more complex shaders of the X1900 series of cards, because if they had, I think we would have already seen the X1900 series dominating everything. But that is not the case, is it? Once developers start utilizing complex shaders, then we will see the G80 fall on its face. But I think by the time that actually happens, it will be 2008, and Nvidia will have already realized they need to utilize a different technology. Plus, do you think ATI cards are going to be developed after next year? AMD wants nothing more than to be like Intel, and last I remember Intel does not make performance graphics cards. AMD has already axed the All-in-Wonder line of cards. So since there will be no more ATI in the coming years, I think game developers will have to utilize the technology that Nvidia wants to employ. Oh, by the way, I think this link has a video of Crysis being presented on the G80 line of cards. Looks to be running in DirectX 10 pretty well... don't you think?


ATI, as was said above me, is NOT going to stop making GPUs. I don't know where you got that information from, but it's wrong. AMD is going to take over the naming scheme for its Xpress chips, but that's all.
November 16, 2006 1:43:41 PM

Well, I am sure they won't. Well, that is, if you consider onboard graphics to be a graphics card. :twisted:
November 16, 2006 1:52:06 PM

Quote:
Do you really want chip companies to dictate what onboard video you should use? I know that Dell loves this idea, because they can lower their prices below the eMachines standard.

Intel has been trying for years to promote onboard video, with very little success. I did put the Intel stuff to the test, and the results looked like a 1930s movie.

Now AMD wants to shove ATI down our throats, because they assume that consumers are stupid and we need daddy Ruiz to show us the way to Taco Bell.

The problem with video integration is that innovation from video card manufacturers has to slow down in order to keep up with chip companies. You know very well that when you buy a video card in a retail box, the drivers inside are already several versions old. If AMD wants full video integration into their chips, then don't expect much ATI innovation.

NVidia, 3Dlabs and Matrox are the only players left.
November 16, 2006 1:55:35 PM

Quote:
In other words, AMD’s future “Fusion” platform, which the company says could be ready by late 2008 or early 2009, won’t just be an Athlon and a Radeon sharing the same silicon. The companies will truly be looking into the potential benefits of using multiple pipelines along with multiple cores, as a new approach to parallelism in computing tasks.
November 16, 2006 1:58:30 PM

A gamer would :twisted:
November 16, 2006 2:13:30 PM

Yet another quote:
Quote:
In June, ATI reported that its fiscal Q3 2006 revenues were a healthy $652.3 million, reflecting a generally upward trend. But after cost of revenues and expenses were accounted for, income for the period just barely topped $27 million, after having reported a loss for the same period a year ago. ATI doesn't break down its income into divisions, so there's no way you can peer into the quarterly numbers to see how much of that revenue comes from Radeon cards, and how much from integrated graphics components - what AMD perceived to be ATI's most desirable feature. So with AMD pumping $5.4 billion into ATI, exactly how much of that investment reflects AMD's valuation of ATI's stake in the high-performance market? We asked AMD marketing architect Hal Speed. His response wasn't exactly specific. "The mainstream notebook market, as well as the broader commercial client market in desktop and notebook...was really the driving force behind this [merger]," Speed told TG Daily. "Obviously, ATI has a wealth of strength in the notebook segment, both for integrated chipsets and discrete notebook GPUs...The commercial client market is a little different than where we had traditionally been on the consumer/desktop side, so that's really been the driving force behind this."
November 16, 2006 2:30:24 PM

Intel is the largest supplier of graphics chips. That is a fact: Intel Largest Supplier. So let's see: if AMD can make more money by using ATI's technology to help them with integrated graphics under their own name, then they can get their foot into the same market as Intel. AMD's main reason for acquiring ATI was to help with that market segment. So while AMD is investing all of that time into getting an integrated graphics solution finalized, they will start falling behind in the performance graphics market. AMD's main competitor is Intel. Am I right? So why even bother going after Nvidia when they are not direct competition in the first place? Just my opinion and nothing else. :oops:
November 16, 2006 2:47:37 PM

Quote:
I understand what is being said. Everything sounds reasonable as to the simple nature of the shaders employed by the G80. But isn't that what most game developers are using? They have not taken advantage of the more complex shaders of the X1900 series of cards, because if they had, I think we would have already seen the X1900 series dominating everything. But that is not the case, is it? Once developers start utilizing complex shaders, then we will see the G80 fall on its face. But I think by the time that actually happens, it will be 2008, and Nvidia will have already realized they need to utilize a different technology. Plus, do you think ATI cards are going to be developed after next year? AMD wants nothing more than to be like Intel, and last I remember Intel does not make performance graphics cards. AMD has already axed the All-in-Wonder line of cards. So since there will be no more ATI in the coming years, I think game developers will have to utilize the technology that Nvidia wants to employ. Oh, by the way, I think this link has a video of Crysis being presented on the G80 line of cards. Looks to be running in DirectX 10 pretty well... don't you think?


ATI, as was said above me, is NOT going to stop making GPUs. I don't know where you got that information from, but it's wrong. AMD is going to take over the naming scheme for its Xpress chips, but that's all.

It's the other way around. The ATI name will be used for the integrated GPUs only, and all other PC GPUs will be marketed under the AMD name.
November 16, 2006 3:15:57 PM

Quote:
I understand what is being said. Everything sounds reasonable as to the simple nature of the shaders employed by the G80. But isn't that what most game developers are using? They have not taken advantage of the more complex shaders of the X1900 series of cards, because if they had, I think we would have already seen the X1900 series dominating everything. But that is not the case, is it? Once developers start utilizing complex shaders, then we will see the G80 fall on its face. But I think by the time that actually happens, it will be 2008, and Nvidia will have already realized they need to utilize a different technology. Plus, do you think ATI cards are going to be developed after next year? AMD wants nothing more than to be like Intel, and last I remember Intel does not make performance graphics cards. AMD has already axed the All-in-Wonder line of cards. So since there will be no more ATI in the coming years, I think game developers will have to utilize the technology that Nvidia wants to employ. Oh, by the way, I think this link has a video of Crysis being presented on the G80 line of cards. Looks to be running in DirectX 10 pretty well... don't you think?


ATI, as was said above me, is NOT going to stop making GPUs. I don't know where you got that information from, but it's wrong. AMD is going to take over the naming scheme for its Xpress chips, but that's all.

It's the other way around. The ATI name will be used for the integrated GPUs only, and all other PC GPUs will be marketed under the AMD name.

How wrong you are,
http://www.hkepc.com/bbs/itnews.php?tid=699056&starttim...

"After the completion of acquisition, AMD revealed the company’s plan to merge ATI’s chipset under the brand of AMD. Affected chipsets include those released CrossFire enabled RD580, RD550, RD480 and all future chipsets for AMD platform. The brand of graphic division of ATI is kept, however."
November 16, 2006 3:34:17 PM

I really don't think NV cares how the G80 does in DX10. Everyone looking for a high-end card will buy it because it's DX10. I see it as a test core. By the time we see DX10 in real use, the next core will be here. NV gets paid for the first DX10 card with no way to prove it; that's just good business if you ask me. It is still the best card for the games out now, and it will play DX10, so really no one loses. The first of any new tech, for the most part, ends up being beta testing to work out the kinks.
November 16, 2006 3:35:32 PM

To the original poster: I suggest you post this on a more technical forum, such as arstechnica. You should not be surprised if you get a lot of fanboy responses here.

The links you provided show the performance of testing occlusion. If implemented correctly, it should not require dynamic branching. This is not saying that the nVidia card is necessarily slower; to show that, you will have to create a stream-processing code fragment for the ATI card and a CUDA application for the nVidia card.

No offense, but do you mind me asking what company you work for? You should not be doing that much branching inside of a shader, even if it is advanced. You should be doing any branch testing with the CPU. For example, assume you have two lights, 1 meter apart and 1 kilometer away from your object. You choose a shader that either disregards the lights if they are dim, or treats them as one light. You do only a few branches with the CPU, based on the position of the object, and no branches with the GPU.
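
To make that concrete, here is a hedged sketch of what I mean, with the kernel names and thresholds invented purely for illustration: the host picks one of a few branch-free shader/kernel variants per object, so the GPU never re-tests the same condition per pixel.

    // Hypothetical shader/kernel variants - stub bodies, purely illustrative.
    __global__ void shadeUnlit(float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 0.1f;              // ambient only: lights too dim/far to matter
    }

    __global__ void shadeOneMergedLight(float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 0.6f;              // the two distant lights treated as one
    }

    __global__ void shadeTwoLights(float* out, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) out[i] = 0.9f;              // full two-light path
    }

    // All the branching happens here, once per object, on the CPU.
    void drawObject(float* d_out, int n, float distToLights, float lightSeparation)
    {
        int threads = 256;
        int blocks  = (n + threads - 1) / threads;

        if (distToLights > 1000.0f)                         // ~1 km away: ignore the lights
            shadeUnlit<<<blocks, threads>>>(d_out, n);
        else if (lightSeparation < 0.01f * distToLights)    // 1 m apart at long range: merge them
            shadeOneMergedLight<<<blocks, threads>>>(d_out, n);
        else
            shadeTwoLights<<<blocks, threads>>>(d_out, n);
    }

A handful of comparisons per object on the CPU replaces millions of per-pixel branches on the GPU.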

As I said, you need to test the performance with something other than z testing. I have no idea why nVidia's cards are slow in that area; a z test, when vectorized properly, should only require one if statement and zero branches. You save the image of the buffer before the draw, simulate the draw on another buffer, in massive parallel take the difference in each pixel and add those up, and then return whether that difference is greater than zero.
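
Read literally, that test is just a per-pixel difference followed by a sum and a single comparison; a rough CUDA sketch of that reading (my own illustration, not actual benchmark code) would be:

    #include <cuda_runtime.h>
    #include <numeric>
    #include <vector>

    // Per-pixel difference computed in parallel; no data-dependent branching in the kernel.
    __global__ void pixelDifference(const float* before, const float* after, float* diff, int n)
    {
        int i = blockIdx.x * blockDim.x + threadIdx.x;
        if (i < n) diff[i] = before[i] - after[i];
    }

    // Returns whether the simulated draw changed anything. For brevity the sum runs on the
    // host; a real version would reduce on the GPU as well.
    bool anyPixelChanged(const float* d_before, const float* d_after, int n)
    {
        float* d_diff = 0;
        cudaMalloc(&d_diff, n * sizeof(float));

        int threads = 256;
        int blocks  = (n + threads - 1) / threads;
        pixelDifference<<<blocks, threads>>>(d_before, d_after, d_diff, n);

        std::vector<float> diff(n);
        cudaMemcpy(&diff[0], d_diff, n * sizeof(float), cudaMemcpyDeviceToHost);
        cudaFree(d_diff);

        float total = std::accumulate(diff.begin(), diff.end(), 0.0f);
        return total > 0.0f;   // the only test, and it happens once, not per pixel
    }

If occlusion testing is really where the branch cost shows up, a small benchmark built like this, plus an equivalent stream-processing fragment on the ATI side, would isolate it far better than a whole engine.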
November 16, 2006 3:38:58 PM

Quote:
I really don't think NV cares how the G80 does in DX10 ... that's just good business if you ask me.


This is a stupid argument. nVidia certainly cares about their reputation. They spent four years and US$475 million developing the G80. It was a major architecture redesign, featuring unified shaders, clearly suggesting intended DirectX 10 use. I would go with the link to the Far Cry performance, and my argument above, over this. http://www.anandtech.com/video/showdoc.aspx?i=2870&p=5
November 16, 2006 4:02:02 PM

Then again, I guess you could consider the algorithm that decides whether to draw the object in the first place to be a branch.

I am suspicious of your data. You need to explain it more, and understand it more. Even with things from Stanford, you should attempt to truly understand them rather than immediately say one card looks better in a graph. I honestly haven't read it all, nor do I think I need to until you explain how this is measuring branch performance. I am also at a complete loss as to why this would be an exponential performance limitation.

Your arguments may be in good faith. However, posting a bunch of unverified, unexplained facts about a technical aspect of a card in a rather non-technical forum - particularly if you do not reply to honest questions - indicates flaming. If you are in graphics and got misled by something, or your code isn't optimal, that's fine.

The link another user posted on another forum, which you did not respond to, explained how their benchmarks reflect dynamic branching. After you visit the non-translated page, you can view the translated page with Google and the images will display correctly.

http://www.hardware.fr/articles/644-6/nvidia-geforce-8800-gtx-8800-gts.html.
November 16, 2006 4:08:52 PM

I don't give a damn at all about the G80, R580, or R600's performance in anything other than games.

I will not choose my graphics card on the basis of how fast it runs Folding@Home.

Right now, the G80 is the fastest DX9 card in the world for gaming. PERIOD.

Right now, the G80 is the ONLY DX10 card available in the world. PERIOD.

If I buy a G80, I will be buying it on those facts. If the R600 is better at DX10, AND I end up running Vista and playing DX10 games (and I don't foresee upgrading to Vista until all the "security" crap has been dealt with and I can run what drivers I like and the DRM schemes can go to hell), then I will consider an R600.

If the G81 (or whatever the refresh gets called) is even better for the games and applications I play, I will get that instead.

But as it is, the G80 is without a doubt the fastest card I can buy today for what I want to do with it.
November 16, 2006 4:28:03 PM

Oh, I'm sorry to have invaded your thread.

My point was that the dynamic branching performance is irrelevant to most people buying it.

Graphics cards are built primarily for games, and also for CAD-type work. It has been pointed out that dynamic branching only becomes a factor in DX10-style complex shaders - something that means the X1950's performance with them is irrelevant, as it will never render DX10.

CAD-style applications are generally not limited by shader performance; this is why the G71-based Quadros routinely beat the X1800-based FireGLs (they didn't even bother making X1900-based ones, AFAIK, because the shader performance was irrelevant). As such, I believe it's a good bet dynamic branching is irrelevant here too.

So, right now, it seems the only place it IS relevant is GPGPU applications. In other words, a niche application that is not the primary function of these products.

That was the point I was making in my post. If a 25-year-old Lada has knobs for the heating that turn 20 times faster than the knobs in a new BMW, it doesn't make the Lada a better car.

The point you are bringing up is irrelevant to 98% of people. And I'm being generous to say that 2% of prospective buyers care about GPGPU performance. This thread could lead newbies to believe it is relevant to them.

I won't lower myself to your level by bringing personal attacks into play and saying things are off topic when they don't suit my opinion :roll:
November 16, 2006 4:44:31 PM

I agree with demo coder on rage 3d.

Quote:
In the mean time, you are making some pretty bold claims, so you may as well expect heavy criticism. You certainly aren't going to win any favors if you can't construct a simpler benchmark to prove your point, one that is open so that it may be examined.
November 16, 2006 4:47:57 PM

For what it's worth, the **ONLY** reason I currently have a 7900GT and not an X1900XT is that the 7900GT performs better in City of Heroes, which I play far more than is healthy.

I'm more than happy to admit that the R580 is a much better chip than the G71 for most games, just not the one I play the most!

It may well be that the R600 is far better than the G80, and I'm starting to think that nVidia have designed the G80 to be the ultimate DX9 card with DX10 functionality tacked on, while the R600 seems the other way around to me, from the little I know of it.

The R600 will probably still be more than adequate for any DX9 game you can throw at it, however, as it will of course be faster than the X1950XTX, itself a very fast card.

As such, there is a good chance I will change to the R600 when it is released. Just as I had nothing but AMD CPUs for the past 10 years prior to getting a D805 for a C2D upgrade path about 4 months ago, I'm more than willing to switch sides if it suits me. (Plus I got the D805 extremely cheaply, always a convincing argument.)

If the G80 were going to be released when the R600 is, and nForce 680i weren't promising to solve my FSB overclocking woes, I'd be looking at getting a pair of X1950XTXs in Crossfire right now.

This post was off topic; I'm just trying to say that although it has been a while since I had an ATI card (Radeon 8500 FTW), I'm not glued to nVidia.
November 16, 2006 4:58:59 PM

Quote:

How wrong you are,
http://www.hkepc.com/bbs/itnews.php?tid=699056&starttim...

"After the completion of acquisition, AMD revealed the company’s plan to merge ATI’s chipset under the brand of AMD. Affected chipsets include those released CrossFire enabled RD580, RD550, RD480 and all future chipsets for AMD platform. The brand of graphic division of ATI is kept, however."


I think what people are trying to say is "read between the lines". You know how a company merger says it will keep staff from the old company? 99% of the time they're worked very hard trying to integrate the merged products and technology, then driven out or later terminated.

For the sake of ATI's stock price, they might be saying they will keep the desktop and high-end graphics card market. But they may also plan to cut a few lines of their cards to save money and remove old lines that are not producing revenue.

Nothing against AMD directly, but I can't think of any company that keeps its merged company intact after a merger. There WILL be changes. There is no way AMD is going to let the previous ATI business plan go unaltered. No company would be stupid enough not to try trimming what it sees as fat.

The goal was one that may doom ATI's high-end graphics segment, just because AMD needs the chipsets and integrated graphics. Remember the mergers and buyouts of companies in the early 2000s? Someone could buy a company doing $300 million/year in revenue and turn it into a $75 million/year lemon just to get at its technology.
November 16, 2006 4:59:09 PM

Quote:

Last time I checked 48 x 128 is greater than 128 x 32.
-Raystonn


Here is a good tip so that in the future you don't need to check whatever you are checking:

Since the same 128 is being multiplied in both cases, you simply have to look at the other two numbers. Since 48 > 32, then 48x128 > 32x128. See, that was easy, and you don't need to check anything! I hope this helps with your dynamic "last time you checked" prediction. :)
November 16, 2006 5:04:58 PM

Quote:
that is all fine and well, unfortunately you and every other idiot


What's with you calling everyone an idiot just because they have a differing opinion from yours, and vice versa? What are you, 12? :roll: