DX 10.1 games coming soon

June 26, 2008 7:00:19 AM

Saw this post on another forum. It's from an AMD rep, and it looks promising: tech advancement in software at last. http://rage3d.com/board/showthread.php?p=1335512390#pos... "Something that hasn't been noted yet is that EA and SEGA are signed on with us for DX10.1 titles, we've also signed a fairly major deal with Blizzard.
__________________
"Wavey" Dave

--------------------------------------------------------------------------------
Last edited by Dave Baumann : Yesterday at 12:58 AM." If they release these games quickly enough, a 4870 will be close to or beat a GTX 280 in this format.

June 26, 2008 7:06:21 AM

Unless it is implemented like the current DX10.0, i.e. as tacked-on code.
June 26, 2008 7:10:31 AM


I know things seem to be swinging ATI's way at the minute, but I can't see them leaving Nvidia users out in the cold on this one.
Mind you, "signed on" could be read as "agreed to in future"? How long does it take to develop a game?
Mactronix
June 26, 2008 7:11:56 AM

Depends on whether they re-use the engine from another game or not. Didn't the devs making COD5 say they could do it in a year?
June 26, 2008 7:18:29 AM

There's speculation that several games are coming early next year or sooner. We know from the TR review of the 4850 that the G200s currently don't or can't use it, since the review had to use the "unpatched" version of AC.
June 26, 2008 7:44:52 AM

If games are coming next year, that gives Nvidia time to implement DX10.1, unless they simply decide to bludgeon their way through with DX10 only and massive amounts of raw power.
June 26, 2008 7:55:06 AM

I thought nVidia "swore" never to use 10.1....
This will be a great test for them.
June 26, 2008 9:32:23 AM

Basically free 4xAA
June 26, 2008 9:40:21 AM

JAYDEEJOHN said:
Basically free 4xAA

To my understanding the cards are just required to support 4xAA, which they already do anyway.
June 26, 2008 9:46:56 AM

That rings a bell. I'm pretty sure the "free AA" thing is a myth caused by twisted information. It gets worse as people pass it on, like Chinese whispers (probably originating in Chinese too).
June 26, 2008 10:05:32 AM

It would be really nice if Blizzard could do DX10.1 in StarCraft 2, or if EA would do it in Call of Duty: World at War.
June 26, 2008 10:29:57 AM

printz_asger said:
It would be really nice if Blizzard could do DX10.1 in StarCraft 2, or if EA would do it in Call of Duty: World at War.


Starcraft 2
Diablo III

..........drool........
June 26, 2008 12:27:55 PM

Quote:
What's the big deal of 10.1 anyway


In DX10.1, AA and shader effects are handled in a single pass. In DX10.0 (a.k.a. The Way It's Meant To Be Played), the shader effects run first and the AA resolve happens in a separate pass. This is the reason nVidia cannot support the 10.1 way of doing things: they would have to throw away their highly optimized and fine-tuned AA hardware engine and create a design similar to the ATI HD series. (Remember the HD2900 fiasco? ATI had to lose a generation of GPUs to fine-tune shader AA.)
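
To picture what "single pass" means, here is a rough HLSL sketch (Shader Model 4.x style; the buffer and function names are made up, not taken from any real engine) of a custom resolve that shades each sample and does the 4x box filter in the same shader, instead of resolving first and shading in a second pass:

// Hypothetical single-pass "shade + resolve" sketch (SM 4.x).
// The multisampled colour buffer is bound as a shader resource and
// each sample is shaded before the 4x box-filter resolve, so no
// separate fixed-function resolve pass is needed.
Texture2DMS<float4, 4> gColorMS;   // assumed MSAA colour buffer bound as an SRV

float4 ShadeSample(float4 c)
{
    // Stand-in for a real post effect: simple exposure curve.
    return 1.0 - exp(-c * 1.5);
}

float4 ResolveAndShadePS(float4 pos : SV_Position) : SV_Target
{
    int2 coord = int2(pos.xy);
    float4 sum = 0;
    [unroll]
    for (int s = 0; s < 4; ++s)
        sum += ShadeSample(gColorMS.Load(coord, s));   // shade each sample in place
    return sum * 0.25;                                 // 4x box resolve
}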
June 26, 2008 2:02:20 PM

At least ATI is moving in the right direction; closer cooperation with game devs will be handy for gamers and for the developers themselves (10.1 coding has some advantages). Of course nVidia screwed themselves by not supporting the new tech, but it's nothing major anyway, though GeForce owners will be shafted a bit. Even the minor attempt at 10.1 in Assassin's Creed boosted AA speed by ~20% for Radeon cards; imagine a game fully optimized for it.

Another thing: it's a big question whether Blizzard will go all out with 10.1, as they are very cautious about using DX10 at all. EA is another story; it would be sweet to have COD5 with full 10.1 features.
June 26, 2008 2:03:15 PM

AMD/ATI's game plan...much like the underpants gnomes...

Phase 1: R700
Phase 2: DX 10.1
Phase 3: Profit
June 26, 2008 4:37:51 PM

Dude DX10.1...problem is no one wants to run Vista crap to utilize DX10! A lot of stuff now is just badass DX9 titles...funk DX10!...and especially FUNK DX10.1! LOL.
June 26, 2008 4:53:43 PM

Waspy said:
Dude DX10.1...problem is no one wants to run Vista crap to utilize DX10! A lot of stuff now is just badass DX9 titles...funk DX10!...and especially FUNK DX10.1! LOL.


That's just the thing though, Waspy, it's these kinds of improvements that are needed to convince people to move to Vista or Windows 7 or whatever follows.
Early doors, Vista was slow and a system hog, though probably no more so, relatively speaking, than anything that came before. Once games start showing consistently high fps, most people's objections will disappear.
If these DX10.1 titles come out and people take to them (from what I have seen and heard there is no reason not to), then Nvidia could get a taste of what it's like to be left out in the cold.
The difference would be that it was superior tech that did it, and not some half-baked DX implementation deal.
I suspect they are already working on DX10.1 hardware though. Many disagree with their ethics and business practices, but one thing they ain't is silly.

Mactronix
June 26, 2008 4:54:16 PM

Too bad XP doesn't support it. Micro$oft and their evil schemes to get people to buy Halo 2.
June 27, 2008 12:54:59 AM

Vista is not much slower than XP now, but it will never be faster, that's for sure. And guess what: Windows 7 will be slower than Vista. Why? Because software only ever gets bigger and slower. That is why we don't have ever-increasing framerates in new games even with hardware that is 4x more powerful than a few years ago.
June 27, 2008 1:44:38 AM

printz_asger said:
It would be really nice if Blizzard could do DX10.1 in StarCraft 2, or if EA would do it in Call of Duty: World at War.



Call of Duty: World at War uses an enhanced COD4 engine, with the enhancements being more about physics than anything.
June 27, 2008 1:45:30 AM

L1qu1d said:
Call of Duty: World at War uses an enhanced COD4 engine, with the enhancements being more about physics than anything.


It's not even made by the original company that made 1, 2 and 4, so don't get your hopes up :) 

EDIT: I have no idea why I quoted myself, I clicked edit lol. Anywho!
June 27, 2008 1:55:01 AM

That's good in a way, COD4 physics were kinda non-existent.
June 27, 2008 2:14:51 AM

Yeah, I'm really liking the new Brothers in Arms game :) There are amazing physics there, destructible cover :) 

June 27, 2008 2:18:17 AM

I played the original Brothers in Arms and absolutely hated it. One of the most (if not the most) repetitive "recent" games I have ever played. I didn't bother getting any expansions or sequels if there are any.
June 27, 2008 3:28:27 AM

I found it the same, but the new one apparently is the ****!
June 27, 2008 6:37:39 AM

Haha, my dad loves BIA, it's the only game he plays nowadays...
June 27, 2008 6:39:39 AM

But all you do is lay down a base of fire, flank and kill. You also use the same conveniently shaped dirt mounds as cover.
June 27, 2008 6:49:04 AM

Yup I know. My father's 50 years old, so he likes to take it slow.
June 27, 2008 6:50:06 AM

Fair enough :D 
June 27, 2008 6:51:29 AM

Quote:
What's the big deal of 10.1 anyway


Don't you hate noobs who just think they know what they're talking about when they know, like, nothing?
June 27, 2008 10:49:13 AM

randomizer said:
That rings a bell. I'm pretty sure the "free AA" thing is a myth caused by twisted information. It gets worse as people pass it on, like Chinese whispers (probably originating in Chinese too).


Randomizer, you're right that it's NOT free 4xAA. It's just applying all the shader filters AND the AA in the same pass; that's where the performance increase comes from.

When you look at the shader code for many effects, they're already doing some of the calculations required for AA, so the shader scheduler just combines and reschedules the code to execute it more efficiently.

The problem with nVidia supporting AA in the shaders is that you need a f***ing huge number of shader processors (look at the HD4800 series cards from ATI: they have 800) to get decent performance, and nVidia doesn't have that many. Instead, nVidia has a dedicated AA hardware unit which is extremely optimized and in fact works pretty well.

Something off topic: given the size of the GT200 GPUs from nVidia, it might be a bit of a problem for them to pack that many shader processors into an area as small as RV770.
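
If you want to see roughly what "AA on the shaders" looks like in code, the DX10.1 / Shader Model 4.1 route maps to things like the SV_SampleIndex input, which makes the pixel shader run once per MSAA sample so the shader processors do the per-edge work instead of a dedicated resolve unit. A rough sketch (names invented, purely for illustration):

// Rough SM 4.1 sketch: SV_SampleIndex forces per-sample pixel shader
// execution, so each of the 4 samples of a pixel is shaded individually
// by the shader units rather than by fixed-function resolve hardware.
Texture2DMS<float4, 4> gSceneMS;   // assumed MSAA scene colour SRV

float4 PerSamplePS(float4 pos : SV_Position,
                   uint sampleIdx : SV_SampleIndex) : SV_Target
{
    float4 c = gSceneMS.Load(int2(pos.xy), sampleIdx);
    // Trivial stand-in for real per-sample shading work (gamma adjust).
    return pow(abs(c), 1.0 / 2.2);
}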
June 27, 2008 11:51:04 AM

Does this mean that nVidia's dedicated shaders are incapable, or harder hit, vs ATI's, which are more flexible?
June 27, 2008 12:12:51 PM

I could be completely wrong here, so please correct me if I have this A about T.
I was of the belief that the ATI cards since the 2xxx series were doing the AA in the shaders, as was the original spec for DX10 before the M$/Nvidia thing, and that now with the 4xxx series cards they are going back to doing it completely in the ROPs, i.e. going from software back to hardware AA.
Is that right or have I got this all wrong?
Mactronix [edit for spelling]
June 27, 2008 12:24:03 PM

JAYDEEJOHN said:
Does this mean that nVidia's dedicated shaders are incapable, or harder hit, vs ATI's, which are more flexible?

As I wrote, they aren't incapable. In fact, they are extremely efficient.
BUT they need an extra pass AFTER the shaders are applied.

That is why the HD2900 was hit so badly by nVidia-optimized games. Although it was a DX10 card, it was built with DX10.1 in mind. As you all know, DX10.1 is in fact the original DX10, which MS had to cripple in order to give nVidia's G80 cards the "DX10 supported" title, as there were no other GPUs even getting near it.

When you write a game using DX10.1 rather than DX10, the DX10.1 path should be around 20-40% faster than its DX10 counterpart, because all the shader effects and the AA are processed in one pass instead of two.

ATI's HD series GPUs have a very high number of shader processors to cope with this kind of workload. (HD2900 and HD3800 have 320, HD4800 has 800.)

Considering that nVidia now fits 240 shader processors in an area 2.5 times larger than RV770, sticking 800 of them in would make a GPU about the size of the graphics card itself. So nVidia is sticking with "The Way It's Meant To Be Played" for as long as it can. But in the end we can expect them to come around with the next generation (GT200 isn't a new generation, it's an increment on the G80 architecture), as CUDA and PhysX would also benefit greatly from this transition.
June 27, 2008 12:26:49 PM

From what I gather, they're doing both, or to be more precise, they're capable of either way. I know that nVidia's shaders are more of a dedicated type, whereas ATI's are more flexible in what they can do. If it's being done in the shaders, they still have to have the drivers set for it. I understand you don't need an extra pass to complete up to a 4xAA resolve; that's why it's faster. So in essence people equate that with being free, which is technically wrong, but it is much more efficient.
June 27, 2008 12:31:58 PM

JAYDEEJOHN said:
Does this mean that nVidia's dedicated shaders are incapable, or harder hit, vs ATI's, which are more flexible?

They would take a harder hit because there are simply fewer shaders to distribute the work to. I suspect it could mess up their SF unit, which would really, really slow things down in games that actually do complex shader processing.

mactronix said:
I could be completely wrong here, so please correct me if I have this A about T.
I was of the belief that the ATI cards since the 2xxx series were doing the AA in the shaders, as was the original spec for DX10 before the M$/Nvidia thing, and that now with the 4xxx series cards they are going back to doing it completely in the ROPs, i.e. going from software back to hardware AA.
Is that right or have I got this all wrong?
Mactronix [edit for spelling]

You are correct. Well, technically speaking they are not going back: they never started doing AA in their shaders, since DX10 as it was specified before the NV debacle never took off. And to be really picky about it, AMD didn't fold either. They improved their ROPs, but the massive increase in shading power keeps them more than able to do AA in those units if a developer wants to use it. As I understand it, many developers didn't like shader-based AA because of its implementation. I don't know the details about that, though.
June 27, 2008 12:34:55 PM

Yeah! Exactly.
Being a physicist and applied mathematician, I could explain the logic behind it, but there's no need for that. :) 
Most effects already do some of the calculations required for AA, so when the AA phase comes we already have 20-40% of the preliminary work done. By just completing the rest, we can finish the AA more quickly than raising the interrupt to tell the app we've finished with the shaders and waiting for the application to call back and say "OK, now do the AA". That round trip alone costs performance.
June 27, 2008 12:35:32 PM


Can't wait for the guys at B3D and the Tech Report to do their full reviews; I really want to know what the difference is that makes the AA so much better.
Mactronix
June 27, 2008 12:40:00 PM

Slobogob said:
They would take a harder hit because there are simply fewer shaders to distribute the work to. I suspect it could mess up their SF unit, which would really, really slow things down in games that actually do complex shader processing.


You are correct. Well, technically speaking they are not going back: they never started doing AA in their shaders, since DX10 as it was specified before the NV debacle never took off. And to be really picky about it, AMD didn't fold either. They improved their ROPs, but the massive increase in shading power keeps them more than able to do AA in those units if a developer wants to use it. As I understand it, many developers didn't like shader-based AA because of its implementation. I don't know the details about that, though.


Well, the messy part is that a programmer has to think more before submitting batches of work to the GPU. With separate AA, you just set a flag saying you want this or that AA mode, but with shader AA you need to include it in the HLSL you're submitting. Many DX programmers don't like to hassle with HLSL; they just call functions that were written a couple of years ago as the "game engine".

It's not the game programmers but the game engine programmers who don't want to use it much. They would have to make too many changes to their code. (At the very least, they have to remove all the calls requesting AA and change all the HLSL code to include AA. It's a massive amount of work, in fact.)
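
Just to illustrate the kind of rewrite I mean (this is only a sketch with made-up names, not code from any actual engine), compare a trivial post-process shader before and after the AA resolve is folded into it:

// BEFORE: the engine resolves MSAA separately (set via an API flag),
// and the post-process shader reads an already-resolved texture.
Texture2D    gResolved;
SamplerState gPointSampler;

float4 DesaturatePS(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 c = gResolved.Sample(gPointSampler, uv);
    float grey = dot(c.rgb, float3(0.3, 0.59, 0.11));
    return lerp(c, grey.xxxx, 0.5);
}

// AFTER: the same effect with the 4x resolve folded into the shader,
// so the engine has to rewrite its HLSL (and drop the "request AA"
// calls it used to make elsewhere).
Texture2DMS<float4, 4> gSceneMS;

float4 DesaturateResolvePS(float4 pos : SV_Position) : SV_Target
{
    float4 sum = 0;
    [unroll]
    for (int s = 0; s < 4; ++s)
    {
        float4 c = gSceneMS.Load(int2(pos.xy), s);
        float grey = dot(c.rgb, float3(0.3, 0.59, 0.11));
        sum += lerp(c, grey.xxxx, 0.5);
    }
    return sum * 0.25;
}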
June 27, 2008 12:40:53 PM

I think duzcizgi sorta hit it on the head.
June 27, 2008 12:44:03 PM

Not a huge difference from the ground up though, right? And wasn't AC built from the ground up?
June 27, 2008 12:48:57 PM

JAYDEEJOHN said:
Not a huge difference from the ground up though, right? And wasn't AC built from the ground up?


Pretty much, yes. It's a PS3 & Xbox 360 title ported to PC. BTW, the Xbox 360 has the same GPU architecture as ATI's HD series GPUs. ;) 
June 27, 2008 12:52:03 PM

duzcizgi said:
Considering that nVidia now fits 240 shader processors in an area 2.5 times larger than RV770, sticking 800 of them in would make a GPU about the size of the graphics card itself. So nVidia is sticking with "The Way It's Meant To Be Played" for as long as it can. But in the end we can expect them to come around with the next generation (GT200 isn't a new generation, it's an increment on the G80 architecture), as CUDA and PhysX would also benefit greatly from this transition.

A very interesting point. Another thing to remember, though, is that Nvidia tends to clock their shaders higher than AMD. A lot higher, actually: if you compare the clock of the 4850 core to the GTX shader speed, Nvidia runs theirs at roughly twice the speed.
I suspect this is the only way they can keep up with the raw processing power of the RV770 (even though the raw processing power of the 48xx series measured in TFLOPS exceeds the GTX, there is a huge difference between theoretical FLOPS and real code on the card actually doing something). The extra AA pass hits AMD far harder than Nvidia. Once DX10.1 gets used, NV will take a hit, but it won't be as big as the hit AMD took with their 2xxx and 3xxx series. The 20-40% you mentioned is quite probable and a good guess.
As you already pointed out, CUDA and PhysX will play a bigger role for Nvidia's next generation. I'm curious whether they will just add more shaders with slight modifications (to keep compatibility) or throw in a few new, more specialised "shaders" just to keep AMD out of their CUDA/PhysX scheme.
The new GTX series is nice and everything, but it is not like the 8800 GTX. Time won't be as kind to it as it was to the original GTX.
June 27, 2008 12:59:57 PM

Slobogob said:
A very interesting point. Another thing to remember, though, is that Nvidia tends to clock their shaders higher than AMD. A lot higher, actually: if you compare the clock of the 4850 core to the GTX shader speed, Nvidia runs theirs at roughly twice the speed.
I suspect this is the only way they can keep up with the raw processing power of the RV770 (even though the raw processing power of the 48xx series measured in TFLOPS exceeds the GTX, there is a huge difference between theoretical FLOPS and real code on the card actually doing something). The extra AA pass hits AMD far harder than Nvidia. Once DX10.1 gets used, NV will take a hit, but it won't be as big as the hit AMD took with their 2xxx and 3xxx series. The 20-40% you mentioned is quite probable and a good guess.
As you already pointed out, CUDA and PhysX will play a bigger role for Nvidia's next generation. I'm curious whether they will just add more shaders with slight modifications (to keep compatibility) or throw in a few new, more specialised "shaders" just to keep AMD out of their CUDA/PhysX scheme.
The new GTX series is nice and everything, but it is not like the 8800 GTX. Time won't be as kind to it as it was to the original GTX.


In fact, the hit ATi/AMD took with their first implementation in R600/HD2900 came about because they were changing their architecture fundamentally and missed their deadline by 6 months. So they said: OK, let's launch this and iron out the problems later. Yes, they fixed most of the problems, but they still saw that the processing power of R600 wasn't enough for shader AA, so they upped the shader count from 320 to 800. That also gives them an edge in GPGPU and Havok implementations, as they can use all of the GPU's processing power for GPGPU (CUDA on nVidia) or physics applications. nVidia still can't use all the processing power of their GPUs. If they really want to compete with Intel's Larrabee, they have to take the same road ATi/AMD has already taken; they just need time to come up with their new design.
June 27, 2008 1:00:56 PM

True, but the G200 kicks the door open for CUDA et al., and that's a good thing, a real good thing.
June 27, 2008 1:03:31 PM

duzcizgi said:
Well, the messy part is that a programmer has to think more before submitting batches of work to the GPU. With separate AA, you just set a flag saying you want this or that AA mode, but with shader AA you need to include it in the HLSL you're submitting. Many DX programmers don't like to hassle with HLSL; they just call functions that were written a couple of years ago as the "game engine".

It's not the game programmers but the game engine programmers who don't want to use it much. They would have to make too many changes to their code. (At the very least, they have to remove all the calls requesting AA and change all the HLSL code to include AA. It's a massive amount of work, in fact.)

I agree. That's actually always the problem. Programmers usually do what they can get away with, and I'm not blaming them. On really fast hardware, optimization is not needed, at least on the PC. Optimization can be a huge cost (and time) factor, but on the PC it is mostly voluntary: if a game doesn't run blazingly fast, wait three months and buy the next generation of GPUs. At least, that's what is happening. There is no real incentive to implement these features. Investing more work to get better performance on some computers (AMD GPUs) but worse performance on others (NV GPUs)? That is kind of hard to sell.
Looking at it, it is no longer just the software that has to be tailored. Sure, a few tweaks here and there, but at some point it just gets easier to build more specialised hardware. And that is something Nvidia has done quite well: they look at what is coming out during the release timeframe of their GPUs and tailor their hardware to suit it best. AMD takes a more flexible approach that involves more software.
June 27, 2008 1:04:36 PM

I agree, jaydeejohn. It kicked the door "ajar" only; it's not open yet. They need more to get people on the bandwagon. First of all, if they want it used in games, they need something that can cope with both the great graphics and those particles flying around. :)  You need more than 240 shader processors for this. Even 800 might not be enough. ;) 
June 27, 2008 1:09:49 PM

Slobogob said:
I agree. That's actually always the problem. Programmers usually do what they can get away with, and I'm not blaming them. On really fast hardware, optimization is not needed, at least on the PC. Optimization can be a huge cost (and time) factor, but on the PC it is mostly voluntary: if a game doesn't run blazingly fast, wait three months and buy the next generation of GPUs. At least, that's what is happening. There is no real incentive to implement these features. Investing more work to get better performance on some computers (AMD GPUs) but worse performance on others (NV GPUs)? That is kind of hard to sell.
Looking at it, it is no longer just the software that has to be tailored. Sure, a few tweaks here and there, but at some point it just gets easier to build more specialised hardware. And that is something Nvidia has done quite well: they look at what is coming out during the release timeframe of their GPUs and tailor their hardware to suit it best. AMD takes a more flexible approach that involves more software.


They both have their strong and weak points. The problem is that programmers are lazier than before. With all these RAD tools doing the dirty work of coding, they just click, drag and drop. (Hey, I also earn my living with these clicks, drags and drops. :p  I know it well.) There's no sufficient education or training for efficient programming except msdn.microsoft.com, and even there most of the best practices aren't implemented by programmers, because they involve actually writing code!

That's why any given game written on top of a given engine has more or less the same performance level. You can expect the upcoming games using CryEngine 2 to perform on par with Crysis.
June 27, 2008 1:17:39 PM

From what I'm reading, the density of the ATI shaders is good, and when they finished the R7 series they actually had more room than they needed, which gave them the extra numbers we see. I'm hoping it continues. Rumors of the R8 series have it at 1600 shaders, if I recall.
June 27, 2008 1:19:37 PM

duzcizgi said:
In fact, the hit ATi/AMD took with their first implementation in R600/HD2900 came about because they were changing their architecture fundamentally and missed their deadline by 6 months. So they said: OK, let's launch this and iron out the problems later. Yes, they fixed most of the problems, but they still saw that the processing power of R600 wasn't enough for shader AA, so they upped the shader count from 320 to 800. That also gives them an edge in GPGPU and Havok implementations, as they can use all of the GPU's processing power for GPGPU (CUDA on nVidia) or physics applications. nVidia still can't use all the processing power of their GPUs. If they really want to compete with Intel's Larrabee, they have to take the same road ATi/AMD has already taken; they just need time to come up with their new design.

Given the heat and power specifications of the R600, I really doubt they could have packed in more shaders back then, which supports your point. I suspect the shading power of GPUs has to grow by quite a lot again if both physics and regular "graphics" have to be done on the GPU. The number of threads an R700 can handle seems too low for that. Maybe not for now, since there are no games that employ physics in a prominent manner, but once they do, the current amount won't be enough. It's interesting to watch; I think with these more programmable shaders we will see a transition from graphics cards to something we can actually call gaming or processing cards (depending on what they are used for).