ATI whitepaper on DX11: a good read

jennyh

Splendid
I really hope AMD is throwing some money at a games studio right about now. What they need is a DX11 title at launch; with screenshots like those, they couldn't fail.
 
What'll be interesting is nVidia's response. If they say "it won't be utilized in games right away", then we'll know it's TWIMTBP at work instead of any nVidia GPUs, and it'll also be cutting their own throats as to the GPGPU usage/refinements we'll see in DX11.
I'll say it here: there's no excuse for any part of the gaming industry not to embrace DX11, save possibly the consoles.
There'll be two OSes ready to use it, and there'll reportedly be easily ported DX10-to-DX11 games (http://www.slideshare.net/repii/your-game-needs-direct3d-11-so-get-started-now?type=powerpoint) that'll partially benefit from it, somewhat like we saw with AC, and we all know what happened there.

So if the rhetoric about "how long before it'll really benefit" starts, people need to read up.
 
The same can be said for physics as well; I forgot to mention that. Tessellation will be the thing that comes later. Everything else will, or should, be available on most current DX10/DX10.1 cards, and of course all of it on a DX11 card.
 

one-shot

Distinguished
Jan 13, 2006
1,369
0
19,310
I would suggest not clicking on the "leaked" screenshots. It brought me to a different site, said I had spyware, then appeared to run a fake virus scan. Not going to click that again. I'm scanning my HDs with a REAL virus scanner and I'll see what happens.

On topic.

ATI looks to have a winner here. If they can get their product out before Nvidia does, they have a distinct advantage. And I'm not talking about the Assassin's Creed (dis)advantage after Nvidia got its way. My next card will be an ATI, and this looks like a winner. Now we just need to see wide adoption of DX11 and hope that console ports don't ruin everything.
 
@ One-shot: Off topic, but I've come across that several times this week; as long as you kill it by stopping the application in Task Manager, you'll be fine.

On topic... blast, another improvement to my gaming experience ;)
Hopefully by the time I rebuild next year they'll have stable drivers and the bugs out of Win 7. I'm finally starting to take notice of all the DX11 hype!
Jaydee, isn't tessellation already built into the HD4xxx cards?
 

4745454b

Titan
Moderator
"AMD" first put a Tess in the Xenos GPU. (Xbox 360 GPU) They have included one in every card since the 2xxx series. I don't remember who, but someone told me they'll have to tweak their Tess engine for it to be DX11 compliant. I say who cares, at least they don't need to build one. (and find out where in the chip it will be.) Considering how large Nvidia's GPUs are now, and how much they have to add, their next GPUs will be HUGE.
 
The tessellation engines currently in ATI cards are lame ducks. They don't work the same way DX11 will, since DX11 moves the work onto new shader stages. The current engine could be used, and has been in a few older games, but games have to be specifically developed for it.
It's much like nVidia's PhysX: proprietary. But like 4745454b said, the jump to DX11 won't be as painful in die size for ATI, and since there's a background of having it, ATI should glean some expertise there as well.
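To see why an on-chip tessellator is worth the die space, here's a minimal sketch (plain Python, purely illustrative, not any real D3D API) of the data-amplification idea: the card receives a coarse control mesh and generates the dense geometry itself, so the heavy dense mesh never has to be stored or sent over the bus. The byte counts are rough assumptions for illustration only.

```python
def tessellated_triangle_count(base_triangles, levels):
    """Uniform subdivision: each level splits one triangle into four."""
    return base_triangles * 4 ** levels

def bytes_saved(base_triangles, levels, bytes_per_triangle=3 * 12):
    """Rough vertex-data bandwidth saved by uploading the coarse mesh and
    amplifying on-chip instead of uploading the dense mesh.
    (3 vertices x 12 bytes of float3 position per triangle; illustrative.)"""
    dense = tessellated_triangle_count(base_triangles, levels)
    return (dense - base_triangles) * bytes_per_triangle

# A 1,000-triangle control mesh at 3 subdivision levels stands in
# for a 64,000-triangle dense mesh.
print(tessellated_triangle_count(1000, 3))  # 64000
```

The point of the sketch: geometry detail grows exponentially with subdivision level, which is exactly the kind of work you want generated on the GPU rather than authored and transferred.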

Yes, unfortunately, the biggest eye candy from DX11 will be the tessellation abilities, but don't discount all the other improvements we'll see. They'll advance everything else, DX10 on down, for eye candy, since they'll make all of that easier to code for and a lot of it easier to process, on both GPU and CPU. And I don't hear a lot of people complaining about Crysis-type visuals, which'll be much more easily obtainable.
 

cappster

Distinguished
Jan 24, 2007
359
0
18,790
I am not going to hold my breath on DX11 being the next greatest thing. DX10 was supposed to revolutionize gaming, but as far as I can tell it hasn't. I will be slightly optimistic, but not overly excited, about DX11.
 
This is what I wonder about all the supposed claims for DX10. Where are they? And, much more importantly, what were they? Most people can't explain either. For the most part, a good portion of DX10 isn't being used today, because DX10.1 carries a lot of the performance improvements, which were a lot of DX10 to begin with, and nVidia doesn't do DX10.1.
Having said that, what DX10 has done is simply pave the way for DX11. Remember, this jump is like going from DX8 to DX9, unlike with DX10, where we saw a whole new set of rules that completely changed the direction of GPU hardware. In ATI's case it's been implemented over three generations of hardware, and only two for nVidia, with nVidia still not DX10.1 compliant.

I think people err when they think of DX10 and its capabilities, when most of those capabilities are only used by one manufacturer. With Intel coming into the mix, nVidia had better straighten up its act, or it'll be so proprietary, they'll proprietize themselves out of the market.

What's that mean for us? nVidia should be compelled now more than ever to be in full compliance with today's standards, and thus we'll then see it in our games as well.

Here's a nice example of DX10 being used properly in a game:
http://translate.google.com/translate?prev=hp&hl=de&js=n&u=http%3A%2F%2Fwww.pcgameshardware.de%2Faid%2C687117%2FAnno-1404-im-Test-Grafikkarten-CPU-und-DirectX-9-%2FDirectX-10-Benchmarks%2FStrategiespiel%2FTest%2F&sl=de&tl=en&history_state0=

And it'll be better in DX11 with W7 also.
 

4745454b

Titan
Moderator
One of the claims I heard people make before the benchies came out was that DX10 was going to run code faster. I kept asking people how and why? Just because it's a newer version of DX doesn't mean it's going to change the world. A great example of this is the FX series from Nvidia. They could run DX8 code just fine, but choked horribly as soon as you asked them to do DX9. It's more intensive; it will require more power to run.

I think the biggest problem with DX10 is/was the massive shift. DX9 was similar to DX8. DX10 was a whole new thing: a new OS and new drivers. It's going to take time for everyone to catch up. We're now reaching the point where drivers have matured enough not to cause a performance hit running DX10 code. DX11 will continue this.
 


It does, but then people add effects. They don't just do the same thing and give everyone 20 objects at a 90% speed boost; they decide to render 100 objects at the same speed (or slower on some cards). The technology promised speed increases like those we also saw in AC and Anno, but the devs tended instead to think it would be cool to render each frickin' pebble rather than use a pebble-strewn ground texture, which is not impressive except to another dev, and neither improves IQ much nor improves performance. Also, with the half-implementation, a lot was left on the cutting-room floor, especially since the G80's geometry performance was terrible, so geometry shader instancing was rarely implemented for more than simple basic effects.

Just because it's a newer version of DX doesn't mean it's going to change the world. A great example of this is the FX series from Nvidia. They could run DX8 code just fine, but choked horribly as soon as you asked them to do DX9. It's more intensive; it will require more power to run.

That's a terrible example. The FX series was designed primarily as a DX8.1 card, not a DX9 card, and it ran DX9's FP24 and FP32 very slowly. It was an inefficient design that asked coders to go away from the standard and code a wide 2x2 path instead of the longer one DX9 asked for. They changed things outside the spec, not inside it, and it wasn't DX9's fault for the slowdown; it was the FX design's fault, which only made sense if they were the only game in town, or first to market and able to dictate development.

I think the biggest problem with DX10 is/was the massive shift. DX9 was similar to DX8. DX10 was a whole new thing: a new OS and new drivers. It's going to take time for everyone to catch up. We're now reaching the point where drivers have matured enough not to cause a performance hit running DX10 code. DX11 will continue this.

Except in this case the hard part is done: the shift to Vista/7 is well under way, with it on most new PCs, and the DX10+ hardware is already out there. When the DX11 hardware hits, there's no waiting for SP1 like last time, WDDM drivers are mature and really the focus of the IHVs more than legacy, and there's less of a split this time too, going from 10 to 11.

It'll be a while before games fully utilize the resources they have, but that's been the same for every DX and OGL launch; games rarely use even a third of the available new features for at least a year or more.

The main thing is what's worth the time and effort, both to devs and to players. The DX10 stuff in Company of Heroes got a lot of hype, but it was a pretty poor example/implementation of the benefits of DX10: simply increasing the load and then doing it a little faster, resulting in an overall performance drop with little difference in IQ.
With DX11 it could be ground-breakingly good (a step beyond AC, etc.), or it could simply replace code and make it 10% more efficient, or just make things more 'busy', resulting in a 'who cares?!' reaction again.

Tessellation and compute shaders offer the most, IMO, but they also don't offer as wide an install base if fully implemented. You can still do tessellation a la the HD2K-4K on DX11 hardware, just by omitting the hull and domain shader requirements, but you don't get the full speed potential or the same level of geometry amplification. Likewise, you can run compute shaders with a few fewer requirements (like single precision, so that the HD2K and GF8/9 series can participate [the HD3/4K and GTX can do DP], or Fetch4/Gather4, which is already implemented on all ATi HD cards, etc.). So do you implement full-blown tessellation in the first year, or 70% of it? Compute shaders compatible with the ATi HD cards and nV GTXs, or limit it to SM5.0 compute shaders? Finding that balance will be important; both paths are easy, just decisions that need to be made, and they might affect the wow factor. It's still useful, but maybe not as impressive. At least the ease of implementation makes it more feasible to get up and running than before.
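The decision described above, whether to code the full DX11 path or a reduced one for the wider DX10/10.1 install base, can be sketched as a simple capability check. This is a conceptual sketch in Python; the capability names and path labels are made up for illustration, not any real D3D API.

```python
# Hypothetical per-card capability table (values are illustrative).
CAPS = {
    "gf8800":   {"feature_level": 10.0, "gather4": False},
    "hd4870":   {"feature_level": 10.1, "gather4": True},
    "dx11_card": {"feature_level": 11.0, "gather4": True},
}

def pick_tessellation_path(caps):
    """Full hull+domain pipeline on DX11 parts; a reduced path
    (no hull/domain stages, less amplification) on DX10-class parts."""
    if caps["feature_level"] >= 11.0:
        return "hull+domain"
    if caps["feature_level"] >= 10.0:
        return "legacy_tessellator"
    return "none"

def pick_compute_path(caps):
    """SM5.0 compute on DX11 parts; a single-precision, reduced-feature
    path lets DX10-class cards participate with a wider install base."""
    if caps["feature_level"] >= 11.0:
        return "cs_5_0"
    if caps["feature_level"] >= 10.0:
        return "cs_4_x_single_precision"
    return "cpu_fallback"
```

So "70% tessellation" vs. "full blown" is literally one branch in the renderer; the hard part is balancing the install base against the wow factor, as the post says.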

The main thing isn't just using all DX11 has to offer right away; it's getting it out there ASAP so that the clock to usability starts now instead of later. And until it becomes anywhere near a killer must-have app, in the meantime, just like before, these are going to be some of the fastest DX10 and DX9 pieces of hardware out there, which always makes the waiting easier, instead of having to choose between giving up performance and getting features.
Even as bad as the FX and HD2900 series were for their respective generations, they were still faster than their predecessors at the previous DX generation's games in the vast majority of situations, with rare exceptions.
 
Having said all that, TGGA, what are your thoughts, or bets, as to what the devs will actually do? I mean, sure, the new cards will be killer as you said, but to me at least it seems we're already somewhat saturated with power/perf/ability, especially at the high end, in almost any current game. Does that, or won't that, help compel the devs to actually eye-candy it up, and not dress for each other? IOW, not all those worthless pebbles, but something more worthy at the user end?
 
Look at it this way: much like with every other new piece of hardware, it will take time to be supported. For one, anything already in development won't be using DX11. And the few exceptions will only use a few features (a la DX10 Crysis) instead of the whole package.

Secondly, devs code to the midrange, the idea being that medium settings (1280x1024, medium everything, 0x AA) will run on most midrange, and even low-end, setups. Usually, beyond those settings is where the performance loss shows up. In short: even if the cards double performance, that doesn't equate to double the goodies in-game; there will be lag time before games are coded assuming that level of power.

You also have the XP factor: XP, Vista, and 7 can run DX9, but only Vista (presumably) and 7 can run DX10+. Which platform leads to more sales? I'll say it again: until XP falls below 20% market share, DX9 will be the industry standard, and everything else will just be icing on the cake.
 
Any current DX10 game can be easily ported to DX11. If certain devs/companies want to just lie back and enjoy their DX9ness, they may soon be looking for more work, as DX11 will provide all that DX10 was supposed to, and much, much more: eye candy, multithreading, better use of the CPU/GPU blend in a game, with it all being easier to do, thus product out the door faster, with more visual effects and higher fps at the same demands. To me it's a no-brainer, but I'd like more opinions.
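The multithreading benefit mentioned above can be sketched as: worker threads each record their own command list in parallel, and the main thread then executes the lists in a fixed order, similar in spirit to DX11's deferred/immediate context split. This is a conceptual Python sketch with made-up names, not the actual D3D11 API.

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(scene_chunk):
    """Worker: build a command list for its slice of the scene.
    Stands in for recording on a deferred context."""
    return [("draw", obj) for obj in scene_chunk]

def render_frame(scene, workers=2):
    """Record in parallel, then execute the lists in submission order
    on the 'immediate context' (here, just a flat list). The result is
    deterministic because each worker owns a disjoint chunk and the
    lists are replayed in chunk order."""
    chunks = [scene[i::workers] for i in range(workers)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        lists = list(pool.map(record_commands, chunks))
    immediate = []
    for cl in lists:
        immediate.extend(cl)
    return immediate

frame = render_frame([f"mesh{i}" for i in range(8)], workers=2)
```

The win is that the expensive recording work scales across CPU cores, while submission order (which the GPU cares about) stays under the main thread's control.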

Guess what? Under DX11, the midrange just got a lot better. Go from that perspective first.
 


Again, you're making some assumptions. One, you ignore any performance loss from using the advanced shader effects and tessellation. Secondly, the fact that lots of people still use XP (DX9), where these features mean nothing.

And again, I argue the whole CPU thing is way overblown, as the CPU is not structured in such a way as to be a proficient renderer, and that's before you take into account everything else the CPU does in the background.
 
No more assumptions than yours. For every positive I just listed, you have a negative. I'm not ignoring anything. Currently, porting a game over will more than likely just mean MORE fps, as we already see in DX10.1 vs DX10.
As I said, if you just want to "sit it out" as a dev, with LRB waiting in the wings and ATI ready, two operating systems, and more and more people DX10-capable each day (hell, even IGPs are DX10-capable), I don't really see where you think DX9 will be holding it back. As for tessellation and higher advanced shading and their costs, I haven't seen ATI's, nVidia's, or Intel's hardware, so neither of us knows how capable the hardware is, let alone all the benefits and costs in the software, be it DX11 or the game itself.
And again, as I said, porting the current games over is easy, most new games are DX10 or better, and there won't be any DX9 games worth noting that I know of. And since porting won't take advantage of tessellation and a few other things, it will still get the DX10.1 improvements, as well as others; with DX10.1 alone we're already seeing over 20%. Now just add that to the other things being brought by DX11, whether through porting or full-on.
 
^^ Irrelevant. XP has share, meaning it's more profitable to code for XP. As long as XP has market share, DX9 will be the standard, and anything else will just be added on. Halo 2 aside (M$'s attempt to drive Vista sales), there is no Vista-only (DX10+-only) game, and that's because no developer will knowingly eliminate upwards of 40% of the market for a feature.

As for the 10.1 argument, the main difference there is how AA is calculated, so when people crank up the AA levels, you get a bigger speed increase. There's nothing in the DX11 feature set that does anything similar, and I continue to have serious questions about the power needed to perform tessellation. I want to know where you got 20% gains from, though...

Developers code to the lowest commonly supported standard. Previously, there's been substantial phase-out time for the old OS to lose share. 95 supported DX8.0a; 98 supported up to 9.0c. See the phase-out time that existed? That doesn't exist with Vista, and the split API does nothing but add costs to an industry that does nothing but complain about profits.
 
I'm going to ignore Gamer's comments, because they reflect the same misguided view he had in previous threads, which has nothing to do with the actual discussion. If only he stuck to the questions at hand instead of making up his own, and didn't make up stories about implementation under Vista that have been proven wrong many times.



The implementation will definitely come in stages, because it won't just be a question of optimizing similar code (which offers some benefit, but not the largest changes) but also a question of re-thinking how you do things.
Implementing tessellation involves re-thinking how you're going to make the object, not just saying "take this and make it bumpier"; same thing with implementing compute shaders.

Now, memory optimizations like Fetch4/Gather4 and changes in read/write access and thread handling will be benefits that can offer nearer-term gains, but they'll be small increases, like a 10% boost, and less likely to show major 'look at me' benefits.
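For anyone unfamiliar with the Gather4/Fetch4 idea mentioned above, here's a conceptual sketch (plain Python, not shader code; the function names and texel ordering are illustrative, since real hardware defines its own ordering): one fetch returns all four single-channel texels of a 2x2 footprint, where a naive kernel would issue four separate fetches. A classic use is percentage-closer filtering of shadow maps.

```python
def texel(tex, x, y):
    """Single-channel lookup with clamp-to-edge addressing."""
    h, w = len(tex), len(tex[0])
    return tex[min(max(y, 0), h - 1)][min(max(x, 0), w - 1)]

def gather4(tex, x, y):
    """Return the 2x2 quad anchored at (x, y) in one call.
    Row-major order here, for clarity only."""
    return (texel(tex, x, y),     texel(tex, x + 1, y),
            texel(tex, x, y + 1), texel(tex, x + 1, y + 1))

def pcf_compare(quad, depth):
    """Percentage-closer filtering: average of per-texel shadow tests."""
    return sum(1.0 for t in quad if depth <= t) / 4.0

# Tiny 2x2 shadow map: one near occluder (0.2) and three far texels (0.8).
shadow_map = [[0.2, 0.8], [0.8, 0.8]]
lit = pcf_compare(gather4(shadow_map, 0, 0), depth=0.5)  # 0.75: 3 of 4 pass
```

The saving is exactly the kind of modest, broadly applicable win described above: fewer fetch instructions per filter tap, not a new visual effect.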

Compute shaders can give them more flexibility, but not in the way we would like; they'd make more sense used to replace other models (like physics, mapping, AI), which are already separate processes running in separate threads, making them easy to assign to a GPU.

The implementation, while fairly easy for some features, isn't the only step; for maximum benefit you have to rethink how you build the game and replace heavy workloads with lighter ones, be they render or compute.
Adding some features onto existing models is another method, but then once again you're still going to want to start from a lighter base; otherwise you're adding a gigantic workload that may be done 'efficiently' but would still chug, which would be similar to the tack-on job of early DX10.

That's where you'll see the developers (especially the smaller ones) taking their split. Some, who don't push the envelope on visuals, have little to gain from implementing the newer features and will code for the older generation; others will abandon the older hardware and code for their more bleeding-edge crowd, because that's where their benefit lies. So you will see both: people supporting the older hardware and extending their market that way, and people supporting the new features and hardware, extending their market in that direction. The large development houses will be able to straddle the line easily and not have to make that decision if they don't want to. But it's not like anyone is forced to do either by anything but their own desire to pick and choose their market segments, just like online game devs aren't going to stop making games simply because a new browser comes out. They may or may not change their features, depending on whether they see any benefit in it for them and their players.

Initially (fall/winter '09/'10 releases, IMO) you'll see DX11 implemented more as a test of features than as a fundamental part of the game; that will come with later launches (summer-fall 2010, more likely).

The toughest part will be explaining, near-term, what's DX10/10.1 and what's DX11, with the biggest difference initially likely being between DX10 and 10.1 rather than between 10.1 and 11. Likely a lot of time will be shared between DX10.1 and 11, with DX10 as the base feature set for the group that looks to play to the GF8/9GTX/HD2900 crowd, and DX10.1+ playing to the more capable hardware. Later you'll see the gulf shift to things like major differences in tessellation, etc. It's somewhat similar to the slow implementation of DX8.1 and DX9.0c, where some early apps like Morrowind (reflective surfaces) and Far Cry (heat and water lighting effects [instancing was supported on both vendors' hardware, which were beyond VS2.0 spec]) showed some basic potential, but the big noticeable split between new and old didn't occur until later games with smoke and then lighting effects.

The most important thing will be for both the IHVs and M$ to educate the public and developers, and to do more than they did last time to push forward rather than waste resources second-guessing and treading water. There's no benefit to either M$ or the IHVs in doing that, and no benefit to the hardcore gamers that move the industry forward either.
 
I ain't sure if they are DX10-only or just support it, but I'm pretty sure at least one of them is DX10-only.

StormRise is both Vista- and DX10+-only, and the fact that they are not triple-A titles means it doesn't take the funds of a triple-A house to implement either (Creative Assembly is mid-sized, with the Total War series being their biggest title), and speaks against the 'everyone codes to the largest possible market' buffoonery of Gk (yeah, every game supports software rendering and DOS/Win 3.0 and up).

Also, Source does support some DX10 features (some lighting features), but far from all, which is just not doable with their current models (remember, not only did they decide against DX10, they decided against many SM3.0 features too). There's supposed to be an update to the engine coming, but it's unlikely to arrive before next summer.

However, Valve has never been at the leading edge of DX10+; they've been anti-DX10 since the Vista split, which is different from people like Epic and Crytek, who preferred the split to draw a line in the sand. There will always be that split between bleeding edge and legacy; it's similar to the console/PC split, which has just as many challenges and hurdles.