Chris: Exciting, okay cool. Obviously you have a direct line to these hardware vendors. What do you want to see in the next generation of GPUs that’ll make your job easier?
Johan: That's a fun question. We have a pretty long list of all the stuff we typically go through and talk to them about, but one very concrete thing we'd like to see, and Intel has actually already done this on their hardware, is what they call PixelSync, which is their method of synchronizing the graphics pipeline in a very efficient way on a per-pixel basis. You can do a lot of cool techniques with it, such as order-independent transparency for hair rendering or for foliage rendering. And they can do programmable blending, where you have full control over the blending instead of using the fixed-function units in the GPU. There are a lot of cool techniques that can be enabled by such a programmability primitive, and I would like to see AMD and Nvidia implement something similar as well. It's also very power-efficient and efficient overall on Intel's hardware, so I guess the challenge for Nvidia and AMD would be whether they can do that efficiently, because they have quite different architectures. So that's one thing. What other components do we have? Usually when the architects are over we have these meetings of just sitting and talking for 14, 15 hours, or an entire day, about everything.
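The order-independent transparency that per-pixel synchronization enables can be sketched on the CPU side. This is a hedged illustration, not Intel's actual mechanism: each pixel collects its transparent fragments in arbitrary submission order, sorts them by depth, and blends back-to-front with a custom (i.e. "programmable") blend function. All names and values here are made up for the example.

```python
# CPU-side sketch of what per-pixel synchronization (e.g. Intel's PixelSync)
# makes practical on a GPU: order-independent transparency. Fragments arrive
# in arbitrary submission order; each pixel keeps a small list, sorts it by
# depth, and blends back-to-front. Structure is illustrative only.

def blend_over(dst, src, alpha):
    """Standard 'over' operator: src composited on top of dst."""
    return tuple(alpha * s + (1.0 - alpha) * d for s, d in zip(src, dst))

def resolve_pixel(fragments, background=(0.0, 0.0, 0.0)):
    """Sort this pixel's transparent fragments far-to-near, then blend.

    fragments: list of (depth, (r, g, b), alpha); larger depth = farther.
    """
    color = background
    for depth, rgb, alpha in sorted(fragments, key=lambda f: f[0], reverse=True):
        color = blend_over(color, rgb, alpha)
    return color

# Two window panes submitted out of order; the per-pixel sort fixes it.
frags = [(1.0, (1.0, 0.0, 0.0), 0.5),   # near red pane
         (2.0, (0.0, 0.0, 1.0), 0.5)]   # far blue pane
print(resolve_pixel(frags))
```

On real hardware the per-pixel fragment list and sort would live in shader-accessible memory, with the synchronization primitive guaranteeing that fragments for the same pixel are processed without races.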
Chris: That'd be a fun conversation to sit in on.
Johan: Yeah, it's really fun. One thing we want to enable, and I mentioned this a bit during my talk last week about Mantle, is having the GPU execute in a little bit more of a heterogeneous fashion: being able to run multiple compute shaders in parallel with your graphics work, and ideally having more collaboration between the CPU and GPU. We can do things like that on the consoles because they're integrated machines, so the CPU and GPU are on the same die. On the PC you are seeing it more and more with the APUs and Intel's Ultrabooks, which also have integrated graphics.
I want to see more of this type of collaboration between CPU and GPU to drive many more advanced rendering techniques. For example, once we've rendered the Z-buffer for a scene, we know the depth of every single pixel in our frustum, and based on that information we can do things like shadow maps that are adapted to cover only the area they actually need to. Typically you don't have that knowledge; on the CPU you prepare data that the GPU will render a few frames later, so you have to brute force a lot of things. You have to send out a lot of work and you can't really be reactive. With many of the things we can do with Mantle, and I think going forward with closer CPU and GPU interaction in general, we can use a lot more clever techniques and fewer brute-force ones. That's a pretty frequent topic when we talk with the architects.
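The depth-driven shadow fitting described above can be sketched in a few lines. This is a simplified, hypothetical take (in the spirit of techniques like sample distribution shadow maps, not Frostbite's actual implementation): reduce the rendered Z-buffer to the min/max depth actually visible, then clamp the shadow depth range to that instead of brute-forcing the whole scene range. A linear depth buffer is assumed for simplicity.

```python
# Hedged sketch of depth-aware shadow fitting: after the Z-buffer is
# rendered, reduce it to the visible depth bounds and tighten the shadow
# depth range accordingly. Assumes normalized, *linear* depth in [0, 1],
# with 1.0 meaning "far plane / sky". Values are illustrative.

def depth_bounds(z_buffer):
    """Min/max reduction over visible depths (far-plane/sky pixels excluded)."""
    visible = [z for row in z_buffer for z in row if z < 1.0]
    if not visible:
        return (0.0, 1.0)
    return (min(visible), max(visible))

def fit_shadow_range(z_buffer, scene_near, scene_far):
    """Map normalized depth bounds to a tight world-space depth range."""
    zmin, zmax = depth_bounds(z_buffer)
    span = scene_far - scene_near
    return (scene_near + zmin * span, scene_near + zmax * span)

zbuf = [[1.0, 0.2, 0.3],
        [0.25, 1.0, 0.6]]                  # 1.0 = sky
print(fit_shadow_range(zbuf, 0.1, 100.0))  # far tighter than (0.1, 100.0)
```

The point of doing this on the GPU, in the same frame, is exactly the reactivity Johan describes: the reduction runs right after the Z-pass, so the shadow pass that follows uses bounds from this frame rather than CPU-prepared data that is several frames stale.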
Chris: Sure, so I also want to know about features that'll make the biggest difference to realism, but in my previous question I was talking about features that'd make your job easier. So as a follow-up to that one, are there different features you want to see that'll improve the experience an end-user has when they play your games from the perspective of realism?
Johan: Yeah, so realism. I think there are a few things, well, I guess this goes in both categories. Another thing that I haven't mentioned yet is that Nvidia has been doing a lot of good work with nested data parallelism, or dynamic parallelism as I think they call it, in their big Kepler cores, where you can run compute work that is nested and can interact in very interesting ways. That enables a lot of other programmability mechanisms, with nice performance.
For realism specifically, we have some challenges going forward, because there are so many rendering techniques that we implement through just standard rasterization and post-processes. These things will start to break down more and more as scenes get more complex and we want more transparent surfaces in them. Doing standard rasterization and then trying to apply depth of field and motion blur correctly as post-processes is very, very limited. Let's say you have some transparency, maybe two or three layers of windows together with some particle effects, and then you want to do depth of field on that scene afterwards, but all you have is your depth buffer. It doesn't know about these transparent surfaces, or if it knows about them, it doesn't know what's behind them.
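The failure mode Johan describes can be made concrete with a thin-lens circle-of-confusion calculation, the quantity a post-process depth of field typically derives per pixel from the depth buffer. In this hypothetical example, a window pane is exactly at the focus distance (it should be sharp), but because it never wrote to the depth buffer, the blur is computed from the opaque wall behind it. All parameters are invented for illustration.

```python
# Why post-process depth of field breaks on transparency: blur radius
# (circle of confusion) comes from the single depth stored per pixel, so a
# transparent pane that wrote no depth inherits the blur of the opaque
# surface behind it. Thin-lens model; all values are illustrative.

def circle_of_confusion(depth, focus_dist, focal_len, aperture):
    """Thin-lens CoC diameter (same units as focal_len) at a given depth."""
    return abs(aperture * focal_len * (depth - focus_dist)
               / (depth * (focus_dist - focal_len)))

focus, f, a = 5.0, 0.05, 0.01   # focus at 5 m, 50 mm lens, 10 mm aperture
window_depth = 5.0              # pane exactly in focus...
wall_depth = 20.0               # ...but only the wall wrote the Z-buffer

coc_correct = circle_of_confusion(window_depth, focus, f, a)  # 0.0: sharp
coc_applied = circle_of_confusion(wall_depth, focus, f, a)    # > 0: blurred
print(coc_correct, coc_applied)
```

The pane gets the wall's nonzero blur instead of staying sharp, which is exactly why Johan argues these effects need to become an integral part of rendering rather than a post-process over a single depth value.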
I think a challenge there is figuring out what a good, efficient future graphics pipeline looks like, both for mobile, which may have different constraints, and for desktop, because the rasterization pipeline is really quite efficient but has its limitations. There are various alternatives, such as micro-triangle or micro-polygon rasterizers, or stochastic rasterization, where depth of field and motion blur can be a more integral part of your rendering. This of course has a lot of other potential drawbacks or difficulties in how these things interact, but you get to the point where more of these techniques can interact freely. I think that can really bring a lot more extra realism. At least I'm talking purely about what the GPU vendors and we can do together on that.
There's a lot of stuff that we can do just with our engine also, and that we are doing going forward. Things like more physically based rendering and shading, where we try to use real-life measurements of light sources and materials and represent those accurately within the game. Typically, in previous games and engines, you look at a reference and then try to recreate it, but you don't really measure, so you don't know the ranges; there's nothing to truly compare against. With the type of games we're doing now, with very big, complex content and gameplay, it gets more important to have that frame of reference, something real, and to try to recreate it.
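One concrete form the "measured reference" idea takes is describing lights in real photometric units instead of arbitrary 0-to-1 intensities, so content can be sanity-checked against real-world values. The sketch below is a generic illustration of that principle, not Frostbite's light model: an isotropic point light's luminous flux in lumens is converted to intensity in candela, and illuminance at a distance follows the inverse-square law.

```python
# Hedged sketch of physically based light units: an ~800 lm bulb (roughly a
# 60 W incandescent) expressed in photometric units, so the result can be
# compared against real measurements. Generic formulas, not any engine's API.
import math

def point_light_intensity(lumens):
    """Luminous intensity (candela) of an ideal isotropic point light."""
    return lumens / (4.0 * math.pi)

def illuminance(lumens, distance_m):
    """Illuminance (lux) on a surface facing the light; inverse-square law."""
    return point_light_intensity(lumens) / distance_m ** 2

print(round(illuminance(800.0, 2.0), 1))  # roughly 15.9 lux at 2 m
```

Because lux is something you can measure with an off-the-shelf light meter, artists and engineers get exactly the comparable frame of reference Johan describes: if the in-game value disagrees with the measured one, the content or the model is wrong.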
- Chris Angelini And Johan Andersson Talk Battlefield 4 And Frostbite 3