Interview – Lots of graphics tech tradeshows are on the calendar for the coming weeks - Siggraph 2008 in Anaheim, IDF Fall in San Francisco, Games Convention in Leipzig and Nvision 08 in San Jose. Intel will play a big role in three of those shows, and with the first Larrabee details finally out and Nehalem almost ready for showtime, we felt it was time to get an update on Intel’s ray-tracing efforts.
There is no better person at Intel to chat with about ray-tracing than Daniel Pohl, an engineer who is making impressive progress in ray-tracing research – a graphics technology that is likely to heavily influence how graphics and animation are created in the future.
TG Daily: Intel finally disclosed a few details about Larrabee earlier this week. What is Larrabee from your perspective? What are the underlying architecture and the programming model?
Daniel Pohl: Larrabee was primarily built as a rasterizing processor, so you have support for DirectX and OpenGL. But it will also be a freely programmable x86 architecture. That means you could, for example, write your own rasterizer with your own API, a ray tracer, a voxel renderer or a combination of those. You could also use it for non-graphical applications that benefit from parallelization.
TG Daily: How did you build an API to enable ray-tracing? What API is Intel using to showcase its ray-tracing demos?
Daniel Pohl: Some of the important features of a graphics API are the ability to set up your geometry and to define material properties, camera values and a lighting environment. We wrote our own API for that and tuned it through several internal revisions. The shading system uses an HLSL-like syntax that also allows you to shoot new rays from within a shader. With that capability, special effects like glass and mirrors become very easy tasks. Everything you see in the Quake Wars: Ray Traced demo is done through that API. Using it, the programmer does not need to manually multi-thread the rendering or hand-optimize the shading with SSE, as the shading compiler does this automatically. Of course, there are always people who want to go really low-level to squeeze out every last percent of performance. For that case, our API can also be bypassed to directly access all internal structures. Let me be clear that we are doing this for research reasons, so this API may never see the public light.
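The idea of shooting a new ray from inside a shader is what makes mirrors and glass "easy" in a ray tracer. The following is a minimal sketch of that pattern; Intel's actual API was never published, so all names here (`trace`, `mirror_shader`, etc.) are illustrative assumptions, not their real interface.

```python
# Hypothetical sketch: a mirror material that spawns a secondary ray.
# 'trace' stands in for the renderer's ray-cast entry point; calling it
# from inside a shader is what yields physically exact reflections
# instead of the approximations used by rasterizers.

def reflect(d, n):
    """Reflect direction d about surface normal n (both 3-tuples)."""
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

def mirror_shader(hit_point, incoming_dir, normal, trace,
                  depth=0, max_depth=3):
    """Return the color seen by a reflected ray from the hit point."""
    if depth >= max_depth:             # cut off infinite mirror-in-mirror recursion
        return (0.0, 0.0, 0.0)
    r = reflect(incoming_dir, normal)
    return trace(hit_point, r, depth + 1)
```

The depth cutoff mirrors what any recursive ray tracer needs: two facing mirrors would otherwise recurse forever.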
TG Daily: There seems to be a lot of talk in the media about ray-tracing and all of the advantages it can bring. Is ray-tracing the holy grail of rendering?
Daniel Pohl: Personally, I’ve been researching real-time ray-tracing for games since 2004, back then on Quake 3: Ray Traced (www.q3rt.de). I figured out quite fast that there are a lot of benefits in using that technology. For example, for reflections and refractions you are simulating rays exactly as in nature, so you get high-quality results in an easy way. Looking several years ahead, I can’t imagine that game developers will still want to use the approximations for those effects that are in use today.
In general, the usability of ray-tracing for games also depends on what speed-ups we can expect from research into faster algorithms and on how hardware will evolve.
TG Daily: There seems to be a prejudice that ray-tracing can only be used for scene rendering... for what other purposes can it actually be used?
Daniel Pohl: Once you have the ability to shoot a ray efficiently through a dynamic scene, you can use that for even more. In my master’s thesis in 2006 at the Universities of Erlangen and Saarland, I used the same rays that were used for rendering to do collision detection. Once you shoot many millions of rays per image, the additional collision detection rays come almost for free. That also means you don’t need to keep separate rendering and collision detection structures in memory; you can use the same ones.
Another use is AI. In order to detect visibility from one player to another, you can use rays to detect blocking objects. You can also use that visibility determination for path finding. It is interesting to note that rays are already used for collision detection and AI in some current games, but this is sometimes not done against the complete dynamic scene, or it is done with only a few rays. From talking to game developers, we have already received feedback that they would like to be able to spend more rays on those queries.
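The point about reusing rendering infrastructure for collision detection and AI can be sketched simply: the same intersection routine (and, in a real tracer, the same acceleration structure) answers a line-of-sight query. The scene representation below (spheres as obstacles) and the function names are my own assumptions for illustration.

```python
# Illustrative sketch: an AI visibility query built on the same
# ray-cast the renderer uses. Real engines would cast against the
# full dynamic scene via its acceleration structure; here the "scene"
# is just a list of (center, radius) spheres.

import math

def ray_sphere_t(origin, direction, center, radius):
    """Distance along a normalized ray to the first sphere hit, or None."""
    oc = tuple(o - c for o, c in zip(origin, center))
    b = 2.0 * sum(d * o for d, o in zip(direction, oc))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None                     # ray misses the sphere entirely
    t = (-b - math.sqrt(disc)) / 2.0
    return t if t > 1e-6 else None      # ignore hits behind the origin

def can_see(a, b, obstacles):
    """Cast one ray from a to b; return True if nothing blocks it."""
    diff = tuple(bi - ai for ai, bi in zip(a, b))
    dist = math.sqrt(sum(d * d for d in diff))
    direction = tuple(d / dist for d in diff)
    for center, radius in obstacles:
        t = ray_sphere_t(a, direction, center, radius)
        if t is not None and t < dist:  # blocker sits between a and b
            return False
    return True
```

A collision query is the same cast with a different question: instead of "is b visible?", you ask "how far can I move along this direction before hitting something?" and read off `t`.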
TG Daily: What kind of ray-tracing research has to be done before ray-tracing becomes interesting for commercial game development?
Daniel Pohl: Creating higher image quality even faster. That requires smart anti-aliasing algorithms, a level-of-detail mechanism without switching artifacts, particle systems that also work in reflections, a fast soft-shadowing algorithm and adaptation to upcoming hardware architectures. We have plenty of topics to keep us busy. You can see more about our work at Siggraph in my talk at the Intel booth theater on August 12 and 14. We will also have our ray-tracing booth at IDF in San Francisco from August 19-21.
TG Daily: Back in June at Research@Intel Day, you used a Caneland platform system with a 16-core Tigerton setup to demonstrate Ray-traced ET: Quake Wars. That was quite a powerful system – any reason why Tigerton was the hardware of choice?
Daniel Pohl: Our ray-tracing research group is part of Intel’s Tera-Scale project, where we target future architectures that consist of tens, hundreds and even thousands of cores. The Caneland platform, with a total of 16 cores in a single machine, gives us a good glimpse of what we can expect in future desktop systems. It allows us to study threading behavior and optimize for parallelization. So far we have seen almost linear scaling of the frame rate with the number of cores. The more cores, the better for ray-tracing!
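The near-linear scaling he describes comes from ray tracing being embarrassingly parallel: every pixel's primary ray is independent, so the frame can be split into row bands with no shared mutable state. Here is a minimal sketch of that partitioning; the scene and shading are stubbed out, and in CPython the threads mainly illustrate the structure, since real speedup would come from native ray kernels or separate processes per core.

```python
# Sketch of tile/band parallelism in a ray tracer: split the image
# into row bands, render each band independently, reassemble.
# trace_pixel is a stand-in for a full per-pixel ray trace.

from concurrent.futures import ThreadPoolExecutor

WIDTH, HEIGHT = 64, 48

def trace_pixel(x, y):
    """Placeholder shading: returns a deterministic gray value."""
    return (x * y) % 256

def render_rows(y0, y1):
    """Render rows [y0, y1); touches no state shared with other bands."""
    return [[trace_pixel(x, y) for x in range(WIDTH)] for y in range(y0, y1)]

def render(num_cores):
    """Split the frame into num_cores row bands and render them in parallel."""
    band = (HEIGHT + num_cores - 1) // num_cores
    bounds = [(y, min(y + band, HEIGHT)) for y in range(0, HEIGHT, band)]
    with ThreadPoolExecutor(max_workers=num_cores) as pool:
        bands = pool.map(lambda b: render_rows(*b), bounds)
    return [row for b in bands for row in b]
```

Because the bands never interact, doubling the worker count roughly halves the wall-clock time on real hardware, which is the linear scaling the interview refers to.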
TG Daily: We noticed significant performance differences between the demos - while the Quake 4 RT code ran at 90 fps and above, Quake Wars dipped to 14-29 fps. What is the reason for this performance penalty?
Daniel Pohl: Besides the fact that Quake Wars is a newer and therefore more complex game, we also covered a lot more special effects. We have accurate shadows, reflections, glass and a really cool-looking water implementation. Besides that, we now have better texture filtering and a freely programmable shading system with an HLSL-like syntax. Also, we showed a decent amount of dynamic objects. Dynamic objects have often been considered a bottleneck in a ray-tracing environment due to the need to update acceleration structures. You might see more about this topic soon ...
This year, Intel will promote graphics at three consecutive tradeshows. For a company that has not been able to produce decent drivers for its integrated products, this kind of progress in the discrete segment is encouraging.
It is too early to say what impact Larrabee will have on the market, but the company is investing a sizable amount of money into this project. In fact, Intel is betting the farm on visual computing: The future will bring Larrabee’s in-order cores paired with Intel’s traditional out-of-order cores. Sort of a fusion.
Intel’s research group is pushing ray-tracing development on Intel’s chips, but we should not forget that there are companies such as JulesWorld, a small company from Los Angeles that is offering ray-tracing technology on graphics cards of today, not tomorrow.
We’re living in a very exciting time in the history of computing. The technology advancements achieved over the next five years are likely to shape the future of communications, entertainment and computing.