Some of you reading this post may remember ray-tracing programs from the 1990s, such as POV-Ray, that used your general-purpose processor to render 3D scenes described with vector graphics (like VRML). Those old programs rendered 800x600 graphics at about 1 frame every 2 to 3 hours on an old 486 processor.
In the August 2006 issue of Scientific American, there was an article on RPUs: Ray-Trace Processing Units. Specialized hardware/firmware that executes an optimized ray-trace rendering algorithm fast enough to simulate real-time interaction with 3D scenes at clock speeds as low as 66 MHz.
So what is the point, and why should anyone care? Ray-trace-based rendering offers photo-realism, decreased development time for developers (fewer hacks to make things look good while running smoothly), and probably most importantly: the algorithm provides automatic culling of hidden objects. While ray tracing is computationally intense, none of the time spent computing is wasted on polygons that get thrown away.
Modern GPU techniques treat all polygons in a scene (at least initially) as equal. Culling algorithms will deduce what can be tossed (normally because an object is sitting behind another object), but this still requires visiting every polygon. When you quadruple the number of polygons in a scene, you also quadruple the number of polygons that are tossed... thereby increasing the amount of time "wasted" by the GPU.
As a result, in order for a GPU to "smoothly" render 2 times as many polygons, it needs to be 4 times as fast. At least, this is the understanding I gained from the article, as explained by Gordon Stoll of Intel. This may explain why Quad SLI doesn't improve graphics "that" much.
Modern game engines reduce the impact of culling by only making a certain range visible to the player and by abstracting the interiors of objects into separate areas that get loaded on entry. (That's why you can't look out of a window in a game like Oblivion: the interior is a separate area disconnected from the rest of the game world.)
However, if you were to run across a true-to-life modeled object, such as a car with all its engine detail present, it would bring a modern graphics processor to its knees. An RPU, on the other hand, would handle it just fine, since the algorithm already culls all the non-visible parts.
This isn't the first post about this subject. I searched the Internet for RPU and related terms and found most of the hits in forums such as this one, where people like me brought up the subject from the very same Scientific American article.
So why am I re-posting an already-discussed subject? Well, for one, I was hoping that if enough posts appear on the subject, maybe someone at Tom's Hardware would take note and write a more detailed technology article about it. Since graphics card makers are talking about integrating physics chips, and RPUs are slow primarily because of the intense physics calculations involved, I have to wonder if a GPU with integrated physics might offer RPU features. RPU engines might be a way for physics cards to have a visible impact on the gaming experience (finally justifying their cost).
The other reason is to address the impact of polygons on RPUs. I started writing this post with a question to the community: "Does an increase in polygons affect RPU rendering as much as GPU rendering?" In writing this post, I did some thinking and some research, and I THINK I answered my own question (note the "think"). I will now state it in hopes that someone will correct me if I am incorrect.
RPU scenes can have mesh-based objects just as GPU scenes do (mesh = lots of small, flat polygons). If I were to limit my answer to mesh-based objects, which is what all modern games use, the answer would be that RPUs are not affected as much as GPUs, for the reasons stated above (culling has less impact). And if you had enough data and enough time, you could probably map out the exact computational point where RPUs will surpass GPUs due to culling inefficiency. It may be within a year, or it may be 10 years. (This was something I was hoping a Tom's Hardware article might address.)
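That crossover point can at least be sketched with a toy back-of-envelope model. The constants below are made up for illustration, not taken from the article or any benchmark; the shape of the curves is the point. A rasterizer touches every polygon at least once (including the ones it later culls), so its cost grows linearly with polygon count, while a ray tracer using an acceleration structure (such as a BVH) does work per ray that grows only with roughly log2 of the polygon count:

```python
import math

def raster_cost(polygons, per_poly=1.0):
    # Toy model: a rasterizer visits every polygon at least once,
    # even those later culled, so cost grows linearly.
    return per_poly * polygons

def raytrace_cost(polygons, pixels, per_ray=5.0):
    # Toy model: with an acceleration structure (e.g. a BVH),
    # each ray traverses roughly log2(n) nodes, regardless of how
    # many polygons end up hidden.
    return per_ray * pixels * math.log2(polygons)

pixels = 800 * 600
for n in (10**4, 10**5, 10**6, 10**7, 10**8):
    r, t = raster_cost(n), raytrace_cost(n, pixels)
    winner = "raytrace wins" if t < r else "raster wins"
    print(f"{n:>10} polygons  raster={r:.3g}  raytrace={t:.3g}  {winner}")
```

With these invented constants, rasterization wins at low polygon counts and ray tracing pulls ahead somewhere before 100 million polygons; real hardware constants would move the crossover, but the linear-versus-logarithmic growth is what makes a crossover inevitable.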
However, RPUs also support vector-based shapes. In my scene, I can say "there is a white sphere at location x,y,z with radius 10." At rendering time, the light rays are computed based on the number of pixels on my screen (the resolution). Just as a circle is not truly a circle on a pixel-based monitor, the sphere's detail (its equivalent polygon count) would depend on the resolution of my monitor. Increasing the resolution would therefore have a much more significant impact on performance than it does with modern GPUs and mesh-based models. While these vector-based 3D environments would be harsh on RPUs, GPUs wouldn't even touch them.
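A minimal sketch of that idea (my own toy example, not the RPU's actual algorithm): the sphere is stored as nothing but a center and a radius, and the work done per frame is one ray-sphere intersection test per pixel, so cost scales directly with width x height rather than with any mesh size.

```python
import math

def hit_sphere(origin, direction, center, radius):
    # Solve |origin + t*direction - center|^2 = radius^2
    # for the nearest positive t (quadratic formula).
    oc = tuple(o - c for o, c in zip(origin, center))
    a = sum(d * d for d in direction)
    b = 2.0 * sum(o * d for o, d in zip(oc, direction))
    c = sum(o * o for o in oc) - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                      # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)
    return t if t > 0 else None

# One ray per pixel: work scales with width * height,
# while the sphere itself is just (center, radius): no mesh at all.
width, height = 80, 60
center, radius = (0.0, 0.0, -5.0), 2.0
hits = 0
for y in range(height):
    for x in range(width):
        # Map the pixel to a direction through a simple pinhole camera.
        dx = (x + 0.5) / width * 2 - 1
        dy = 1 - (y + 0.5) / height * 2
        if hit_sphere((0.0, 0.0, 0.0), (dx, dy, -1.0), center, radius) is not None:
            hits += 1
print(f"{hits} of {width * height} rays hit the sphere")
```

Double the resolution and you cast four times the rays, which is exactly the resolution sensitivity described above.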
Here's hoping we see RPU support from a major vendor within the next 2 years.
Nice blurb. Ray tracing is cool, a lot of the work going into it is very interesting, and it seems to be not too far off. Some effects will be harder to produce using ray tracing, but once perfected, the overall look should be far more realistic, as you point out.
I suspect in the near future we'll see more. And while physics seems quite easily done by the VPU, I would agree that so far ray tracing seems somewhat limited in its possibilities on graphics chips, unless DX10's interactivity helps improve that. If anything, right now I can see a greater need for an RPU than a PPU, but the push factor for that will be killer apps that can sell the idea to the general public.
I also prefer the term VPU (less baggage than GPU), which, with all the additions of physics, audio, etc. to the graphics core, seems more appropriate for the visual aspect. But then again, I've always preferred VPU as a vestige from the VIDEO card days.
I agree on wanting to see more "coverage" of this subject. Having taken a few classes on graphics back in college (ray tracing was covered quite well), and given what little I know of it from them, I would say we are probably closer to the 10-year mark you mentioned. It would still be nice to see it done dynamically and in real time on my home system one day.
Don't forget volume rendering too! Compared to it, ray tracing as you describe it and standard GPU graphics are actually quite similar: volume rendering needs no polygon descriptions at all and has no limits on scene complexity for a given resolution. As an analogy to the 2D realm:
Volume Rendering is to Ray Tracing and GPU graphics
as Raster Graphics is to 2D Vector Graphics.
There are some volume rendering hardware cards available, but they are pretty expensive right now. GPUs, on the other hand, can perform weaker volume rendering and ray tracing as well.
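To make the analogy concrete, here is a toy volume-rendering sketch (the density field is entirely made up for illustration): each ray marches through a scalar field in fixed steps and accumulates opacity via Beer-Lambert absorption. There are no polygons anywhere, and the cost depends only on the ray count (resolution) and step count, not on the field's complexity.

```python
import math

def density(x, y, z):
    # Hypothetical scalar field: a soft spherical blob at the origin
    # whose density falls off linearly with distance.
    r = math.sqrt(x * x + y * y + z * z)
    return max(0.0, 1.0 - r)

def march(origin, direction, steps=64, step_size=0.1):
    # Accumulate absorption along the ray: per-ray cost is fixed by
    # the step count, regardless of how "complex" the field is.
    transparency = 1.0
    for i in range(steps):
        t = i * step_size
        p = tuple(o + t * d for o, d in zip(origin, direction))
        transparency *= math.exp(-density(*p) * step_size)  # Beer-Lambert
    return 1.0 - transparency  # accumulated opacity seen by this ray

center_ray = march((0.0, 0.0, -3.0), (0.0, 0.0, 1.0))  # passes through the blob
edge_ray = march((2.0, 0.0, -3.0), (0.0, 0.0, 1.0))    # passes beside it
print(f"opacity through blob: {center_ray:.3f}, past it: {edge_ray:.3f}")
```

The dedicated cards mentioned above essentially hardware-accelerate this sampling loop; on a GPU the same idea can be approximated with texture lookups, which is the "weaker" form referred to.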