
Limitations Of Ray Tracing

When Will Ray Tracing Replace Rasterization?

Now that we've made a point of deflating certain myths associated with ray tracing, let's look at the real issues that the technique involves.

We'll start with the major problem associated with the rendering algorithm: its slowness. Of course, there are those who'll say that that's not really a problem, since, after all, ray tracing is highly parallelizable and with the number of processor cores increasing each year, we should see nearly linear increases in ray tracing performance. And what's more, research on the optimizations that can be applied to ray tracing is still in its infancy. When you look back at the earliest 3D cards and compare them to what's available today, you might tend to be optimistic.
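
That parallelism claim is easy to illustrate. Here is a minimal C++ sketch of the per-pixel independence (trace_pixel is a hypothetical stub, not code from any real renderer): every primary ray can be computed without knowledge of its neighbors, so the work splits across cores with essentially no synchronization.

```cpp
#include <vector>

struct Color { float r, g, b; };

// Stand-in for a full ray tracer: "traces" the primary ray through pixel (x, y).
// Stubbed with a gradient so the sketch compiles on its own.
static Color trace_pixel(int x, int y, int w, int h) {
    return {x / (float)w, y / (float)h, 0.5f};
}

// Each pixel depends on no other pixel, so the outer loop splits cleanly
// across any number of cores; this independence is what the "nearly linear
// scaling" argument rests on. (Compile with -fopenmp to parallelize.)
std::vector<Color> render(int w, int h) {
    std::vector<Color> image(w * h);
    #pragma omp parallel for
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            image[y * w + x] = trace_pixel(x, y, w, h);
    return image;
}
```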

However, that point of view misses an essential point: the real interest of ray tracing lies in the secondary rays. In practice, visibility calculation using primary rays doesn't really represent any improvement in image quality over a classic Z-buffer algorithm. But the problem with these secondary rays is that they have absolutely no coherence. From one pixel to another, completely different data can be accessed, which cancels out all the usual caching techniques that are essential for good performance. That means that the calculation of secondary rays becomes extremely dependent on the memory subsystem, and in particular on latency. This is the worst possible scenario, because of all memory characteristics, latency is the one that has made the least progress in recent years, and there's no indication that's likely to change any time soon. It's easy enough to increase bandwidth by using several chips in parallel, whereas latency is inherent in the way memory functions.

On a graphics card, latency decreases much more slowly than bandwidth increases: when the latter improves by a factor of 10, latency improves concurrently only by a factor of two.
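
To see why secondary rays lose coherence, consider the simplest secondary ray, a mirror reflection. In this self-contained sketch (the Vec3 type and the two normals are illustrative assumptions, not scene data from any real tracer), two neighboring pixels receive the same incoming ray but hit slightly different surface normals, and their bounce directions end up aimed at completely unrelated parts of the scene:

```cpp
#include <cstdio>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)    { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  scale(float s, Vec3 v) { return {s * v.x, s * v.y, s * v.z}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Mirror reflection: the bounce direction is dictated by the surface normal
// at the hit point, not by where the pixel sits on screen.
static Vec3 reflect(Vec3 incident, Vec3 normal) {
    return sub(incident, scale(2.0f * dot(incident, normal), normal));
}

int main() {
    Vec3 down = {0.0f, -1.0f, 0.0f};     // same incoming ray for two neighboring pixels
    Vec3 n1 = {0.0f, 1.0f, 0.0f};        // flat floor under the first pixel
    Vec3 n2 = {0.7071f, 0.7071f, 0.0f};  // tilted bump under the pixel next door
    Vec3 r1 = reflect(down, n1);         // (0, 1, 0): bounces straight up
    Vec3 r2 = reflect(down, n2);         // (1, 0, 0): shoots off sideways
    // The two reflection rays now traverse unrelated parts of the scene, so the
    // triangles and acceleration-structure nodes they touch share no cache lines.
    std::printf("r1 = (%g, %g, %g)\nr2 = (%g, %g, %g)\n",
                r1.x, r1.y, r1.z, r2.x, r2.y, r2.z);
    return 0;
}
```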

The reason for the success of GPUs is that building hardware dedicated to rasterization proved extremely effective. With rasterization, memory access is coherent, regardless of whether it involves pixels, texels, or vertices. So small caches coupled with massive bandwidth were ideal for achieving excellent performance. Bandwidth is expensive to provide, but at least increasing it is a feasible solution if the economics justify it. Conversely, there just aren't any solutions for accelerating memory access with secondary rays. That's one reason why ray tracing will never be as efficient as rasterization.
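
For contrast, a toy half-space triangle rasterizer makes that coherent access pattern visible. This is a sketch under simplifying assumptions (counter-clockwise vertex winding, no clipping, no depth test), nothing like production GPU hardware, but the memory behavior is the point: pixels are visited row by row, so framebuffer writes land on consecutive addresses.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Edge function: positive when (px, py) lies to the left of edge (ax,ay)->(bx,by).
static int edge(int ax, int ay, int bx, int by, int px, int py) {
    return (bx - ax) * (py - ay) - (by - ay) * (px - ax);
}

// Half-space triangle fill over the bounding box. Pixels are visited row by
// row, so writes hit consecutive addresses: exactly the pattern that small
// caches, prefetchers, and wide memory buses are built for.
void rasterize(std::vector<uint32_t>& fb, int width,
               int x0, int y0, int x1, int y1, int x2, int y2, uint32_t color) {
    int minx = std::min({x0, x1, x2}), maxx = std::max({x0, x1, x2});
    int miny = std::min({y0, y1, y2}), maxy = std::max({y0, y1, y2});
    for (int y = miny; y <= maxy; ++y)
        for (int x = minx; x <= maxx; ++x)
            if (edge(x0, y0, x1, y1, x, y) >= 0 &&
                edge(x1, y1, x2, y2, x, y) >= 0 &&
                edge(x2, y2, x0, y0, x, y) >= 0)
                fb[y * width + x] = color;   // sequential writes within each row
}
```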

Another intrinsic problem with ray tracing has to do with anti-aliasing (AA). The rays being shot are simple mathematical abstractions with no actual size. Consequently, the test for intersection with a triangle returns a simple Boolean result; it provides no fractional coverage information, nothing like "this triangle covers 40% of the pixel." The direct consequence of that is aliasing.
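
The classic Möller-Trumbore intersection test shows the point directly; note the return type, which can only say hit or miss. (A self-contained sketch with a minimal Vec3 rather than a real math library.)

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Moller-Trumbore ray/triangle test. The answer is all-or-nothing: the ray
// either hits the triangle or it doesn't, with no notion of partial coverage.
// That is why point sampling with rays aliases.
bool intersects(Vec3 orig, Vec3 dir, Vec3 v0, Vec3 v1, Vec3 v2) {
    const float eps = 1e-7f;
    Vec3 e1 = sub(v1, v0), e2 = sub(v2, v0);
    Vec3 p = cross(dir, e2);
    float det = dot(e1, p);
    if (std::fabs(det) < eps) return false;   // ray parallel to triangle plane
    float inv = 1.0f / det;
    Vec3 t = sub(orig, v0);
    float u = dot(t, p) * inv;                // first barycentric coordinate
    if (u < 0.0f || u > 1.0f) return false;
    Vec3 q = cross(t, e1);
    float v = dot(dir, q) * inv;              // second barycentric coordinate
    if (v < 0.0f || u + v > 1.0f) return false;
    return dot(e2, q) * inv > eps;            // hit must lie in front of the origin
}
```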

Whereas with rasterization it was possible to dissociate shading frequency from sampling frequency, it's not that simple with ray tracing. Several techniques have been studied to try to solve this problem, such as beam tracing and cone tracing, which give the rays thickness, but their complexity has held them back. So the only technique that gets good results is to shoot more rays than there are pixels, which amounts to supersampling (rendering at a higher resolution). Needless to say, that technique is much more computationally expensive than the multisampling used by current GPUs.
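
A sketch of that brute-force approach (shade is a hypothetical stand-in for a full per-ray trace, stubbed so the snippet stands alone): n jittered rays per pixel, each paying the full tracing cost, averaged at the end. Multisampling, by contrast, shades once and reuses the result across coverage samples.

```cpp
#include <cstdlib>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical single-ray shader: returns the color seen along the ray through
// image-plane coordinates (u, v). Stubbed with a gradient for self-containment.
static Color shade(float u, float v) { return {u, v, 0.0f}; }

// Supersampling: n rays per pixel, each jittered inside the pixel footprint,
// averaged together. Every sample runs the full trace, so cost scales with n.
Color supersample_pixel(int x, int y, int n) {
    Color sum = {0, 0, 0};
    for (int i = 0; i < n; ++i) {
        float jx = std::rand() / (float)RAND_MAX;   // random offset in [0,1)
        float jy = std::rand() / (float)RAND_MAX;
        Color c = shade(x + jx, y + jy);
        sum.r += c.r; sum.g += c.g; sum.b += c.b;
    }
    return {sum.r / n, sum.g / n, sum.b / n};
}
```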

Comments
  • IzzyCraft, July 22, 2009 6:55 AM (score -1)
    Greed? You give an inch, they take a mile? Very pessimistic conclusion, although it helps drive the industry, so it's hard to really complain. ;)
  • Ramar, July 22, 2009 7:33 AM (score 8)
    I'm definitely the kind of person who would prefer to lose some performance in exchange for elegance and perfection. The eye can tell when something is done cheaply in a render. I've made the argument [something most people don't even begin to grasp] that quite often we find computationally cheap methods of doing something in a game, and over time it seems to me that we've got a 400-horsepower muscle car that, on close inspection, is held together with duct tape and dreams. I'd much rather have a V6 sedan that's spotless and responds properly.

    Okay, well in real life, the Half Life 2 buggy would be a lot cooler to drive around than a Jetta, but you get the analogy.
  • stray_gator, July 22, 2009 8:16 AM (score 0)
    Great article!
  • zodiacfml, July 22, 2009 8:26 AM (score -1)
    I still like the simplicity of ray tracing and how close it is to physics/science. It is just how light works: it bounces off everything.

    There are a lot of diminishing returns I can see in the future. Some are: how complex can rasterization get? What are the diminishing returns for image resolution, especially on the desktop/living room?
    Ray tracing has a lot of room for optimization.

    For years to come, indeed, raster is good for what is possible in hardware. Look further ahead, more than 5 years: we'll have hardware fast enough and efficient algorithms for ray tracing. Not to mention the big CPU companies, AMD and Intel, who will push this and earn everyone's money.
  • stray_gator, July 22, 2009 8:28 AM (score 12)
    Aargh. Start typing, then sign in to find your first words posted.
    Anyway, what I liked about this article is that it looks under the hood without being tied to a new product or announcement.
    "Deep tech" articles accompanying product launches tend inevitably to follow the lines of press kits, PR slides, etc.
    Articles like this, while they take longer to research, are exactly that: researched, rather than detailing "company X implemented techniques Y and Z in their new product, which works this way, benefits performance that way, and is really cool." It gives an independent, comprehensive view of the subject and gives the reader real understanding of the field.
  • enewmen, July 22, 2009 8:31 AM (score -1)
    The ray-tracing code on the business card was way cool. I was hoping (real-time) ray tracing and photo-realistic rendering would come with DX11 and GPGPU offloading; that now seems completely unrealistic.
    I still have never read of any dedicated ray-tracing hardware, at any price. It seems the better we understand ray tracing and its limitations, the cloudier the future becomes.
  • shurcooL, July 22, 2009 9:05 AM (score -1)
    Nice article. Seems to be fairly accurate.
  • LORD_ORION, July 22, 2009 10:59 AM (score -1)
    Ray tracing will inevitably replace rasterization. It will just flat out look better to human perception, when in motion, than pure rasterization, and that is all that is required.

    Heh... this article brought to you by Nvidia.
  • annymmo, July 22, 2009 12:10 PM (score -1)
    Hopefully GPGPU (OpenCL) will make ray tracing possible.
    (Together with a huge number of processing cores per graphics card and an advanced ray-tracing algorithm.)
  • Inneandar, July 22, 2009 12:24 PM (score 1)
    Nice article.
    I wouldn't mind having just a little bit more technical depth, but I'd be glad to see more like this on Tom's.
  • Anonymous, July 22, 2009 12:39 PM (score 0)
    This article brought to you by Nvidia's ministry of propaganda.
    If Nvidia wants to survive, it must adapt and evolve. It's silly to try to persuade people of how bad ray tracing is just because you're a dinosaur and don't want to acquire new know-how. Nevertheless, even if Nvidia is not willing to do it, there are already others who are filling the gaps.
  • hannibal, July 22, 2009 1:29 PM (score -1)
    OK, so now with some hefty computer cluster you can render one frame in 6 hours, so it will take one day to render 4 frames. 24 frames/s are needed, so it takes 6 days to render one second of moving picture...
    So we will see real-time ray tracing in games in something like 20 years? (Assuming the speed of computers doubles each year.) It takes something like 15 years of doubling to calculate one frame in 0.6 seconds (for movie-company computers) and 4-5 years more to make it 24 frames per second... if Moore's law keeps on kicking. For home use you can expect speed like that 5 years later? Let's say 10. So, all in all, we'll have high-quality ray-traced games 30 years from now!
    Well, of course, Pixar has a much higher need for quality, so less is needed for gaming.
    In any case, nice article! And in real life some sort of ray tracing can be seen sooner, but photorealistic computing is still far, far away... pity I will be retired or dead before I see it...
  • JAYDEEJOHN, July 22, 2009 1:30 PM (score -1)
    Let's face it. What do we have today? Current cards using rasterization, playing much more lifelike games on much larger monitors. The closer we get to "it'll play Crysis," the more the boundaries move, and that puts us just that much closer to ray tracing.
    Great job, Fredi, and though some will deny what it's going to take to get real-time ray tracing, you painted it as well as I've seen. As for more depth: if the article were explained too finely, the overall picture might have been lost, as seen by some comments.
    I can't find the link I posted a while back in the forums about Lexus(?) having a full-time ray tracer for their design work, but it's still slow, and requires over 320 cores designed for this kind of work, not just simple x86 CPUs. So yeah, we are a ways off before anything real happens.
    Once again, excellent article.
  • downer88, July 22, 2009 2:06 PM (score -1)
    Wouldn't real-time ray tracing need many, many more CPU cores than the four barely used today, and wouldn't it get rid of the graphics card? If so, it's too big a leap for anytime soon.
  • TwoDigital, July 22, 2009 2:15 PM (score -1)
    I won't go quite so far out as hannibal... keep in mind that Pixar these days is largely rendering IMAX-quality images (~12,000 x 8,700). It may indeed take 20 to 30 years before you're playing Crysis on a desktop monitor that dense. In the meantime, you will see ray tracing come to desktop games (so long as people keep asking for it), more likely in a 1920 x 1080 version with low quality settings at first, for higher frame rates.
  • gamerk316, July 22, 2009 2:52 PM (score 0)
    More or less what I figured. Ray tracing has its benefits, but I was always a bit concerned about the data structures and how they were designed. The fact is, regardless of how much better it works, if it's too hard to code and manage without clear and visible benefits, then devs won't use it.

    Rasterization is still the better method. Besides, Doom 3 proved years ago that you could do dynamic shadows with rasterization, which skeptics thought was too costly to perform (or downright impossible). Reflections will eventually follow.
  • Parrdacc, July 22, 2009 3:37 PM (score -1)
    Awesome article. Really enjoyed reading it. However, based on current technology, well, the kind that us regular Joes can afford, I do not see this as being very economical for companies. That, and based on my limited understanding, the human eye can only see, or should I say distinguish, so much to begin with (color hues and whatnot) that it would not make a whole lot of sense to go too far with this, as at a certain point it would not make a difference to our senses anyway.

    Add on top of that that the processing power needed to reach such levels at this time is just not economically smart. In time, when average people can afford a system capable of rendering such games, it will make sense, but only to the point at which our senses can actually distinguish what's on the screen.
  • Anonymous, July 22, 2009 3:39 PM (score -1)
    Lighting effects make all the difference. If the lighting and shadows are not convincing, the eye isn't "fooled" and the scene isn't believable.
    Think of film noir and its very effective use of darkness and shadows. What you don't see contrasts with what you do.
    Remember: the brighter the light source, the DARKER the shadow.
    If you are in bright sunlight (Fallout 3), the shadows cast by objects and characters should look BLACK to you, because your iris closes down in the sunlight. It seems that something so simple is hard to pull off with rasterized rendering.
  • megamanx00, July 22, 2009 3:39 PM (score 0)
    Perhaps one day, but not anytime soon. Despite what Intel says, Larrabee won't do it either. I don't expect to see them ray tracing Crysis anytime soon.
  • thiswillkillthat, July 22, 2009 4:11 PM (score 2)
    The thing is, the standard you hold an image to depends on what you're used to. I work with physically accurate rendering programs on a nearly daily basis for the purpose of creating architectural visualizations. To my eyes, rasterization looks like crap. Ray tracing is an improvement, but still hardly ideal. People aren't used to the quality of ray tracing, let alone Metropolis light transport, so they're happy with rasterization. If ray tracing were the standard, rasterized images would be considered subpar.