
Nvidia Demonstrates Interactive Ray-tracing

Source: Tom's Hardware US | 40 comments

SIGGRAPH 2010, the ACM's annual computer graphics conference, opened its exhibition Tuesday, and Nvidia demonstrated its latest Quadro Fermi graphics cards at its booth.

Nvidia demonstrated the new release of its Application Acceleration Engine, AXE 2.0. The new version is optimized to run on the new Quadro Fermi cards, but will run on any G92 or later generation card. The engine allows for interactive ray-tracing, where the user can manipulate a ray-traced scene without an extensive wait for re-rendering.

Nvidia's mental images division also demonstrated its iRay application, which builds on these capabilities by adding physically correct global illumination. Global illumination is critical for correct lighting in most 3D digital content creation, and the ability to view and interact with a globally illuminated scene will let artists work much more quickly. In addition, iRay supports network-distributed rendering across GPUs.

GPU-rendered image from Bunkspeed SHOT using iray

Nvidia partners like The Foundry, RTT, and Bunkspeed also demonstrated their applications making use of CUDA-accelerated interactive ray-tracing. RTT's application both calculated airflow over the model and rendered the model interactively using CUDA, while Bunkspeed's SHOT uses mental images' iRay to provide interactive ray tracing and global illumination.

In a few GPU generations, these applications can be expected to progress from 'interactive' to 'real-time', and we should start seeing real-time ray-tracing and global illumination in games.

Comments
  • makotech222, July 29, 2010 12:29 AM (+14)
    Yeah, I was amazed at the Viper pic, sooo realistic!
  • dragonsqrrl, July 29, 2010 12:33 AM (+20)
    Rendering scenes using ray-tracing and global illumination takes freakin FOREVER, even on the i7 based workstations at school. I'm amazed at how quickly you're able to preview a scene using the hardware acceleration provided by Nvidia's latest generation of GPUs. It's literally just a matter of seconds based on the videos I've seen.

    This really does make "interactive" ray-tracing possible for the first time on a desktop... awesome. This is the sort of application the Fermi architecture really excels at. The emphasis Nvidia places on this market is probably the main reason I'll be going with a Fermi based solution for my next build, and not an Evergreen.
  • JonathanDeane, July 29, 2010 12:53 AM (+18)
    So we're probably less than 10 years off from being able to have games based on real-time ray tracing. This makes me happy for some reason.
  • Blessedman, July 29, 2010 1:08 AM (+20)
    I remember back in the day (late 80's) when the Amiga had its big heyday with ray tracing; there were pundits who said that real-time ray tracing would never be a reality. This was back when a single frame could take days to render on a farm. Oh how far we have come.
  • crashmer, July 29, 2010 1:25 AM (+2)
    Great, you found my Viper! Thanks
  • jsm6746, July 29, 2010 1:29 AM (0)
    Whatever happened to OpenRT... the last news post on the site is from '07...
  • Cons29, July 29, 2010 1:39 AM (0)
    I gave Maya a try before. Fun, but it needs time to learn. And yes, it takes a long time to render a realistic scene; add to that my not-so-good settings, being a noob and all :)
    This is good news. So are there 3D modeling applications that make use of the GPU? Maya/3ds Max?
  • Draven35, July 29, 2010 1:42 AM (+3)
    The F-18 frame is one frame while someone was interacting with it. The image refines as it rests. The Bunkspeed shot of the viper probably took a minute, maybe two. Before iray 'refines' the image, you can still get an idea for how the lighting and GI will look, well enough to make lighting decisions.
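The "refining as it rests" behavior described above is classic progressive rendering: the viewer keeps a running average of noisy per-pixel samples, so the image sharpens the longer it sits untouched. A minimal sketch of that accumulation, where `sample` is a stand-in of my own for one noisy path-traced estimate:

```python
import random

def progressive_average(sample, iterations=2000):
    """Progressive refinement: maintain a running mean of noisy samples,
    so the on-screen estimate sharpens the longer the scene 'rests'."""
    estimate = 0.0
    for n in range(1, iterations + 1):
        estimate += (sample() - estimate) / n  # incremental mean update
    return estimate

# A noisy 'pixel' whose true brightness is 0.5; the estimate converges to it.
random.seed(0)
print(progressive_average(random.random))
```

Interacting with the scene restarts the accumulation, which is why the first frame after moving the camera looks rough and then cleans up.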
  • matt87_50, July 29, 2010 2:10 AM (+6)
    Quoting dragonsqrrl: "Rendering scenes using ray-tracing and global illumination takes freakin FOREVER [...] I'll be going with a Fermi based solution for my next build, and not an Evergreen."

    i7 970 extreme (the old quad core one) = 48 Gflops, these video cards = 1000 to 2000 Gflops.

    pwned.



    any demos of real-time raytracing? (20-30 fps, reasonable res, etc.)

    obviously images like the above which look REAL aren't gonna be real time yet, but some form of awesome raytracing might be.
  • matt87_50, July 29, 2010 2:16 AM (+5)
    hey, that viper pic: is it just the car that is rendered, then composited with a picture? or is the whole terrain rendered too? I could scarcely believe that...
  • mr_tuel, July 29, 2010 3:17 AM (0)
    I think I now know what my next GPU upgrade might have...
  • lukeeu, July 29, 2010 3:28 AM (+6)
    Quoting matt87_50: "i7 970 extreme (the old quad core one) = 48 Gflops, these video cards = 1000 to 2000 Gflops. pwned. [...]"

    The problem with ray tracing isn't computational power, it's memory access. Every reflection takes only ~10 arithmetic operations to compute, and then you need to find the next intersection, and this is where the problem starts. For a 1M-triangle scene you'll have to do at least 30 reads from your data structure, or even thousands in worst-case scenarios. Every read from RAM on a graphics card takes hundreds of clock cycles; the reads are also random rather than sequential, so they won't hit the very small caches, and you get a 16x RAM slowdown for misalignment. On a CPU you get megabytes of cache and no memory access restrictions.

    These algorithms run in O(n) and O(n lg n) time, so you need to feed the GPU about as much data as it can compute: for a 1 Tflop chip you'd need memory that can do 1 Tflop * 4 bytes = 4 TB/s at random reads. That can only be done with on-chip cache, and for a 1M-triangle scene you'd need ~1,000,000 triangles * 3 vertexes * 12 bytes/vertex, plus 1 to 8 million nodes of data structure * (8 children + 1 parent) * 4 bytes = way over 60 MB of cache. That would probably have to be accessed by hundreds of processing units simultaneously, so it would have to be kept in dozens of copies.
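The arithmetic in the comment above is easy to verify. A back-of-envelope sketch, taking the low end of the 1 to 8 million node estimate:

```python
# Bandwidth needed if a 1 Tflop GPU fetched 4 bytes per operation.
flops = 1e12
bandwidth = flops * 4                  # bytes per second
print(bandwidth / 1e12)                # 4.0 TB/s, as stated above

# Scene footprint for a 1M-triangle scene plus its spatial data structure.
triangles = 1_000_000
vertex_bytes = triangles * 3 * 12      # 3 vertices of 12 bytes each
nodes = 1_000_000                      # low end of the 1-8M node estimate
node_bytes = nodes * (8 + 1) * 4       # 8 children + 1 parent, 4 bytes each
total_mb = (vertex_bytes + node_bytes) / 2**20
print(total_mb > 60)                   # True: "way over 60 MB" checks out
```

Even at the low end the working set is roughly 70 MB, far beyond the on-chip caches of GPUs of this era.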
  • Pyroflea, July 29, 2010 3:45 AM (+1)
    Very cool, nice to see some new breakthroughs; they seem to be few and far between lately.
  • matt87_50, July 29, 2010 3:48 AM (0)
    Quoting lukeeu: "The problem with ray tracing isn't computational power. It's memory access. [...]"

    very true

    that's why triangles are lame. they are for lame rasterizers. all shapes in raytracing should be formed from much more complex geometric objects :D

    for instance, terrain: it should just be the equation for the plasma fractal you would otherwise use to generate a height map. and bam! like tens of bytes for your whole terrain! pwned!

    I mean, the ray collision equation might be a *bit* more complex... but Tflops!!

    like how they are trying to overcome the same memory limitations in realtime rendering with tessellation.
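The idea in the comment above can be made concrete: an implicit surface is intersected analytically from a handful of parameters, with no triangle mesh to fetch from memory. A sketch using the simplest such surface, a sphere (the function name and scene values here are my own illustration):

```python
import math

def ray_sphere(origin, direction, center, radius):
    """Analytic ray/implicit-sphere intersection: the entire 'scene'
    is four floats, versus megabytes of mesh data for a rasterizer."""
    ox, oy, oz = (origin[i] - center[i] for i in range(3))
    dx, dy, dz = direction
    # Solve |o + t*d - c|^2 = r^2, a quadratic in t.
    a = dx * dx + dy * dy + dz * dz
    b = 2 * (ox * dx + oy * dy + oz * dz)
    c = ox * ox + oy * oy + oz * oz - radius * radius
    disc = b * b - 4 * a * c
    if disc < 0:
        return None                          # ray misses the sphere
    t = (-b - math.sqrt(disc)) / (2 * a)     # nearest hit distance
    return t if t > 0 else None

# A ray along +z hits a unit sphere centered 5 units away at t = 4.
print(ray_sphere((0, 0, 0), (0, 0, 1), (0, 0, 5), 1.0))
```

The trade-off is exactly as noted: the intersection math gets heavier per shape, but the memory traffic that dominates GPU ray tracing all but disappears.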
  • lukeeu, July 29, 2010 3:49 AM (-1)
    Quoting weirdguy99: "That image of the Viper is bloody amazing."
    Viper looks HORRIBLE for ray tracing!
    -No car shadow reflected in body
    -No road and shadow reflections on the rims
    -No reflection of the mirror in the windows
    -No reflection of the scenery in the glass
    -No double or triple reflections
    The only reflections here that couldn't be done in a super easy traditional way:
    -Mirror in the bodywork (but could be done with some effort)
    -Scenery reflection in the lamp shadowed by the lamp (could be done)
    -Shadow on the brakes (could use a dedicated shadow here)

    The only thing that looks great here is the scenery, but that's because of the detail, not ray tracing. It could be rendered using Oblivion-level shaders.
  • gpace, July 29, 2010 3:52 AM (+3)
    I can't wait till technology allows this to be real time. I'd also like a side of realistically destructible environments and an engaging story.

    Good job Nvidia and I hope ATI is working on something like this too. Nothing like some good competition to push technology forward.
  • fonzy, July 29, 2010 4:10 AM (0)
    Quoting gpace: "I can't wait till technology allows this to be real time. [...]"

    I hope so too. ATI's next set of cards is due out later this year; hopefully they'll have something that can beat Nvidia.
  • miloo, July 29, 2010 4:12 AM (0)
    that's pretty stunning graphics mate~
    good work
  • chickenhoagie, July 29, 2010 4:22 AM (+3)
    Whoa... I thought that Viper picture was seriously a real picture!
  • jednx01, July 29, 2010 4:40 AM (+4)
    Wow... I have to admit that this is one of the most impressive jobs of computer generated images that I've ever seen. I seriously can't tell that the picture of the viper isn't a real photo... :o 