Intel's Xe Graphics Architecture to Support Hardware-Accelerated Ray Tracing

(Image credit: Intel)

Intel published a news byte today outlining its announcements at the FMX graphics trade show taking place in Germany this week. It includes the tasty tidbit that the company's forthcoming data center-specific Xe graphics architecture will support hardware-based ray tracing acceleration.

From the blog:

I’m pleased to share today that the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.

As a quick refresher, Xe is Intel's forthcoming range of low- to high-power graphics solutions. These graphics processors will scale from integrated graphics on CPUs up to discrete mid-range, enthusiast and data center/AI cards. Intel said it will split these graphics solutions into two distinct architectures, with both integrated and discrete graphics cards for the consumer market (client) and discrete cards for the data center. The cards will come wielding the 10nm process and should arrive in 2020.

Support for ray tracing would bring Intel's graphics cards, at least for the data center, up to par with Nvidia's Turing architecture, which largely paved the path to hardware-based ray tracing in the consumer market. Given that this type of functionality is typically embedded at a foundational level in the microarchitecture, Intel's support for ray tracing with data center graphics cards strongly implies the desktop variants could also support the same functionality, though it is noteworthy that the company is splitting its offerings into two distinct architectures. 

Nvidia's Turing offerings also come both with and without ray tracing support, so it is possible that Intel could adopt a similar tiered model that leverages ray tracing as a segmented feature to encourage customers to buy higher-priced models.

Details on Intel's Xe graphics architecture should continue to trickle out over the weeks and months ahead. In the meantime, head over to our Intel Xe Graphics Card feature for the latest details. 

Paul Alcorn
Managing Editor: News and Emerging Tech

Paul Alcorn is the Managing Editor: News and Emerging Tech for Tom's Hardware US. He also writes news and reviews on CPUs, storage, and enterprise hardware.

  • OctaBotGamer
    Yup, Intel is finally getting a dedicated graphics card upgrade. It will be a brand new thing, but I expect enabling ray tracing will drop the fps to below 10 :tearsofjoy: I also think that will only last for a very short period and will probably be solved some time after the graphics card is launched, rest assured.
  • digitalgriffin
    This is not surprising. Intel demonstrated real-time ray tracing back in 2007. As their team includes one of the original Larrabee architects, it would make sense for them to include hardware support.

    View: https://www.youtube.com/watch?v=blfxI1cVOzU

    Intel's original approach was small, non-ASIC CISC cores (Larrabee/KC/KL); this sacrifices efficiency for a more flexible approach that can be updated.

    But their implementation is no different from NVIDIA's or AMD's: define a bounding box for the ray-trace hit test, extend the ray forward from the viewer into the scene, then calculate caustics/IOR, ambient/diffuse, single reflections, shadows, etc.
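
    As a rough illustration of the bounding-box hit test described above (the function name and structure here are illustrative, not any vendor's actual implementation), here is a minimal ray vs. axis-aligned bounding box slab test in Python:

    # Minimal ray vs. AABB intersection using the slab method. Purely illustrative;
    # real ray-tracing hardware traverses a whole hierarchy (BVH) of such boxes.
    def ray_aabb_hit(origin, direction, box_min, box_max):
        """Return (hit, t_near) for a ray against an axis-aligned bounding box."""
        t_near, t_far = float("-inf"), float("inf")
        for o, d, lo, hi in zip(origin, direction, box_min, box_max):
            if abs(d) < 1e-12:                 # ray parallel to this slab
                if o < lo or o > hi:
                    return False, None
                continue
            t0, t1 = (lo - o) / d, (hi - o) / d
            if t0 > t1:
                t0, t1 = t1, t0
            t_near, t_far = max(t_near, t0), min(t_far, t1)
            if t_near > t_far or t_far < 0:    # slabs don't overlap, or box is behind ray
                return False, None
        return True, max(t_near, 0.0)

    # Example: a ray from the viewer straight at a unit box centered on the origin.
    print(ray_aabb_hit((0, 0, -5), (0, 0, 1), (-0.5, -0.5, -0.5), (0.5, 0.5, 0.5)))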

    I don't want to say these are elementary calculations. They aren't. The algorithms have been around for decades now; I used to dive into the POV-Ray source to see how it worked. But the mechanisms for speeding it up with ASIC circuits have remained out of the spotlight because of the amount of circuitry that even a basic ray hit test and its calculations demand.

    We've come to a point now where adding those circuits might be more beneficial to image quality than, say, creating more raw FLOPS throughput (for several technical reasons).

    In the future you will see architectures perform some interesting balancing acts. A lot of this depends on which takes greater hold. Ray Tracing OR VR. You can't do both.

    If I were a betting man and ray tracing does take off, I would say we'd see a new sub-standard of chiplet architecture where chiplets are assigned to rendering viewports on draw calls: one dedicated to the left lens, one to the right for VR. I've been working on algorithms to solve the efficiency issues here. At what point, when comparing Z-buffer depths, can you render a background with one draw call and duplicate it across both viewports? Something like a 0.5-pixel calculated disparity change in the main FOV, with a 1.5-pixel max delta at the edges of the common FOV? A rough sketch of that kind of check follows.
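
    To put rough numbers on that idea (the IPD and focal length below are assumed values, and this is only a back-of-the-envelope sketch, not a renderer): a background layer could in principle be shared across both eye viewports once its screen-space stereo disparity drops below a sub-pixel threshold.

    # Back-of-the-envelope stereo-disparity check. Assumed values: 63 mm IPD and a
    # pinhole focal length of ~1400 px; the 0.5 px threshold comes from the post above.
    def disparity_pixels(depth_m, ipd_m=0.063, focal_px=1400.0):
        """Horizontal disparity (pixels) between the two eyes for a point at depth_m."""
        return ipd_m * focal_px / depth_m

    def can_share_background(depth_m, threshold_px=0.5):
        """True when one background draw could plausibly serve both viewports."""
        return disparity_pixels(depth_m) < threshold_px

    for depth in (2.0, 20.0, 200.0):
        print(f"{depth:6.1f} m -> {disparity_pixels(depth):5.2f} px, "
              f"share: {can_share_background(depth)}")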
  • bit_user
    the Intel Xe architecture roadmap for data center optimized rendering includes ray tracing hardware acceleration support for the Intel Rendering Framework family of APIs and libraries.
    This could actually be taken to mean it's not coming in their first gen products. The fact that they cite their "roadmap", rather than being more specific, could be very deliberate.

    Also, the term "hardware acceleration" can be taken to mean anything that runs on the GPU, itself.

    So, we could actually see first gen products that implement it in software, like what Nvidia recently enabled on some of their GTX 1xxx-series GPUs, followed by true hardware support in the second generation. This progression also ties in with talking about a "roadmap".
  • bit_user
    PaulAlcorn said:
    The cards will come wielding the 10nm process and should arrive in 2020
    Are we sure about that?

    I know GPUs run at lower clocks than most CPUs, but their dies are also bigger and tend to burn more power. So, given that their roadmap shows no desktop CPUs @ 10 nm in 2020, and confirms there will be Xe chiplets at 14 nm, I'm a bit skeptical we'll see Xe at 10 nm, in 2020.
  • bit_user
    digitalgriffin said:
    In the future you will see architectures perform some interesting balancing acts. A lot of this depends on which takes greater hold. Ray Tracing OR VR. You can't do both.
    Oh, sure you can. You just have to dial back the complexity, accordingly. You won't get fancy effects like global illumination, but the flip side is that ray tracing is a much more natural fit for foveated rendering.
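
    A toy illustration of that fit (the ray counts and angular thresholds below are made up for the example): with ray tracing, the number of primary rays per pixel can simply be scaled down with angular distance from the gaze point, something a fixed raster pipeline can't do as naturally.

    # Toy foveated ray budget: full sampling near the gaze point, tapering off toward
    # the periphery. The 5-degree fovea and 60-degree falloff are illustrative numbers.
    def rays_per_pixel(eccentricity_deg, full_rate=4, min_rate=1):
        if eccentricity_deg <= 5.0:
            return full_rate
        t = min((eccentricity_deg - 5.0) / 55.0, 1.0)
        return max(min_rate, round(full_rate * (1.0 - t)))

    for angle in (0, 10, 30, 60):
        print(angle, "deg ->", rays_per_pixel(angle), "rays/pixel")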

    I actually think their move towards ray-tracing might've even been a bet on VR going more mainstream, by now. Consider that the specs for Turing were probably baked back in 2016, when the VR hype cycle was peaking.
  • joeblowsmynose
    I know everyone is thinking about gaming but I'm quite confident Intel is not at this time. "Raytracing" is that cool buzzword that if you don't toss around in some presentations, the crowds will think you are a loser. I was into programming some basic raytracing back in my college days - on a 486 :) -- funny how everyone thinks this is new technology. ...

    I picked apart the game engine initialization and other settings on one of the early S.T.A.L.K.E.R. games and found that the game engine supported raytracing ... this was over ten years ago - the engine developers actually added that feature in (I guess the game engine was named "X-Ray" for a reason). It never worked very well because the hardware couldn't cast enough rays and it messed up the shadows, but I was able to enable it and play with it a bit and get some basic results (at 10fps) ...

    Raytracing has a lot more uses than just making reflections in games look prettier; it can be used for all sorts of simulations and complex mathematical calculations. If Intel is adding this at the hardware level, then it is these capabilities for the data centers that it is after. Why market a video card for $500 when you can sell one for $5000 to a data center? AMD already tried to convert their compute tech into a gaming device ... it's pretty meh and bloated as a gaming card (Vega), but I am sure AMD is smiling at their massive margins on the MIxx cards.

    Every CPU and GPU out there can calculate raytracing - I have been using raytracing for well over a decade to render photorealistic images. The only question is, can it be done fast enough to be useful? ... That's where having dedicated hardware support, as opposed to software, can come in handy ... but by sometime in 2020, Intel may be showing up late to the party in regard to offering raytracing features.
  • bit_user
    joeblowsmynose said:
    "Raytracing" is that cool buzzword that if you don't toss around in some presentations, the crowds will think you are a loser. I was into programming some basic raytracing back in my college days - on a 486 :) -- funny how everyone thinks this is new technology. ...
    Well, in many of their presentations on RTX, Nvidia has been giving a brief history lesson on ray tracing. Here's the official launch video:

    View: https://www.youtube.com/watch?v=Mrixi27G9yM

    In fact, they even made a real life replica of one of the first ray traced scenes - the one with the translucent and reflective spheres over the red-and-yellow checkerboard, from 1979. Just search for "turner whitted nvidia replica" and you can find photos of people posing in it.

    BTW, I played with POV-ray, back in the day. Luckily, I had a math coprocessor for my 386. I modeled a few scenes with graph paper and a text editor.
  • renz496
    bit_user said:
    This could actually be taken to mean it's not coming in their first gen products. The fact that they cite their "roadmap", rather than being more specific, could be very deliberate.

    Also, the term "hardware acceleration" can be taken to mean anything that runs on the GPU, itself.

    So, we could actually see first gen products that implement it in software, like what Nvidia recently enabled on some of their GTX 1xxx-series GPUs, followed by true hardware support in the second generation. This progression also ties in with talking about a "roadmap".

    I think "hardware acceleration" will mean specific hardware to calculate RT. I don't think Intel wants to waste time dealing with a software-based solution.

    bit_user said:
    Are we sure about that?

    I know GPUs run at lower clocks than most CPUs, but their dies are also bigger and tend to burn more power. So, given that their roadmap shows no desktop CPUs @ 10 nm in 2020, and confirms there will be Xe chiplets at 14 nm, I'm a bit skeptical we'll see Xe at 10 nm, in 2020.

    I've heard rumors that Intel is looking at other foundries to manufacture their GPUs.

    joeblowsmynose said:

    AMD already tried to convert their compute tech into a gaming device ... it's pretty meh and bloated as a gaming card (Vega), but I am sure AMD is smiling at their massive margins on the MIxx cards.

    Maybe the margin is a bit bigger on those pro cards, but for AMD the most money still likely comes from the gaming GPU market due to volume. We saw how sad it is right now that they have less than 20% market share in gaming discrete GPUs, but the situation is probably a whole lot more cruel in the professional market. Maybe they should exit the pro GPU market and focus entirely on gaming performance in their future designs.
  • bit_user
    renz496 said:
    I think "hardware acceleration" will mean specific hardware to calculate RT. I don't think Intel wants to waste time dealing with a software-based solution.
    Maybe, but how do you explain them talking about their "roadmap", rather than simply coming out and saying their datacenter Xe GPUs will have it?

    renz496 said:
    Maybe the margin is a bit bigger on those pro cards, but for AMD the most money still likely comes from the gaming GPU market due to volume. We saw how sad it is right now that they have less than 20% market share in gaming discrete GPUs, but the situation is probably a whole lot more cruel in the professional market. Maybe they should exit the pro GPU market and focus entirely on gaming performance in their future designs.
    AMD needs the cloud market, because it's growing while PCs are still on an ever-downward trend. More importantly, AMD is the current supplier for Google's Stadia, and such game streaming markets stand to threaten even their console market.

    So, AMD cannot afford to walk away from cloud, whether it's in GPU-compute, or more conventional graphics workloads. I'll bet they also love to sell datacenter customers on the combination of Epyc + Vega.

    What AMD needs to stop doing is trying to play catch-up with deep learning. Each generation, they implement what Nvidia did last, while Nvidia is already moving on from that. AMD needs to leapfrog each thing Nvidia does to have any hope of catching them in deep learning. Unfortunately, Nvidia has been doing so much work on the software part of its solution that the situation is starting to look bleak for AMD.

    But there's also a third area of the server GPU market, which I'll call the "conventional GPU compute" market. These folks just care about memory bandwidth and fp64 throughput. And on that front, AMD really did best Nvidia's current leading solution (V100), and probably at a much lower price. The only question is how long until Nvidia replaces the V100, which I'm guessing will happen this year.
  • renz496
    For stuff like Stadia, AMD does not need the compute portion of the GPU; they just need the GPU to be fast at game rendering. For AMD, maybe it is better to say goodbye to GPGPU altogether. AMD's current top compute card is only slightly faster than Nvidia's Tesla V100, and even if it is cheaper, almost no one really cares about it. Epyc + Vega seems nice, but in reality those that opt for Epyc instead of Xeon are still going to use Nvidia Tesla or Intel Phi.

    Remember when the AMD FirePro S9150 was the fastest compute accelerator (be it in FP32 or FP64) on the market? I still remember when they said that in 6 to 12 months the top 20 of the Top500 list would be dominated by AMD FirePro. That never happened. Instead, most clients chose to wait for Intel's and Nvidia's next-gen products and kept using Nvidia's aging Kepler accelerators in their machines. Hence, when Nvidia came out with GP100 in 2016, AMD did not see a rush to replace the S9150. Yeah, they still got some contracts here and there, but really there was nothing major for them. If I'm not mistaken, AMD's last major win was the SANAM supercomputer that employed Tahiti-based accelerators; for the S9150, I don't think even one machine ever entered the Top500 list. And we know that, spec-wise, AMD's Hawaii simply destroyed Nvidia's fastest accelerator at the time (GK210-based), be it in raw performance or efficiency.