Intel to Detail Arc Ray-Tracing, XeSS Tech at GDC 2022

Intel graphic for GDC participation (Image credit: Intel)

Intel will use this year's Game Developers Conference (GDC) to shed light on its upcoming Arc Alchemist family of graphics cards. As part of the GDC schedule, Intel will host multiple sessions related to its ray tracing and Xe Super Sampling (XeSS) solutions. A session entitled "A Quick Guide to Intel's Ray-Tracing Hardware" promises to walk attendees through Intel's implementation, as well as the "how" and the "why" of the company's approach.

While the session description itself doesn't add many details, it's far from empty of interesting tidbits and predictably raises more questions than it answers. Being a GDC presentation, the session will give a high-level technical overview of Intel's approach, and it'll explain why the implementation "has been designed with a path-tracing future in mind." 

Path tracing still falls under the ray tracing umbrella. It is essentially a refinement of ray tracing, introduced in 1986 by James Kajiya in his paper The Rendering Equation as a solution to the limitations of ray tracing as a rendering technique (which had itself been introduced nearly two decades earlier, in 1968). Intel's mention of path tracing suggests its architecture is being positioned to accelerate physically accurate rendering down the road.
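
For reference, the equation Kajiya introduced in that paper - the rendering equation that path tracing approximates - is commonly written as:

```latex
L_o(x, \omega_o) = L_e(x, \omega_o)
  + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
```

In words: the light leaving point x in direction \omega_o equals the light the surface emits plus the sum of all incoming light, weighted by the surface's reflectance and the angle of incidence. The integral has no closed-form solution for real scenes, which is why the Monte Carlo estimation discussed below is needed.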

Ray tracing shoots rays across the scene and allows them to multiply when they encounter an object (according to the object's physical properties, such as refraction, diffraction, and reflection). A single ray can generate tens or hundreds of secondary rays as it hits a surface, and each of those can in turn generate tens or hundreds more. This creates real performance limitations, with each additional bounce level increasing the rendering workload for a continuously diminishing return in image quality and lighting accuracy. It's estimated that the third level of ray bounces costs around 1,000 times more performance than the first, while contributing only around 1% to the final image quality. To keep all of those intersection tests tractable, the scene geometry is organized into a Bounding Volume Hierarchy (BVH), an acceleration structure that every ray - original or bounced - is tested against.
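
To make the exponential blow-up concrete, here's a minimal C++ sketch of the branching style of ray tracing described above. All the types and helpers (Scene, Ray, spawnSecondaryRays, and so on) are hypothetical stubs for illustration, not any vendor's API:

```cpp
#include <vector>

// Minimal illustrative types -- hypothetical, not any real renderer's API.
struct Ray  { float ox, oy, oz, dx, dy, dz; };
struct Hit  { bool valid = false; };
struct Scene {};

// Stub: a real renderer would walk the BVH here to find the nearest hit.
Hit intersect(const Scene&, const Ray&) { return Hit{true}; }

// Stub: reflection, refraction, and shadow rays per the surface material.
std::vector<Ray> spawnSecondaryRays(const Hit&) { return std::vector<Ray>(10); }

// Stub: local shading at the hit point.
double shade(const Hit&) { return 0.1; }

// Classic branching ray tracing: every hit may spawn many new rays.
// With ~10 rays per hit, depth 1 costs ~10 rays, depth 2 ~100, and
// depth 3 ~1,000 -- in line with the rough 1,000x third-bounce figure above.
double trace(const Scene& scene, const Ray& ray, int depth, int maxDepth) {
    if (depth >= maxDepth) return 0.0;
    Hit hit = intersect(scene, ray);
    if (!hit.valid) return 0.0;

    double radiance = shade(hit);
    for (const Ray& secondary : spawnSecondaryRays(hit))
        radiance += trace(scene, secondary, depth + 1, maxDepth);
    return radiance;
}
```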

Path tracing, however, imposes a limit on ray generation: at each bounce, only a single new ray is spawned, and it travels in a random direction. That random direction comes from Monte Carlo sampling, which speeds up the process - and cuts through the need for exponential bounces - in exchange for noise in the output, which is reduced by averaging many samples per pixel. Path tracing thus lowers the rendering cost of physically accurate lighting, allowing it to naturally simulate rendering techniques such as soft shadows, depth of field, motion blur, caustics, ambient occlusion, and indirect lighting. 
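
By contrast, here's the same sketch restructured as a path tracer, using the same hypothetical stubs: each bounce spawns exactly one ray in a random direction, so cost grows linearly with depth rather than exponentially, and the resulting noise is averaged away over many samples per pixel:

```cpp
#include <random>

// Same hypothetical types as the branching sketch above.
struct Ray { float ox, oy, oz, dx, dy, dz; };
struct Hit { bool valid = false; };
struct Scene {};

Hit    intersect(const Scene&, const Ray&) { return Hit{true}; }
double shade(const Hit&)                   { return 0.1; }

// Stub: pick one continuation direction at random (Monte Carlo sampling)
// instead of enumerating every reflection/refraction branch.
Ray randomBounce(const Hit&, std::mt19937& rng) {
    std::uniform_real_distribution<float> u(-1.0f, 1.0f);
    return Ray{0, 0, 0, u(rng), u(rng), u(rng)};
}

// Path tracing: exactly one ray per bounce, so tracing to depth N costs
// N rays, not 10^N. The randomness introduces noise instead of error.
double tracePath(const Scene& scene, Ray ray, int maxDepth, std::mt19937& rng) {
    double radiance = 0.0;
    for (int depth = 0; depth < maxDepth; ++depth) {
        Hit hit = intersect(scene, ray);
        if (!hit.valid) break;
        radiance += shade(hit);
        ray = randomBounce(hit, rng);   // exactly one continuation ray
    }
    return radiance;
}

// Many independent paths per pixel, averaged: the Monte Carlo estimate.
double pixelColor(const Scene& scene, const Ray& cameraRay,
                  int samples, int maxDepth, std::mt19937& rng) {
    double sum = 0.0;
    for (int s = 0; s < samples; ++s)
        sum += tracePath(scene, cameraRay, maxDepth, rng);
    return sum / samples;
}
```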

Path tracing isn't a novel way of approaching ray tracing - it's used in Pixar's RenderMan software, which the company leverages to create its storytelling worlds. Intel's wording implies that path tracing won't immediately be part of the company's ray tracing implementation on Arc Alchemist, but that the groundwork for it is already laid in the GPU architecture.

While we've known for a while that Intel's Arc Alchemist would bring ray tracing chops to Intel's graphics architectures, there are several ways to tackle the graphics workloads associated with physically accurate rendering. As we've seen, hardware developers can dedicate differing amounts of silicon and complexity to ray tracing-specific acceleration hardware. This is part of the reason why Nvidia registers a lower performance impact than AMD when ray tracing is turned on. It's likely that Intel will provide attendees with an overview of the hardware blocks responsible for ray tracing acceleration - and might give us an idea of where its first-generation ray tracing implementation could ultimately land, performance-wise.

Another element in the performance equation is first-mover advantage: game developers started coding for Nvidia's ray tracing implementation before AMD ever came out of the gate with its ray tracing-capable Radeon RX 6000 series. Intel has been hiring left and right to strengthen its driver development and developer relations teams, and GDC will demonstrate what it has achieved while working closely with Hitman 3 developer IO Interactive. Under the title "Bringing 4K Ray-Traced Visuals to the World of HITMAN 3", the session will dive into Intel's and IO Interactive's efforts to add game support for Intel's graphics solutions, found in its 12th Gen Core processors and Alchemist graphics cards. 

Interestingly, the session description only refers to "higher quality reflection and shadows" - IO Interactive had already announced a ray tracing update that would add those technologies to Hitman 3's rendering engine. It seems that ray tracing will be limited to those particular rendering features, with XeSS thrown in for a performance improvement at more demanding resolutions.
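
As a rough illustration of why upscaling matters at demanding resolutions, the arithmetic below shows how much shading (and ray tracing) work an upscaler can save. The 2.0x scale factor is an assumption chosen for the example, not Intel's published XeSS quality-mode ratio:

```cpp
#include <cstdio>

// Illustrative only: the scale factor is an assumption for the arithmetic.
struct Resolution { int width, height; };

Resolution internalResolution(Resolution output, float scaleFactor) {
    return { static_cast<int>(output.width  / scaleFactor),
             static_cast<int>(output.height / scaleFactor) };
}

int main() {
    Resolution target{3840, 2160};                 // 4K output
    float scale = 2.0f;                            // assumed upscale factor
    Resolution internal = internalResolution(target, scale);

    // At a 2.0x factor the GPU renders only a quarter of the output
    // pixels; the upscaler reconstructs the rest.
    std::printf("Render %dx%d, upscale to %dx%d (%.0f%% of the pixels)\n",
                internal.width, internal.height, target.width, target.height,
                100.0 * internal.width * internal.height /
                (double(target.width) * target.height));
}
```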

Intel already announced that it would leverage the open, cross-vendor Vulkan ray tracing extensions (VulkanRT) to quickly onboard graphics developers, in an attempt to accelerate past the initial adoption hurdles. However, it remains to be seen whether that will be enough to catch up to AMD and Nvidia, which certainly aren't resting on their ray tracing laurels. 
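
For context, these are the standard Khronos ray tracing extensions a Vulkan application requests at device creation. The sketch below is abbreviated - real code would also chain the matching feature structs and queue setup - but the extension names are the cross-vendor entry points Intel, AMD, and Nvidia all implement:

```cpp
#include <vulkan/vulkan.h>

// The Khronos ray tracing extensions requested at device creation.
static const char* kRayTracingExtensions[] = {
    VK_KHR_ACCELERATION_STRUCTURE_EXTENSION_NAME,   // BVH build and management
    VK_KHR_RAY_TRACING_PIPELINE_EXTENSION_NAME,     // ray-gen/hit/miss shaders
    VK_KHR_RAY_QUERY_EXTENSION_NAME,                // inline ray queries
    VK_KHR_DEFERRED_HOST_OPERATIONS_EXTENSION_NAME, // required by the above
};

VkResult createRayTracingDevice(VkPhysicalDevice physicalDevice, VkDevice* outDevice) {
    VkDeviceCreateInfo createInfo{};
    createInfo.sType = VK_STRUCTURE_TYPE_DEVICE_CREATE_INFO;
    createInfo.enabledExtensionCount =
        sizeof(kRayTracingExtensions) / sizeof(kRayTracingExtensions[0]);
    createInfo.ppEnabledExtensionNames = kRayTracingExtensions;
    // Abbreviated: real code also supplies queue create infos and chains
    // e.g. VkPhysicalDeviceRayTracingPipelineFeaturesKHR via pNext.
    return vkCreateDevice(physicalDevice, &createInfo, nullptr, outDevice);
}
```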

Francisco Pires
Freelance News Writer

Francisco Pires is a freelance news writer for Tom's Hardware with a soft side for quantum computing.
