Nvidia has opened the doors on its new RTXGI SDK, allowing ray-traced global illumination on any GPU platform and letting developers tweak it to whatever performance target they require.
Global illumination is how games light their scenes realistically: it accounts for light bouncing between surfaces, not just direct light hitting them. Typically developers "pre-bake" their lighting, running expensive offline lighting calculations and then "baking" the results into textures. Ray tracing, by contrast, allows fully accurate lighting in real time. The only problem is that it's usually reserved for cards with ray-tracing acceleration and runs very poorly on standard GPUs.
RTXGI's architecture allows it to run asynchronously from the game engine itself, which means it can update its real-time lighting calculations at a different rate than the game's frame rate.
This gives game developers plenty of room to tune performance simply by changing how often global illumination gets updated. (Say you want to run your game at 60 FPS, but only update your ray-traced GI at 30 FPS.)
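The idea of decoupling the GI update rate from the render rate can be sketched in a few lines. This is a hypothetical illustration, not RTXGI's actual API; the function name and parameters are made up for clarity.

```python
def gi_update_frames(num_frames, game_fps=60, gi_fps=30):
    """Return which frames would kick off a GI update when global
    illumination runs at gi_fps while the game renders at game_fps.

    Hypothetical sketch: assumes game_fps is an integer multiple of gi_fps.
    """
    interval = game_fps // gi_fps  # update GI once every `interval` frames
    updates = []
    for frame in range(num_frames):
        if frame % interval == 0:
            updates.append(frame)  # asynchronous GI work starts this frame
        # ...every frame still renders, reusing the latest GI results...
    return updates

# At 60 FPS with GI updated at 30 FPS, GI refreshes on every other frame:
print(gi_update_frames(6))  # -> [0, 2, 4]
```

Dropping `gi_fps` further (say, to 15) quarters the GI cost per rendered frame, at the price of slightly staler lighting between updates.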
Nvidia showed an impressive RTXGI performance chart, with the RTX 3080 nearly doubling the RTX 2080 Ti's sample rate in RTXGI workloads. They even showed the GTX 1080 Ti running RTXGI, demonstrating that the SDK can indeed run on any GPU platform.
Because of this, you could run RTXGI on anything from a Radeon GPU to an Intel IGP, or even an Xbox One or PS4, should a game developer ever decide to ship this tech in a console game. But don't expect it to look as good as it does on a Turing or Ampere GPU.