AMD research suggests plans to catch up to Nvidia using neural supersampling and denoising for real-time path tracing

Nvidia currently dominates the GPU market, thanks to a combination of performance, features, and brand recognition. Its advanced AI (Artificial Intelligence) and machine learning-based technologies have proven particularly potent, and AMD hasn’t really caught up, especially in the consumer market. But the company hopes to change that very soon.

According to a post on GPUOpen, AMD research is currently focused on achieving real-time path tracing on RDNA GPUs via neural network solutions. Nvidia uses its own DLSS for image upscaling with AI, but DLSS has come to mean a lot more than “Deep Learning Super Sampling” — there’s DLSS 2 upscaling, DLSS 3 frame generation, and DLSS 3.5 ray reconstruction. AMD’s latest research centers on neural denoising to clear up noisy images caused by using a limited number of ray samples in real-time path tracing — basically ray reconstruction, as far as we can tell.

Path tracing normally uses thousands or even tens of thousands of ray calculations per pixel. It’s the gold standard and what movies typically use, often requiring hours per rendered frame. In effect, a scene gets rendered using calculated ray bounces where even a slight shift in the path taken can result in a different pixel color. Do that a lot and accumulate all of the resulting samples for each pixel, and eventually the quality of the result improves to an acceptable level.
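For a rough feel of what that accumulation looks like, here’s a minimal Python sketch. It is not anyone’s production renderer: the trace_ray function is a hypothetical stand-in that just returns a noisy color sample, and the numbers are invented for illustration. The point is simply that averaging more samples per pixel shrinks the noise.

```python
import random

def trace_ray(x, y):
    """Hypothetical stand-in for a full path-tracing kernel: returns one
    noisy radiance sample (R, G, B) for the pixel at (x, y)."""
    base = (0.4, 0.5, 0.6)                     # "true" pixel color for this demo
    noise = random.gauss(0.0, 0.2)             # variance from random ray paths
    return tuple(max(0.0, c + noise) for c in base)

def render_pixel(x, y, samples_per_pixel):
    """Monte Carlo accumulation: average many samples so the noise shrinks
    (roughly with the square root of the sample count)."""
    total = [0.0, 0.0, 0.0]
    for _ in range(samples_per_pixel):
        sample = trace_ray(x, y)
        total = [t + s for t, s in zip(total, sample)]
    return [t / samples_per_pixel for t in total]

# Film-style quality versus a real-time budget:
print(render_pixel(0, 0, samples_per_pixel=10_000))  # smooth, but slow
print(render_pixel(0, 0, samples_per_pixel=2))       # fast, visibly noisy
```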

To do path tracing in real-time, the number of samples per pixel needs to be drastically reduced. This results in more noise, as light rays frequently fail to hit certain pixels, leading to incomplete illumination that requires denoising. (Movies use custom denoising algorithms as well, incidentally, as even tens of thousands of samples don’t guarantee a perfect output.)
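To make the trade-off concrete, here’s a toy sketch of the usual real-time pipeline: render with only a couple of samples per pixel, then run a denoising filter over the noisy result. A plain neighborhood average stands in for what, in practice, would be a far more sophisticated temporal or neural filter; nothing here reflects AMD’s actual algorithm.

```python
import random

def denoise(image, radius=1):
    """Toy spatial denoiser: average each pixel with its neighbors.
    Real-time denoisers (and neural approaches like AMD's) are far more
    sophisticated, using guide buffers, temporal history, or networks."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc, count = [0.0, 0.0, 0.0], 0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny, nx = y + dy, x + dx
                    if 0 <= ny < h and 0 <= nx < w:
                        acc = [a + c for a, c in zip(acc, image[ny][nx])]
                        count += 1
            out[y][x] = [a / count for a in acc]
    return out

# A fake low-sample-count render: the correct average color plus heavy noise.
noisy = [[[max(0.0, 0.5 + random.gauss(0.0, 0.3)) for _ in range(3)]
          for _ in range(64)] for _ in range(64)]
clean = denoise(noisy)
```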

AMD aims to address this with a neural network that performs denoising while reconstructing scene details. Nvidia’s ray reconstruction has been praised for preserving details that traditional rendering takes much longer to achieve, and AMD hopes for similar gains by reconstructing path-traced details from just a few samples per pixel.

Workflow of our Neural Supersampling and Denoising (Image credit: GPUOpen)

The innovation here is that AMD combines upscaling and denoising within a single neural network. In AMD’s own words, its approach “generates high-quality denoised and supersampled images at a higher display resolution than render resolution for real-time path tracing.” This unifies the process, allowing AMD’s method to replace the multiple denoisers a rendering engine would otherwise use while also handling upscaling in a single pass.
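AMD hasn’t published implementation details, so the following is purely a hypothetical sketch of what “denoise and upscale in one network pass” could look like in shape terms, written with PyTorch. The layer sizes, the 2x scale factor, and the choice of guide buffers (albedo, normals, depth) are all assumptions for illustration, not AMD’s published architecture.

```python
import torch
import torch.nn as nn

class DenoiseUpscaleNet(nn.Module):
    """Hypothetical sketch (not AMD's actual network): one model consumes a
    noisy low-resolution path-traced frame plus guide buffers and emits a
    denoised frame at 2x the render resolution in a single pass."""
    def __init__(self, guide_channels=7, scale=2):
        super().__init__()
        in_ch = 3 + guide_channels            # noisy RGB + albedo/normals/depth
        self.features = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
        )
        # Predict scale*scale sub-pixels per output channel, then rearrange
        # them into a higher-resolution image (denoise + upscale together).
        self.to_rgb = nn.Conv2d(32, 3 * scale * scale, 3, padding=1)
        self.upscale = nn.PixelShuffle(scale)

    def forward(self, noisy_rgb, guides):
        x = torch.cat([noisy_rgb, guides], dim=1)
        return self.upscale(self.to_rgb(self.features(x)))

# A 1080p render in, a 2x-per-axis (4K-class) frame out, in one forward pass.
net = DenoiseUpscaleNet()
noisy = torch.rand(1, 3, 1080, 1920)          # low-sample path-traced frame
guides = torch.rand(1, 7, 1080, 1920)         # albedo, normals, depth, etc.
out = net(noisy, guides)                      # shape: (1, 3, 2160, 3840)
```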

This research could eventually feed into a new version of AMD’s FSR (FidelityFX Super Resolution) that matches Nvidia’s performance and image quality standards. Nvidia’s DLSS technologies require dedicated AI hardware on RTX GPUs, along with an Optical Flow Accelerator for frame generation on RTX 40-series (and later) GPUs.

AMD’s current GPUs generally lack dedicated AI acceleration hardware; RDNA 3 does include AI accelerators, but they share execution resources with the GPU shaders, with some instructions optimized for AI workloads. What’s not clear is whether AMD can run a neural network for denoising and upscaling on existing GPUs, or if it will require new processing clusters (i.e. tensor units). Achieving this on existing hardware would potentially allow a future FSR iteration to work across all GPUs, but it might also limit quality and other aspects of the algorithm.

We’ll need to wait and see what AMD ultimately delivers. A refined approach to neural path tracing and upscaling could bring accessible, high-fidelity graphics to a broader range of hardware, but given the demands of path tracing in games (see: Alan Wake 2, Black Myth: Wukong, and Cyberpunk 2077 RT Overdrive), we suspect AMD will need much faster hardware than existing products to reach higher levels of image fidelity.