NVIDIA Develops New DLSS Control Technique

On Friday, NVIDIA published a blog post pulling back the curtain on the work that went into DLSS for Control.

During our research, we found that certain temporal artifacts can be used to infer details in an image. Imagine, an artifact we’d normally classify as a “bug,” actually being used to fill in lost image details. With this insight, we started working on a new AI research model that used these artifacts to recreate details that would otherwise be lost from the final frame.
This AI research model has made tremendous progress and produces very high image quality. However, we have work to do to optimize the model’s performance before bringing it to a shipping game.
Leveraging this AI research, we developed a new image processing algorithm that approximated our AI research model and fit within our performance budget. This image processing approach to DLSS is integrated into Control, and it delivers up to 75% faster frame rates.
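Much of the frame-rate gain comes from shading fewer pixels at a lower internal resolution and then upscaling. A rough pixel-count calculation makes the scale of the savings clear (the resolutions below are illustrative examples, not figures from NVIDIA's post):

```python
# Rough estimate of shading work saved by rendering at a lower
# internal resolution and upscaling the result with DLSS.
# The resolutions here are illustrative, not NVIDIA's numbers.

def pixel_count(width, height):
    return width * height

native = pixel_count(2560, 1440)     # native 1440p output
internal = pixel_count(1920, 1080)   # hypothetical 1080p internal render

savings = 1 - internal / native
print(f"Shaded pixels reduced by {savings:.0%}")
```

Fewer shaded pixels per frame translates almost directly into higher frame rates, which is where headline numbers like "up to 75% faster" come from.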

However, NVIDIA admits that the image processing algorithm falls short when handling certain kinds of movement, as seen in a comparison of native 1080p versus 1080p DLSS in Control. As you can see below, the flames are not as well defined with DLSS as they are at native resolution:

Going forward, NVIDIA's goal for DLSS is to optimize the AI research model so that it can run at playable frame rates.

Deep learning-based super resolution learns from tens of thousands of beautifully rendered sequences of images, rendered offline in a supercomputer at very low frame rates and 64 samples per pixel. Deep neural networks are trained to recognize what beautiful images look like, and then to reconstruct them from lower-resolution, lower-sample-count images. The neural networks integrate incomplete information from lower resolution frames to create a smooth, sharp video, without ringing or temporal artifacts like twinkling and ghosting.
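The idea of integrating incomplete information across frames can be shown with a toy example. This is a hand-rolled sketch of temporal accumulation on a 1-D signal, not NVIDIA's actual algorithm: each low-resolution frame samples the scene at a different subpixel offset, so several frames together recover detail that no single frame contains:

```python
import numpy as np

# Toy sketch of temporal super resolution: each low-res frame samples
# the high-res signal at a different subpixel offset (camera jitter),
# so accumulating frames fills in the missing high-res pixels.
# This illustrates the principle only; it is not NVIDIA's algorithm.

rng = np.random.default_rng(0)
high_res = rng.random(16)          # 1-D "ground truth" signal, 16 pixels
scale = 4                          # each low-res frame has 16/4 = 4 pixels

accum = np.zeros_like(high_res)
hits = np.zeros_like(high_res)

# Four frames, each jittered by a different subpixel offset.
for jitter in range(scale):
    sample_idx = np.arange(jitter, len(high_res), scale)
    low_res_frame = high_res[sample_idx]   # what one jittered frame "sees"
    accum[sample_idx] += low_res_frame     # scatter into the high-res grid
    hits[sample_idx] += 1

reconstructed = accum / hits
print(np.allclose(reconstructed, high_res))
```

In a real renderer the scene moves between frames, so the network must also reject stale samples; that is exactly where artifacts like ghosting come from when the integration goes wrong.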

Let’s look at an example of our image processing algorithm vs. our AI research model. The video below shows a cropped Unreal Engine 4 scene of a forest fire with moving flames and embers. Notice how the image processing algorithm blurs the movement of flickering flames and discards most flying embers. In contrast, you’ll notice that our AI research model captures the fine details of these moving objects.
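Why does a pure image processing upscaler discard small moving details like embers? A hedged 1-D illustration (a hand-rolled example, not the algorithm shipped in Control): a bright spot one pixel wide, once box-downsampled and linearly interpolated back up, loses most of its peak intensity:

```python
import numpy as np

# A 1-D "frame" containing a single bright ember one pixel wide.
frame = np.zeros(16)
frame[7] = 1.0

# Render at quarter resolution: box-average blocks of 4 pixels.
low_res = frame.reshape(4, 4).mean(axis=1)

# Upscale back with linear interpolation, a simple image processing step.
x_low = np.arange(4) * 4 + 1.5          # centers of the low-res pixels
x_high = np.arange(16)
upscaled = np.interp(x_high, x_low, low_res)

# The ember's peak brightness is mostly gone after the round trip:
# it was averaged into its block and interpolation cannot restore it.
print(frame.max(), upscaled.max())
```

A learned model that accumulates jittered samples over time has extra information to work with, which is why the AI research model can keep the embers that the filter throws away.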
With further optimization, we believe AI will clean up the remaining artifacts in the image processing algorithm while keeping FPS high.
The new DLSS techniques available in Control are our best yet. We’re also continuing to invest heavily in AI super resolution to deliver the next level of image quality.
Our next step is optimizing our AI research model to run at higher FPS. Turing’s 110 Tensor teraflops are ready and waiting for this next round of innovation. When it arrives, we’ll deploy the latest enhancements to gamers via our Game Ready Drivers.

Between DLSS and NAS (NVIDIA Adaptive Shading), NVIDIA is clearly busy trying to raise average performance in games, likely to make the cost of ray tracing more and more bearable for the hardware.
