Neural Super-Resolution for Real-time Rendering with Radiance Demodulation
CVPR 2024

  • 1Shandong University
  • 2Nanjing University
  • 3The Hong Kong Polytechnic University
*Denotes corresponding authors

Abstract


It is time-consuming to render high-resolution images in applications such as video games and virtual reality, so super-resolution technologies have become increasingly popular for real-time rendering. However, it is challenging to preserve sharp texture details, maintain temporal stability, and avoid ghosting artifacts in real-time super-resolution rendering. To address these issues, we introduce radiance demodulation, which separates the rendered image (radiance) into a lighting component and a material component, exploiting the fact that the lighting component is smoother than the rendered image while the high-resolution material component with detailed textures can be easily obtained. We perform super-resolution on the lighting component only and re-modulate it with the high-resolution material component to obtain the final super-resolution image with more texture details. A reliable warping module is proposed that explicitly marks occluded regions to avoid ghosting artifacts. To further enhance temporal stability, we design a frame-recurrent neural network and a temporal loss that aggregate the previous and current frames, better capturing the spatial-temporal consistency among reconstructed frames. As a result, our method produces temporally stable results with high-quality details in real-time rendering, even in the challenging 4 × 4 super-resolution scenario.
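The demodulate / super-resolve / re-modulate pipeline described above can be sketched in a few lines of numpy. This is a minimal illustration, not the paper's implementation: the nearest-neighbor upsampler stands in for the frame-recurrent super-resolution network, the `eps` guard and the function names are our own, and in practice the high-resolution material component would come from the renderer's G-buffer rather than from upsampling.

```python
import numpy as np

EPS = 1e-4  # guard against division by zero for near-black materials

def demodulate(radiance, material):
    """Divide out the material (e.g. albedo) to recover the
    smoother lighting component from the rendered radiance."""
    return radiance / (material + EPS)

def remodulate(lighting, material):
    """Multiply the super-resolved lighting back with the
    high-resolution material to form the final image."""
    return lighting * material

def upsample_nearest(x, s):
    """Stand-in for the neural super-resolution of the lighting."""
    return x.repeat(s, axis=0).repeat(s, axis=1)

# Toy low-resolution frame: smooth lighting times a textured material.
rng = np.random.default_rng(0)
H, W, s = 4, 4, 4
material_lr = rng.random((H, W, 3)) * 0.8 + 0.2   # keep away from zero
lighting_gt = np.full((H, W, 3), 0.5)             # perfectly smooth lighting
radiance_lr = lighting_gt * material_lr           # the rendered image

# 1) demodulate, 2) super-resolve only the lighting, 3) re-modulate
light_lr = demodulate(radiance_lr, material_lr)
light_sr = upsample_nearest(light_lr, s)
material_hr = upsample_nearest(material_lr, s)    # ideally rendered at high res
image_sr = remodulate(light_sr, material_hr)      # (16, 16, 3) output
```

Because the lighting is far smoother than the textured radiance, the network only has to reconstruct a low-frequency signal; the sharp texture detail re-enters exactly through the high-resolution material in the final multiplication.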

6 × 6 Results

4 × 4 Results

2 × 2 Results


Citation

Acknowledgements

The website template was borrowed from BakedSDF.