Intel Unveils ExtraSS Framework as a Competitor to DLSS and FSR

Intel has recently introduced a new technology known as ExtraSS, which aims to enhance image quality and performance in a manner similar to DLSS and FSR. The framework, which is centred on frame extrapolation, was outlined in a research paper presented at SIGGRAPH Asia 2023 in Sydney.

The paper, titled “ExtraSS: A Framework for Joint Spatial Super Sampling and Frame Extrapolation,” describes a combination of two methods to raise frame rates and image quality: spatial upscaling and frame extrapolation. This sets Intel’s technology apart from AMD’s FSR 3 and NVIDIA’s DLSS 3, both of which rely on frame interpolation. Intel’s framework instead generates new frames from previously rendered frames only, which brings both advantages and disadvantages.
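For readers unfamiliar with the distinction, the sketch below illustrates the general idea in Python: interpolation needs the frame that comes after the generated one, whereas extrapolation works only from frames that have already been rendered. The functions and the toy linear blending are purely illustrative assumptions, not Intel’s, AMD’s, or NVIDIA’s actual algorithms.

```python
# Illustrative sketch only: contrasts the frame dependencies of interpolation
# (DLSS 3 / FSR 3 style) with extrapolation (ExtraSS style). Frame contents
# are stand-ins; this is not any vendor's real algorithm.

def interpolate_frame(prev_frame, next_frame):
    # Interpolation blends between two rendered frames, so the generated
    # frame cannot be shown until the NEXT frame has finished rendering.
    return [(a + b) / 2 for a, b in zip(prev_frame, next_frame)]

def extrapolate_frame(prev_frame, prev_prev_frame):
    # Extrapolation predicts a future frame from frames that already exist,
    # so nothing has to wait on the next rendered frame.
    return [2 * a - b for a, b in zip(prev_frame, prev_prev_frame)]

rendered = {0: [0.0, 0.0], 1: [1.0, 2.0], 2: [2.0, 4.0]}

# Interpolated frame "1.5" needs frames 1 AND 2, so frame 2 is held back.
print(interpolate_frame(rendered[1], rendered[2]))

# Extrapolated frame 2 needs only frames 0 and 1, so nothing is held back.
print(extrapolate_frame(rendered[1], rendered[0]))
```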

The research paper delves into the complexities of achieving high-quality real-time rendering, discussing principles of spatial and temporal supersampling and current approaches such as DLSS, XeSS, and FSR. To compete with these methods, Intel proposes a unified architecture that integrates spatial upscaling with frame extrapolation, known as ExtraSS.

Frame interpolation adds latency because a generated frame cannot be displayed until the next rendered frame is available, which is why technologies like NVIDIA’s Reflex or AMD’s Anti-Lag+ are used to compensate; Intel’s framework avoids the need for such measures. Generating frames solely from preceding frames can, however, result in artefacts, which the paper suggests mitigating by combining motion vectors with neural networks.
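As a rough illustration of how motion vectors can warp a previous frame forward in time, the sketch below reprojects the last rendered frame along per-pixel motion vectors. The array shapes and the simple nearest-neighbour gather are assumptions made for clarity; the paper’s actual warping and neural-network correction are far more sophisticated.

```python
import numpy as np

def warp_with_motion_vectors(prev_frame, motion_vectors):
    """Warp the previous frame forward along per-pixel motion vectors.

    prev_frame:     (H, W, 3) colour image from the last rendered frame.
    motion_vectors: (H, W, 2) per-pixel screen-space motion in pixels.
    Returns a rough extrapolated guess of the next frame.
    """
    h, w, _ = prev_frame.shape
    ys, xs = np.meshgrid(np.arange(h), np.arange(w), indexing="ij")

    # Follow each output pixel's motion vector back to its source location
    # in the previous frame, clamping to the image bounds.
    src_y = np.clip(np.round(ys - motion_vectors[..., 1]).astype(int), 0, h - 1)
    src_x = np.clip(np.round(xs - motion_vectors[..., 0]).astype(int), 0, w - 1)

    return prev_frame[src_y, src_x]

# Tiny example: a bright column moving one pixel to the right.
frame = np.zeros((4, 4, 3)); frame[:, 1] = 1.0
mv = np.zeros((4, 4, 2)); mv[..., 0] = 1.0   # one pixel of horizontal motion
print(warp_with_motion_vectors(frame, mv)[:, :, 0])
```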

To improve frame warping, Intel proposes a G-buffer (geometry buffer) guided technique that reuses temporal and spatial information. The authors combine motion vectors with G-buffer-guided information for spatial-temporal extrapolation, followed by a lightweight flow-based model to refine the shading of moving content. The resulting ExtraSS framework takes low-resolution rendered images as input and produces refined high-resolution output. In tests on Unreal Engine 4, Intel claims the approach surpasses spatial-only or temporal-only supersampling methods in both performance and quality.
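The overall flow described above might be sketched as follows. Every function name here is a hypothetical placeholder standing in for a stage the paper describes (low-resolution rendering, spatial upscaling, G-buffer guided warping, and a lightweight flow-based refinement step); none of this is Intel’s actual code, and the internals are deliberately trivial.

```python
import numpy as np

# Hypothetical stage stubs; each stands in for a component described in the
# paper, with trivial placeholder internals.

def render_low_res(frame_index, h=270, w=480):
    """Stand-in for the engine rendering a low-resolution frame plus G-buffer."""
    colour = np.random.rand(h, w, 3).astype(np.float32)
    gbuffer = {"depth": np.random.rand(h, w), "normals": np.random.rand(h, w, 3)}
    motion_vectors = np.zeros((h, w, 2), dtype=np.float32)
    return colour, gbuffer, motion_vectors

def spatial_upscale(colour, scale=4):
    """Placeholder spatial super sampling: nearest-neighbour upscale."""
    return colour.repeat(scale, axis=0).repeat(scale, axis=1)

def gbuffer_guided_warp(prev_colour, motion_vectors, gbuffer):
    """Placeholder for warping the previous frame with motion vectors, using
    G-buffer information (depth/normals) to guide disocclusion handling."""
    return prev_colour  # the real method reprojects pixels and fills holes

def flow_refinement(warped, gbuffer):
    """Placeholder for the lightweight flow-based model that corrects shading
    changes that motion vectors alone cannot capture."""
    return warped

def extrass_style_step(frame_index, prev_upscaled):
    colour, gbuffer, mv = render_low_res(frame_index)
    upscaled = spatial_upscale(colour)                 # spatial super sampling
    if prev_upscaled is None:
        return upscaled, upscaled                      # nothing to extrapolate yet
    warped = gbuffer_guided_warp(prev_upscaled, mv, gbuffer)
    extrapolated = flow_refinement(warped, gbuffer)    # shading correction
    return upscaled, extrapolated                      # rendered + generated frame

prev = None
for i in range(3):
    prev, generated = extrass_style_step(i, prev)
    print(i, prev.shape, generated.shape)
```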

The paper concludes by acknowledging the framework’s limitations, particularly the potential for artefacts in complex scenarios. Intel has not published any blog posts about the technology, and a research paper and patents are no guarantee that it will ever reach end consumers.

In light of these developments, KitGuru remarks that ExtraSS could position Intel as a formidable challenger to DLSS and FSR frame generation technologies.
