Poster
Unlocking the Potential of Diffusion Priors in Blind Face Restoration
Yunqi Miao · Zhiyu Qu · Mingqi Gao · Changrui Chen · Jifei Song · Jungong Han · Jiankang Deng
Although diffusion priors are emerging as a powerful solution for blind face restoration (BFR), the inherent gap between the vanilla diffusion model and BFR settings hinders seamless adaptation. The gap mainly stems from the discrepancy between 1) high-quality (HQ) and low-quality (LQ) images and 2) synthesized and real-world images. The vanilla diffusion model is trained on images with little or no degradation, whereas BFR handles moderately to severely degraded images. Additionally, the LQ images used for training are synthesized by a naive degradation model with limited degradation patterns, which fails to simulate the complex and unknown degradations in real-world scenarios. In this work, we propose FLIPNET, a unified network that switches between two modes to address these specific gaps. In restoration mode, the model gradually integrates BFR-oriented features and face embeddings from LQ images to achieve authentic and faithful face restoration. In degradation mode, the model synthesizes real-world-like degraded images based on knowledge learned from real-world degradation datasets. Extensive evaluations on benchmark datasets show that our model 1) outperforms previous diffusion-prior-based BFR methods in terms of authenticity and fidelity, and 2) outperforms the naive degradation model in modeling real-world degradations.
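For context, the "naive degradation model" criticized above typically refers to the classical synthetic pipeline used in BFR training (Gaussian blur, downsampling, additive noise, and usually JPEG compression). The sketch below illustrates that kind of pipeline on a single-channel image; it is an assumption-laden toy, not the abstract's method, and the function name and parameters (`naive_degrade`, `scale`, `blur_sigma`, `noise_sigma`) are hypothetical.

```python
import numpy as np

def naive_degrade(hq, scale=4, blur_sigma=2.0, noise_sigma=0.05, seed=0):
    """Toy classical BFR degradation: Gaussian blur -> downsample -> noise.
    A JPEG compression step usually follows in practice; omitted for brevity.
    `hq` is a 2-D float array with values in [0, 1]."""
    rng = np.random.default_rng(seed)
    # Build a normalized 1-D Gaussian kernel for separable blurring.
    radius = int(3 * blur_sigma)
    ax = np.arange(-radius, radius + 1)
    k1d = np.exp(-(ax ** 2) / (2 * blur_sigma ** 2))
    k1d /= k1d.sum()

    def blur_1d(col):
        # Reflect-pad so the "valid" convolution keeps the original length.
        return np.convolve(np.pad(col, radius, mode="reflect"), k1d, mode="valid")

    # Separable blur along both image axes.
    blurred = np.apply_along_axis(blur_1d, 0, hq)
    blurred = np.apply_along_axis(blur_1d, 1, blurred)
    # Nearest-neighbour downsample by `scale`.
    lq = blurred[::scale, ::scale]
    # Additive Gaussian noise, clipped back to the valid intensity range.
    return np.clip(lq + rng.normal(0.0, noise_sigma, lq.shape), 0.0, 1.0)

hq = np.full((64, 64), 0.5)   # flat gray test image
lq = naive_degrade(hq)        # 16x16 degraded output
```

Because the blur kernel, downsampling factor, and noise level are drawn from small fixed sets, pipelines like this cover only a narrow slice of real-world degradations, which is the gap FLIPNET's degradation mode is meant to close.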