Poster
Exploiting Diffusion Prior for Task-driven Image Restoration
Jaeha Kim · Junghun Oh · Kyoung Mu Lee
Task-driven image restoration (TDIR) has recently emerged to address performance drops in high-level vision tasks caused by low-quality (LQ) inputs. The goal of TDIR is to improve both visual quality and task performance. Previous TDIR methods struggle to handle practical scenarios in which images are degraded by multiple complex factors, leaving minimal clues for restoration. This leads us to leverage the diffusion prior, one of the most powerful image priors. However, while the diffusion prior can help generate visually plausible results, using it to restore task-relevant details remains challenging, even when combined with state-of-the-art TDIR methods. To address this, we propose EDTR, the first TDIR method that incorporates the diffusion prior in a way that harnesses its strength to restore task-relevant details. Specifically, we propose directly leveraging useful clues from LQ images in the diffusion process by generating from pre-restored LQ images with mild noise added. Moreover, we suggest one-step denoising to prevent the generation of redundant details that dilute crucial task-related information. We demonstrate that our method effectively utilizes the diffusion prior to restore task-relevant details, significantly enhancing task performance and visual quality across diverse tasks with complex degradations.
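The abstract describes two key ideas: starting the diffusion process from a pre-restored LQ image with only mild noise added, and recovering the result with a single denoising step. The following is a minimal sketch of that pipeline, not the authors' implementation; it assumes a DDPM-style noise schedule and uses hypothetical components (`restorer`, `denoiser`, `alphas_cumprod`, `t_mild`) that stand in for whatever restoration network, pre-trained diffusion model, and schedule one has available.

```python
import torch

def edtr_style_restore(lq_image, restorer, denoiser, alphas_cumprod, t_mild=200):
    """Sketch of the two ideas stated in the abstract:
    1) generate from a pre-restored LQ image with *mild* noise added, so clues
       from the LQ input are preserved rather than discarded;
    2) use one-step denoising to avoid generating redundant details that could
       dilute task-relevant information.
    `restorer`, `denoiser`, `alphas_cumprod`, and `t_mild` are assumed/hypothetical.
    """
    # Step 1: pre-restore the low-quality input with a conventional restorer.
    x0_pre = restorer(lq_image)

    # Step 2: add mild noise, i.e. run the forward diffusion only up to a small
    # timestep t_mild << T, keeping most of the pre-restored content intact.
    a_bar = alphas_cumprod[t_mild]
    noise = torch.randn_like(x0_pre)
    x_t = a_bar.sqrt() * x0_pre + (1.0 - a_bar).sqrt() * noise

    # Step 3: one-step denoising: predict the noise once and jump directly to
    # an x0 estimate instead of iterating over all remaining timesteps.
    t = torch.full((x_t.shape[0],), t_mild, device=x_t.device, dtype=torch.long)
    eps_pred = denoiser(x_t, t)
    x0_hat = (x_t - (1.0 - a_bar).sqrt() * eps_pred) / a_bar.sqrt()
    return x0_hat
```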