Poster

Learning Deblurring Texture Prior from Unpaired Data with Diffusion Model

Chengxu Liu · Lu Qi · Jinshan Pan · Xueming Qian · Ming-Hsuan Yang


Abstract:

Since acquiring large amounts of realistic blurry-sharp image pairs is difficult and expensive, learning blind image deblurring from unpaired data is a more practical and promising solution. Unfortunately, most existing approaches rely solely on adversarial learning to bridge the gap between the blurry and sharp domains, ignoring the complex and unpredictable nature of real-world blur patterns. In this paper, we propose a novel diffusion model (DM)-based framework, dubbed TP-Diff, for image deblurring by learning a spatially varying texture prior from unpaired sharp data. In particular, TP-Diff employs a DM to generate the prior knowledge used to recover the texture of blurry images. To implement this, we propose a Texture Prior Encoder (TPE) that introduces a memory mechanism to encode the texture prior and thereby provide supervision for DM training. To fully exploit the generated texture priors, we further present the Texture Transfer Transformer layer (TTformer), in which a novel Filter-Modulated Multi-head Self-Attention (FM-MSA) mechanism efficiently removes spatially varying blur through adaptive filtering. In addition, a wavelet-based adversarial loss is used to preserve high-frequency texture details. Extensive evaluations demonstrate that TP-Diff provides a promising unsupervised deblurring solution and outperforms state-of-the-art methods on six widely used benchmarks.
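The abstract does not give implementation details for FM-MSA, so the following is only a minimal PyTorch sketch of what a filter-modulated multi-head self-attention layer could look like. It assumes that "filter modulation" means predicting a per-pixel depthwise kernel from the input features and applying it to the attention output; the class name FMMSA, the channel-wise attention formulation, and all hyperparameters are illustrative assumptions, not the authors' code.

```python
import torch
import torch.nn as nn


class FMMSA(nn.Module):
    """Illustrative sketch of a filter-modulated multi-head self-attention layer.

    Assumption: the modulation predicts a per-pixel depthwise kernel and
    applies it to the attention output, so the layer can adapt to spatially
    varying blur. Not the authors' implementation.
    """

    def __init__(self, dim, num_heads=4, kernel_size=3):
        super().__init__()
        self.num_heads = num_heads
        self.kernel_size = kernel_size
        self.qkv = nn.Conv2d(dim, dim * 3, 1)
        self.proj = nn.Conv2d(dim, dim, 1)
        # Predicts a kernel_size**2 filter for every spatial location.
        self.filter_pred = nn.Conv2d(dim, kernel_size ** 2, 1)
        self.unfold = nn.Unfold(kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        b, c, h, w = x.shape
        d = c // self.num_heads
        q, k, v = self.qkv(x).chunk(3, dim=1)
        q = q.reshape(b, self.num_heads, d, h * w)
        k = k.reshape(b, self.num_heads, d, h * w)
        v = v.reshape(b, self.num_heads, d, h * w)
        # Channel-wise attention keeps the cost linear in the number of pixels.
        attn = (q @ k.transpose(-2, -1)) * d ** -0.5
        out = (attn.softmax(dim=-1) @ v).reshape(b, c, h, w)
        # Adaptive filtering: weight each pixel's neighborhood by the
        # predicted kernel (softmax-normalized over the kernel window).
        kern = self.filter_pred(x).softmax(dim=1)                  # (b, k*k, h, w)
        patches = self.unfold(out).reshape(b, c, -1, h * w)        # (b, c, k*k, h*w)
        out = (patches * kern.reshape(b, 1, -1, h * w)).sum(dim=2)
        return self.proj(out.reshape(b, c, h, w))
```

The layer preserves spatial shape, e.g. `FMMSA(32)(torch.randn(1, 32, 64, 64))` returns a `(1, 32, 64, 64)` tensor, so it can drop into a standard transformer block in place of vanilla self-attention.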
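Likewise, a wavelet-based adversarial loss can be sketched by decomposing images with a single-level Haar transform and applying an adversarial loss only to the high-frequency subbands, which carry the texture detail the abstract says the loss preserves. The functions below, the hinge formulation, and the `disc` discriminator are assumptions for illustration; the paper's exact formulation may differ.

```python
import torch
import torch.nn.functional as F


def haar_subbands(x):
    """Single-level Haar decomposition via a strided depthwise convolution.

    Returns the (LL, LH, HL, HH) subbands at half resolution; H and W are
    assumed to be even.
    """
    b, c, h, w = x.shape
    ll = torch.tensor([[0.5, 0.5], [0.5, 0.5]])
    lh = torch.tensor([[0.5, 0.5], [-0.5, -0.5]])
    hl = torch.tensor([[0.5, -0.5], [0.5, -0.5]])
    hh = torch.tensor([[0.5, -0.5], [-0.5, 0.5]])
    filt = torch.stack([ll, lh, hl, hh]).unsqueeze(1).to(x)  # (4, 1, 2, 2)
    weight = filt.repeat(c, 1, 1, 1)                         # (4c, 1, 2, 2)
    y = F.conv2d(x, weight, stride=2, groups=c)              # (b, 4c, h/2, w/2)
    y = y.view(b, c, 4, h // 2, w // 2)
    return y[:, :, 0], y[:, :, 1], y[:, :, 2], y[:, :, 3]


def wavelet_adv_loss(disc, restored, sharp):
    """Hinge adversarial loss on the stacked high-frequency subbands.

    `disc` is a hypothetical patch discriminator scoring the concatenated
    LH/HL/HH subbands; only high frequencies enter the adversarial game.
    """
    hf_fake = torch.cat(haar_subbands(restored)[1:], dim=1)
    hf_real = torch.cat(haar_subbands(sharp)[1:], dim=1)
    d_loss = (F.relu(1 - disc(hf_real)).mean()
              + F.relu(1 + disc(hf_fake.detach())).mean())
    g_loss = -disc(hf_fake).mean()
    return d_loss, g_loss
```

Restricting the discriminator to high-frequency subbands is a common way to keep an adversarial term focused on texture rather than global color or luminance, which fits the stated goal of preserving high-frequency detail.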
