

Poster

Laboring on less labors: RPCA Paradigm for Pan-sharpening

Honghui Xu · Chuangjie Fang · Yibin Wang · Jie Wu · Jianwei Zheng


Abstract:

Deep unfolding network (DUN)-based pansharpening has shed new light on high spatial/spectral resolution image acquisition, serving as a computational alternative to physical devices. Although it enjoys both deep feature learning and acceptable interpretability, current pansharpening requires substantial effort to approximate the degradation matrices along the spatial and spectral dimensions, and its performance is hardly guaranteed in complex scenarios. Moreover, as a key step of the DUN update, current solutions rely solely on black-box networks to learn data-driven priors, which further leads to laborious architecture crafting and compromised interpretability. To counteract these dilemmas, we propose a new solution, the RPCA-based Unfolding Network (RUN), which shrinks the original two degradations to only one. Specifically, exploiting the significant sparsity of the spatial offset component, i.e., the difference between the upsampled image and the desired target, we recast pansharpening as a novel Robust Principal Component Analysis (RPCA)-based paradigm. On that basis, the tricky approximation of the spatial degradation matrix and its transpose is naturally avoided. For the prior-learning step of the RPCA unfolding, an efficient Nonlinear transformation-based Tensor Nuclear Norm (NTNN) is engineered, in which the computationally intensive Singular Value Decomposition is avoided with the aid of depthwise convolutions. More importantly, NTNN is plug-and-play and can be easily embedded into Transformer or CNN architectures to learn both global and local features. Experimental results on multiple remote sensing datasets demonstrate the superiority of the proposed method over previous SOTA methods. Representatively, with the two formerly indispensable degradations omitted, RUN still achieves a 0.899 dB PSNR gain on the GF2 dataset.
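To make the reformulation concrete, the following is a minimal sketch of the RPCA split implied by the abstract, assuming the upsampled multispectral observation Y decomposes into the desired target X (low-rank under a suitable tensor nuclear norm) plus a sparse spatial offset S; the exact objective, norms, and any remaining spectral data term are assumptions, not quoted from the paper.

\begin{equation*}
\min_{X,\,S}\; \|X\|_{\mathrm{TNN}} \;+\; \lambda\,\|S\|_{1}
\quad \text{s.t.} \quad Y \;=\; X + S,
\end{equation*}

so that no spatial degradation matrix (or its transpose) appears in the updates, only the sparsity of S and a low-rank prior on X.

For the prior-learning step, the sketch below illustrates one way an SVD-free, depthwise-convolution-based low-rank prior of this kind might be implemented; the module name, transform design, and threshold are hypothetical and are not the authors' NTNN implementation.

```python
import torch
import torch.nn as nn


class LowRankPriorSketch(nn.Module):
    """Illustrative SVD-free low-rank prior: a learned nonlinear transform
    built from depthwise convolutions, soft-thresholding in the transformed
    domain, and a learned inverse transform. Hypothetical, for exposition."""

    def __init__(self, channels: int, thresh: float = 0.1):
        super().__init__()
        # Depthwise convolutions (groups=channels) act per channel,
        # keeping the cost linear in the number of spectral bands.
        self.fwd = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.act = nn.ReLU(inplace=True)
        self.inv = nn.Conv2d(channels, channels, 3, padding=1, groups=channels)
        self.thresh = thresh

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.act(self.fwd(x))                                    # nonlinear transform
        z = torch.sign(z) * torch.clamp(z.abs() - self.thresh, min=0.0)  # shrinkage step
        return self.inv(z)                                           # back to image domain


# Usage sketch: an 8-band feature map of size 64x64.
x = torch.randn(1, 8, 64, 64)
y = LowRankPriorSketch(8)(x)
```

Because the module only uses per-channel convolutions and an elementwise shrinkage, it can be dropped into a Transformer or CNN stage as a plug-and-play block, which is the role the abstract assigns to NTNN.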
