Poster
Efficient Concertormer for Image Deblurring and Beyond
Pin-Hung Kuo · Jinshan Pan · Shao-Yi Chien · Ming-Hsuan Yang
The Transformer architecture has excelled in NLP and vision tasks, but the complexity of its self-attention grows quadratically with image size, making high-resolution tasks computationally expensive. We introduce Concertormer, featuring Concerto Self-Attention (CSA) for image deblurring. CSA splits self-attention into global and local components while retaining partial information in additional dimensions, achieving linear complexity. A Cross-Dimensional Communication module enhances expressiveness by linearly combining attention maps. Additionally, our gated-dconv MLP merges the conventional two-stage Transformer design into a single stage. Extensive evaluations show that our method performs favorably against state-of-the-art approaches in deblurring, deraining, and JPEG artifact removal. Code and models will be publicly available.
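To make the gated-dconv MLP idea concrete, the sketch below shows a common gated depthwise-convolution MLP block in PyTorch; the module name, expansion factor, and layer arrangement are assumptions for illustration and are not the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class GatedDconvMLP(nn.Module):
    """Illustrative gated depthwise-conv MLP block (details assumed, not the paper's exact layout)."""

    def __init__(self, dim: int, expansion: int = 2):
        super().__init__()
        hidden = dim * expansion
        # Pointwise expansion produces two branches: content and gate.
        self.proj_in = nn.Conv2d(dim, hidden * 2, kernel_size=1)
        # Depthwise 3x3 convolution injects local spatial context into both branches.
        self.dconv = nn.Conv2d(hidden * 2, hidden * 2, kernel_size=3,
                               padding=1, groups=hidden * 2)
        self.proj_out = nn.Conv2d(hidden, dim, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        content, gate = self.dconv(self.proj_in(x)).chunk(2, dim=1)
        # Gating: the activated gate branch modulates the content branch element-wise.
        return self.proj_out(content * F.gelu(gate))


# Usage: x has shape (batch, channels, height, width).
x = torch.randn(1, 48, 64, 64)
out = GatedDconvMLP(48)(x)
assert out.shape == x.shape
```

Folding spatial mixing (the depthwise convolution) and channel mixing (the gated pointwise projections) into one block is what allows a single-stage design in place of the usual attention-then-MLP two-stage Transformer layer.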