Poster

Q-Norm: Robust Representation Learning via Quality-Adaptive Normalization

Ying Zhou · Lanning Zhang (Xidian University) · Fei (Hangzhou Institute of Technology, Xidian University) · Ziyun (KTH Royal Institute of Technology) · Maoying (University of Technology Sydney) · Jinlan (Hangzhou Dianzi University) · Nannan


Abstract:

Although deep neural networks have achieved remarkable success across computer vision tasks, they face significant challenges in degraded image understanding because quality variations induce domain shifts. Drawing biological inspiration from the human visual system (HVS), which dynamically adjusts its perception strategy through contrast gain control and selective attention to salient regions, we propose Quality-Adaptive Normalization (Q-Norm), a novel normalization method that learns adaptive parameters guided by image quality features. Our approach addresses two critical limitations of conventional normalization techniques: 1) Domain Covariate Shift: existing methods fail to align feature distributions across quality domains, whereas Q-Norm implicitly achieves cross-domain alignment through quality-aware parameter adaptation, without explicit alignment losses. 2) Lack of Biological Plausibility: by mimicking the HVS's contrast normalization mechanisms and attention-based feature selection, Q-Norm dynamically adjusts its mean and variance parameters using a pre-trained quality assessment model, ensuring robustness to image degradation. Extensive experiments across multiple tasks (image classification, semantic segmentation, object detection) demonstrate that Q-Norm consistently outperforms baseline methods on low-quality images. Code will be made available after peer review.

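To make the mechanism concrete, the following is a minimal PyTorch-style sketch of a quality-adaptive normalization layer. It assumes the layer normalizes features with instance statistics and predicts per-channel scale and shift from a quality embedding (for example, features of a frozen image quality assessment model). The class and parameter names (QualityAdaptiveNorm, quality_dim, to_gamma, to_beta) are illustrative assumptions, not the authors' implementation, whose exact parameterization is not specified in the abstract.

```python
import torch
import torch.nn as nn


class QualityAdaptiveNorm(nn.Module):
    """Sketch of a quality-adaptive normalization layer.

    Feature maps are normalized with per-instance statistics, then
    re-scaled and shifted by parameters predicted from a quality
    embedding. Illustrative only; not the paper's exact formulation.
    """

    def __init__(self, num_channels: int, quality_dim: int, eps: float = 1e-5):
        super().__init__()
        self.eps = eps
        # Predict per-channel scale (gamma) and shift (beta) from the quality embedding.
        self.to_gamma = nn.Linear(quality_dim, num_channels)
        self.to_beta = nn.Linear(quality_dim, num_channels)

    def forward(self, x: torch.Tensor, quality: torch.Tensor) -> torch.Tensor:
        # x: (N, C, H, W) feature map; quality: (N, quality_dim) embedding.
        mean = x.mean(dim=(2, 3), keepdim=True)
        var = x.var(dim=(2, 3), keepdim=True, unbiased=False)
        x_hat = (x - mean) / torch.sqrt(var + self.eps)
        gamma = 1.0 + self.to_gamma(quality).unsqueeze(-1).unsqueeze(-1)
        beta = self.to_beta(quality).unsqueeze(-1).unsqueeze(-1)
        return gamma * x_hat + beta


if __name__ == "__main__":
    layer = QualityAdaptiveNorm(num_channels=64, quality_dim=128)
    feats = torch.randn(2, 64, 32, 32)   # backbone features
    q_embed = torch.randn(2, 128)        # stand-in for quality-model features
    out = layer(feats, q_embed)
    print(out.shape)  # torch.Size([2, 64, 32, 32])
```

In this sketch the quality embedding modulates the affine parameters of the normalization, which is one straightforward way to realize "quality-aware parameter adaptation" without any explicit alignment loss.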