Poster
DecAD: Decoupling Anomalies in Latent Space for Multi-Class Unsupervised Anomaly Detection
Xiaolei Wang · Xiaoyang Wang · Huihui Bai · ENG LIM · Jimin XIAO
Recent unsupervised distillation-based and reconstruction-based methods rely on the feature inconsistency between a frozen encoder and a corresponding learnable decoder to achieve anomaly localization. However, these methods share a critical limitation: decoders trained exclusively on normal samples reconstruct abnormal features unexpectedly well, degrading detection performance. We identify this phenomenon as 'anomaly leakage' (AL): a decoder optimized with a reconstruction loss tends to directly copy the encoded input, regardless of whether that input is a normal or abnormal feature. To address this challenge, we propose a novel framework that explicitly decouples encoded features into normal and abnormal components through a bounded invertible mapping in a prior latent space. Compared with previous methods, the invertible structure can eliminate anomalous information point-to-point without damaging the information of neighboring patches, improving reconstruction quality. Moreover, the framework suppresses the abnormal component before reconstructing features through the inverse mapping. In this process, effective synthetic abnormal features are essential for training the decoupling; we therefore apply adversarial training to find suitable perturbations that simulate feature-level anomalies. Extensive experiments on benchmark datasets, including MVTec AD, VisA, and Real-IAD, demonstrate that our method achieves competitive performance compared with state-of-the-art approaches. The code will be made publicly available.
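The core idea — an invertible map that lets an abnormal latent component be suppressed point-to-point before exact inverse reconstruction — can be illustrated with a toy affine-coupling layer. This is a minimal sketch of the general mechanism, not the authors' architecture: the network weights, dimensions, and the choice of zeroing the abnormal sub-vector are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's model): a bounded invertible
# affine-coupling map sends an encoded feature z into a latent space
# split into a "normal" part (u1) and an "abnormal" part (u2). Zeroing
# u2 before inverting suppresses anomalies without touching u1.

rng = np.random.default_rng(0)
D = 8                                          # feature dimension (toy)
W_s = rng.normal(size=(D // 2, D // 2)) * 0.1  # toy coupling weights
W_t = rng.normal(size=(D // 2, D // 2)) * 0.1

def forward(z):
    """Affine coupling: z -> (u1, u2); u1 passes through unchanged."""
    z1, z2 = z[:D // 2], z[D // 2:]
    s = np.tanh(W_s @ z1)          # bounded scale keeps the map stable
    t = W_t @ z1
    u2 = z2 * np.exp(s) + t        # transform the second half only
    return z1, u2

def inverse(u1, u2):
    """Exact inverse of `forward` (s, t depend only on u1)."""
    s = np.tanh(W_s @ u1)
    t = W_t @ u1
    z2 = (u2 - t) * np.exp(-s)
    return np.concatenate([u1, z2])

z = rng.normal(size=D)             # stand-in for an encoded feature
u1, u2 = forward(z)

# Invertibility: mapping back with both components intact recovers z.
assert np.allclose(inverse(u1, u2), z)

# Suppressing the "abnormal" component before the inverse mapping gives
# a normalized reconstruction; the entries carried by u1 are unchanged.
z_clean = inverse(u1, np.zeros_like(u2))
```

Because the coupling transform is exactly invertible, editing one sub-vector in latent space changes only the corresponding entries after inversion — the point-to-point property the abstract contrasts with decoders that smear information across neighboring patches.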