Poster

DictAS: A Framework for Class-Generalizable Few-Shot Anomaly Segmentation via Dictionary Lookup

Zhen Qu · Xian Tao · Xinyi Gong · ShiChen Qu · Xiaopei Zhang · Xingang Wang · Fei Shen · Zhengtao Zhang · Mukesh Prasad · Guiguang Ding


Abstract:

Recent vision-language models (e.g., CLIP) have demonstrated remarkable generalization to unseen classes in few-shot anomaly segmentation (FSAS), leveraging supervised prompt learning or fine-tuning on seen classes. However, their cross-category generalization relies mainly on prior knowledge of real anomaly samples from seen classes. In this paper, we propose a novel framework, DictAS, which enables a unified model to detect visual anomalies in unseen object categories without any retraining on the target data, using only a few normal reference images as visual prompts. The insight behind DictAS is to transfer dictionary-lookup capability to the FSAS task for unseen classes via self-supervised learning, rather than merely memorizing normal and abnormal feature patterns from the training set. Specifically, DictAS consists of three main components: (1) Dictionary Construction, which simulates the index and content of a real dictionary by building it from normal reference image features; (2) Dictionary Lookup, which retrieves queried region features from the dictionary via a sparse lookup strategy, so that a queried feature that cannot be successfully retrieved is classified as an anomaly; and (3) Query Discrimination Regularization, which strengthens anomaly discrimination by making abnormal features harder to retrieve from the dictionary, enforced through a proposed Contrastive Query Constraint and Text Alignment Constraint. Extensive experiments on seven public industrial and medical datasets demonstrate that DictAS consistently outperforms state-of-the-art FSAS methods. Code will be released upon acceptance.
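To make the retrieve-or-flag idea concrete, below is a minimal sketch of sparse dictionary lookup for anomaly scoring, not the authors' implementation: each query patch feature is reconstructed from its top-k nearest normal reference features, and patches the dictionary cannot reproduce receive high anomaly scores. The function name, the `topk` and `tau` parameters, and the cosine-reconstruction scoring are illustrative assumptions.

```python
# Illustrative sketch of dictionary lookup for anomaly scoring.
# NOT the DictAS implementation; names and scoring are assumptions.
import torch
import torch.nn.functional as F

def anomaly_scores(query_feats, ref_feats, topk=16, tau=0.07):
    """Score each query patch by how well it can be retrieved
    (reconstructed) from a dictionary of normal reference patches.

    query_feats: (M, D) patch features of the test image
    ref_feats:   (N, D) patch features from normal reference images
    Returns:     (M,) anomaly scores; higher = harder to retrieve
    """
    q = F.normalize(query_feats, dim=-1)          # work in cosine space
    d = F.normalize(ref_feats, dim=-1)
    sim = q @ d.t()                               # (M, N) similarities
    # Sparse lookup: keep only the top-k dictionary entries per query
    vals, idx = sim.topk(topk, dim=-1)            # (M, k)
    weights = torch.softmax(vals / tau, dim=-1)   # sparse attention weights
    retrieved = torch.einsum('mk,mkd->md', weights, d[idx])
    # A query the dictionary cannot reproduce is flagged as anomalous
    return 1.0 - F.cosine_similarity(q, retrieved, dim=-1)

# Toy usage with random features standing in for CLIP patch embeddings
ref = torch.randn(1024, 512)   # normal reference dictionary
qry = torch.randn(196, 512)    # 14x14 query patch grid
print(anomaly_scores(qry, ref).shape)  # torch.Size([196])
```

In this reading, the training-time regularizers described in the abstract would shape the feature space so that normal query features are easily reconstructed by such a lookup while abnormal ones are not.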
