Poster
Incremental Few-Shot Semantic Segmentation via Multi-Level Switchable Visual Prompts
Maoxian Wan · Kaige Li · Qichuan Geng · Weimin Shi · Zhong Zhou
Existing incremental few-shot semantic segmentation (IFSS) methods often learn novel classes by fine-tuning parameters from previous stages. This inevitably reduces the distinguishability of old-class features, leading to catastrophic forgetting and overfitting to the limited new samples. In this paper, we propose a novel prompt-based IFSS method with a visual prompt pool that stores and switches multi-granular knowledge across stages, enhancing the model's ability to learn new classes. Specifically, we introduce three levels of prompts: 1) Task-persistent prompts: capturing generalizable knowledge shared across stages, such as foreground-background distributions, to ensure consistent recognition guidance; 2) Stage-specific prompts: adapting to the unique requirements of each stage by integrating its discriminative knowledge (e.g., shape differences) with common knowledge from previous stages; and 3) Region-unique prompts: encoding category-specific structures (e.g., edges) to more accurately guide the model to retain local details. Furthermore, we introduce a prompt switching mechanism that adaptively allocates the knowledge required for base and novel classes, avoiding interference between prompts, mitigating catastrophic forgetting, and curbing the growth in computation across stages. Our method achieves a new state-of-the-art performance, outperforming previous SoTA methods by 30.28% mIoU-N on VOC and 13.90% mIoU-N on COCO under the 1-shot setting.
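The abstract describes a three-level prompt pool combined with a switching mechanism that routes prompts per input. Below is a minimal, hypothetical sketch of how such a pool might be organized in a PyTorch-style implementation; the class and parameter names (PromptPool, task_prompts, stage_prompts, region_prompts, switch) are illustrative assumptions, not the authors' actual code or architecture.

```python
import torch
import torch.nn as nn


class PromptPool(nn.Module):
    """Illustrative three-level prompt pool (assumed structure, not the paper's implementation)."""

    def __init__(self, dim, num_stages, num_regions, prompts_per_level=4):
        super().__init__()
        # Task-persistent prompts: shared across all incremental stages.
        self.task_prompts = nn.Parameter(torch.randn(prompts_per_level, dim))
        # Stage-specific prompts: one small set per incremental stage.
        self.stage_prompts = nn.ParameterList(
            [nn.Parameter(torch.randn(prompts_per_level, dim)) for _ in range(num_stages)]
        )
        # Region-unique prompts: intended to encode category-specific local structure.
        self.region_prompts = nn.Parameter(torch.randn(num_regions, prompts_per_level, dim))
        # Lightweight switch that scores which stage/region prompts a query needs.
        self.switch = nn.Linear(dim, num_stages + num_regions)

    def forward(self, query_feat, stage_id):
        # query_feat: (B, dim) pooled image feature used to route prompts.
        scores = self.switch(query_feat).softmax(dim=-1)            # (B, S + R)
        stage_scores, region_scores = scores.split(
            [len(self.stage_prompts), self.region_prompts.shape[0]], dim=-1
        )
        # Only prompts up to the current stage are selectable; earlier ones stay frozen.
        stage_stack = torch.stack(
            [self.stage_prompts[i] for i in range(stage_id + 1)]
        )                                                            # (s, P, dim)
        stage_mix = torch.einsum(
            "bs,spd->bpd", stage_scores[:, : stage_id + 1], stage_stack
        )
        region_mix = torch.einsum("br,rpd->bpd", region_scores, self.region_prompts)
        task = self.task_prompts.unsqueeze(0).expand(query_feat.shape[0], -1, -1)
        # Concatenated prompt tokens, e.g. to be prepended to a segmentation decoder's input.
        return torch.cat([task, stage_mix, region_mix], dim=1)      # (B, 3*P, dim)
```

In this reading, switching amounts to a learned soft selection over stage- and region-level prompts, so new-class adaptation can reuse or ignore old prompts without overwriting them; the actual mechanism in the paper may differ in detail.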