

Poster

Boosting Dynamic Prototyping via Dual-Knowledge Clustering for Semi-Supervised Lifelong Person Re-Identification

Kunlun Xu · Fan Zhuo · Jiangmeng Li · Xu Zou · Jiahuan Zhou


Abstract:

Current lifelong person re-identification (LReID) methods rely predominantly on fully labeled data streams. However, in real-world scenarios where annotation resources are limited, a vast amount of unlabeled data coexists with scarce labeled samples, giving rise to the Semi-Supervised LReID (Semi-LReID) problem and causing existing LReID methods to suffer severe performance degradation. Despite its practical significance, Semi-LReID remains unexplored due to its inherent challenges: existing LReID methods, even when combined with semi-supervised strategies, achieve limited long-term adaptation because they struggle with the noisy knowledge that arises when exploiting unlabeled data, which hinders new knowledge acquisition and exacerbates catastrophic forgetting. In this paper, we pioneer the investigation of Semi-LReID and introduce a novel Self-Reinforcing PRototype Evolution with Dual-Knowledge Cooperation framework (SPRED). Our key innovation lies in establishing a self-reinforcing cycle between dynamic prototype-guided pseudo-label generation and new-old knowledge collaborative purification, which enhances the utilization of unlabeled data. Specifically, learnable identity prototypes are introduced to dynamically capture identity distributions as the pseudo-labels evolve and, in turn, to generate high-quality pseudo-labels, while dual-knowledge cooperation, which integrates the specialization of the current model with the generalization of the historical model, refines the pseudo-labels by filtering out noisy information. Through this cyclic design, reliable pseudo-labels are progressively mined to improve current-stage learning and ensure positive knowledge propagation over long-term learning. In addition, a prototype structure-based knowledge distillation loss is developed to mitigate catastrophic forgetting, further strengthening long-term knowledge consolidation. Extensive experiments on established Semi-LReID benchmarks demonstrate that SPRED achieves state-of-the-art performance. Our code will be made publicly available.
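To make the two core ideas in the abstract concrete, the following is a minimal, illustrative PyTorch-style sketch, not the authors' released code: (1) learnable identity prototypes that assign pseudo-labels to unlabeled features, (2) a "dual-knowledge" purification step that keeps only pseudo-labels on which the current and the frozen historical model agree, and (3) an assumed instantiation of a prototype structure-based distillation term. All names (`Prototypes`, `dual_knowledge_pseudo_labels`, `prototype_structure_distill`) and the specific similarity/agreement rules are hypothetical placeholders.

```python
import torch
import torch.nn.functional as F


class Prototypes(torch.nn.Module):
    """Learnable identity prototypes; one row per (pseudo-)identity."""

    def __init__(self, num_ids: int, dim: int):
        super().__init__()
        self.weight = torch.nn.Parameter(torch.randn(num_ids, dim) * 0.02)

    def assign(self, feats: torch.Tensor) -> torch.Tensor:
        # Cosine similarity between features and prototypes -> soft identity assignments.
        return F.normalize(feats, dim=1) @ F.normalize(self.weight, dim=1).t()


def dual_knowledge_pseudo_labels(feats_new, feats_old, protos_new, protos_old, tau=0.5):
    """Keep a pseudo-label only when the current (new) and historical (old) views agree
    and the current model is sufficiently confident; other samples are treated as noisy."""
    sim_new = protos_new.assign(feats_new)   # current-model specialization
    sim_old = protos_old.assign(feats_old)   # frozen historical-model generalization
    conf_new, label_new = sim_new.max(dim=1)
    _, label_old = sim_old.max(dim=1)
    keep = (label_new == label_old) & (conf_new > tau)
    return label_new, keep


def prototype_structure_distill(protos_new, protos_old):
    """Anti-forgetting term: match the pairwise similarity structure of the current
    prototypes to that of the (detached) historical prototypes."""
    p_new = F.normalize(protos_new.weight, dim=1)
    p_old = F.normalize(protos_old.weight, dim=1).detach()
    return F.mse_loss(p_new @ p_new.t(), p_old @ p_old.t())
```

In a training loop under these assumptions, the retained pseudo-labels would feed a standard identity classification loss on the unlabeled batch, and the structure-distillation term would be added with a weighting coefficient; the purified labels then update the prototypes, which in turn produce better pseudo-labels at the next iteration, mirroring the self-reinforcing cycle described above.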
