

Poster

A Tiny Change, A Giant Leap: Long-Tailed Class-Incremental Learning via Geometric Prototype Alignment

Xinyi Lai · Luojun Lin · Weijie Chen · Yuanlong Yu


Abstract:

Long-Tailed Class-Incremental Learning (LT-CIL) faces critical challenges due to biased gradient updates arising from imbalanced data distributions and the inherent stability-plasticity trade-off, which together degrade tail-class performance and induce catastrophic forgetting. To address these limitations, we introduce Geometric Prototype Alignment (GPA), a model-agnostic classifier initialization method that calibrates learning dynamics through geometric feature-space alignment. GPA initializes classifier weights by aligning them with frozen class prototypes on a unit hypersphere, explicitly disentangling magnitude imbalance from directional discriminability. During incremental training, we further introduce Dynamic Anchoring to adjust weights while preserving geometric consistency, thereby balancing plasticity for new classes with stability for previously learned knowledge. When integrated into state-of-the-art CIL frameworks such as LUCIR and DualPrompt, GPA yields significant improvements: an average incremental accuracy gain of 12.3% and a 12.2% reduction in forgetting rate on CIFAR100-LT. Theoretical analysis reveals that GPA accelerates convergence by 2.7× and achieves nearly Fisher-optimal decision boundaries. Our work lays a geometric foundation for stable representation learning in LT-CIL scenarios, addressing both catastrophic forgetting and tail-class degradation.
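The core initialization step described above — aligning classifier weights with frozen class prototypes projected onto a unit hypersphere so that only directional information survives — can be sketched as follows. This is a minimal illustrative sketch based on the abstract alone, not the authors' implementation; the function name `gpa_init` and the toy prototypes are assumptions.

```python
import numpy as np

def gpa_init(prototypes: np.ndarray) -> np.ndarray:
    """Hypothetical sketch of GPA-style classifier initialization:
    project frozen class prototypes onto the unit hypersphere so the
    resulting weight vectors share one magnitude, keeping only their
    directions (disentangling magnitude imbalance from
    directional discriminability)."""
    # L2-normalize each prototype row; clip guards against zero vectors.
    norms = np.linalg.norm(prototypes, axis=1, keepdims=True)
    return prototypes / np.clip(norms, 1e-12, None)

# Toy example: three classes with imbalanced prototype magnitudes,
# as might arise from head vs. tail class frequencies.
protos = np.array([[4.0, 0.0, 0.0, 0.0],   # head class, large norm
                   [0.0, 2.0, 0.0, 0.0],
                   [0.0, 0.0, 0.5, 0.5]])  # tail class, small norm
W = gpa_init(protos)
# After alignment, every classifier weight vector has unit norm,
# so class logits are driven by direction rather than magnitude.
print(np.linalg.norm(W, axis=1))
```

Under this sketch, the magnitude disparity between head and tail prototypes is removed at initialization, while each class's direction in feature space is preserved.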
