

Poster

Task-Aware Prompt Gradient Projection for Parameter-Efficient Tuning Federated Class-Incremental Learning

Hualong Ke · Yachao Zhang · Jiangming Shi · Fangyong Wang · Yuan Xie · Yanyun Qu


Abstract:

Federated Continual Learning (FCL) has recently garnered significant attention due to its ability to continuously learn new tasks while protecting user privacy. However, existing Data-Free Knowledge Transfer (DFKT) methods require training the entire model, leading to high training and communication costs, while prompt pool-based methods, which access other task-specific prompts in the pool, may pose privacy leakage risks. To address these challenges, we propose a novel method: Task-aware Prompt gradient Projection and Replay (TPPR), which leverages visual prompts to build a parameter-efficient tuning architecture, thereby significantly reducing training and communication costs. Specifically, we propose the Task-Aware Prompt Gradient Projection (TAPGP) mechanism, which, from the perspective of protecting learned knowledge, balances the learning of task-agnostic and task-specific knowledge in a pool-free manner. In practice, we make the gradient of the deep prompts orthogonal to the virtual data and prompts of preceding tasks, which prevents the erosion of old-task knowledge while allowing the model to learn new information. Additionally, we introduce Dual-Level Prompt Replay (DLPR) based on exponential moving average to facilitate knowledge review at both inter-task and intra-task levels, effectively inheriting learned knowledge. Extensive experimental results demonstrate that our method effectively reduces model communication overhead and alleviates forgetting while fully protecting privacy. With only 1% of the training parameters, we achieve more than 5% accuracy improvement over SOTA methods in all settings with the same backbone.
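As a rough illustration of the two mechanisms named in the abstract, the sketch below shows (1) projecting a prompt gradient onto the complement of a subspace spanned by representations of preceding tasks, and (2) an exponential-moving-average prompt update for replay. This is a minimal sketch, not the authors' implementation: the function names (`project_gradient`, `ema_update`), the assumption that the old-task subspace is given as an orthonormal basis, and the momentum value are all illustrative assumptions.

```python
# Hypothetical sketch of the mechanisms described in the abstract; all names and
# details (orthonormal basis, momentum value) are assumptions for illustration,
# not the authors' released code.
import torch


def project_gradient(grad: torch.Tensor, old_basis: torch.Tensor) -> torch.Tensor:
    """Remove the component of `grad` that lies in the old-task subspace.

    grad:      (d,) gradient of a deep prompt parameter.
    old_basis: (d, k) orthonormal basis built from virtual data and prompts of
               preceding tasks (assumed given here).
    The returned gradient is orthogonal to that subspace, so the update does not
    overwrite directions that encode earlier-task knowledge.
    """
    projection = old_basis @ (old_basis.T @ grad)  # component inside the old-task subspace
    return grad - projection                       # keep only the orthogonal component


def ema_update(replay_prompt: torch.Tensor, current_prompt: torch.Tensor,
               momentum: float = 0.99) -> torch.Tensor:
    """Exponential-moving-average prompt used for replay.

    The replay prompt slowly tracks the current prompt, so previously learned
    behaviour remains available for review during later training.
    """
    return momentum * replay_prompt + (1.0 - momentum) * current_prompt
```

In such a setup, the projected gradient would replace the raw gradient of the deep prompts before the optimizer step, while the EMA prompt would be used to generate replay signals; how the old-task basis is constructed and how replay is applied at the inter-task versus intra-task level are specified in the paper, not in this sketch.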
