

Poster

Neural Architecture Search Driven by Locally Guided Diffusion for Personalized Federated Learning

Peng Liao · Xilu Wang · Yaochu Jin · WenLi Du · Han Hu


Abstract:

Neural Architecture Search (NAS) has gained significant attention in personalized federated learning (PFL) due to its ability to automatically design models tailored to individual clients. While most existing NAS approaches for PFL perform the architecture search on the server side, client-side NAS, in which architectures are optimized locally on each client, offers stronger privacy protection by eliminating the need to transmit sensitive model information. However, this paradigm remains underexplored and often suffers from suboptimal average client performance, primarily due to two limitations: (1) inefficient client-side search strategies caused by data isolation and restricted access to local architectures across clients, and (2) slow supernet convergence arising from server aggregation and local supernet training. To address these challenges, we propose a Personalized Federated Stochastic Differential Equation-based NAS (PerFedSDE-NAS). To achieve an effective local search, each client employs a guided diffusion model to generate promising personalized architectures tailored to its local data characteristics, while a performance predictor based on radial basis functions selects only the most promising subset of architectures for evaluation. To accelerate supernet convergence, each client maintains a supernet with an archive-driven training mechanism, and a novel model aggregation strategy further enhances weight convergence across FL rounds. We validate PerFedSDE-NAS on three NAS search spaces spanning convolutional neural networks and transformers, demonstrating broad applicability. Compared with traditional fixed-model and NAS-based PFL approaches, our method achieves state-of-the-art performance.
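The abstract describes two client-side components: a guided diffusion model that proposes candidate architectures, and a radial-basis-function performance predictor that filters them so only a promising subset is actually evaluated. Below is a minimal sketch of that filtering step, not the authors' implementation: it assumes architectures are encoded as fixed-length vectors, and all names (`select_top_candidates`, `archive_encodings`) and the Gaussian kernel choice are illustrative assumptions rather than details from the paper.

```python
# Sketch of RBF-surrogate filtering of diffusion-generated architectures.
# Assumption: each architecture is encoded as a fixed-length vector, and
# the client keeps an archive of architectures it has already evaluated.
import numpy as np
from scipy.interpolate import RBFInterpolator

def select_top_candidates(archive_encodings, archive_scores,
                          candidate_encodings, k=5):
    """Fit an RBF predictor on the client's evaluated archive and
    return indices of the k candidates with the highest predicted score.

    archive_encodings   : (P, D) encodings of locally evaluated architectures
    archive_scores      : (P,) measured local validation accuracies
    candidate_encodings : (Q, D) encodings sampled from the diffusion model
    """
    predictor = RBFInterpolator(
        archive_encodings, archive_scores,
        kernel="gaussian", epsilon=1.0,  # kernel choice is an assumption
    )
    predicted = predictor(candidate_encodings)
    # Keep only the top-k candidates for real (costly) evaluation.
    return np.argsort(predicted)[::-1][:k]

# Toy usage: 20 evaluated architectures, 100 diffusion-sampled candidates,
# each encoded as a 16-dimensional vector.
rng = np.random.default_rng(0)
archive = rng.random((20, 16))
scores = rng.random(20)
candidates = rng.random((100, 16))
print(select_top_candidates(archive, scores, candidates, k=5))
```

The surrogate is cheap to refit as the archive grows, which matches the abstract's motivation: under data isolation, each client can only afford to evaluate a small subset of the architectures the diffusion model proposes.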
