

Poster

FLSeg: Enhancing Privacy and Robustness in Federated Learning under Heterogeneous Data via Model Segmentation

Zichun Su · Zhi Lu · Yutong Wu · Renfei Shen · Songfeng Lu


Abstract: Federated Learning (FL) enables collaborative global model training without data sharing, but it faces critical challenges from privacy leakage and Byzantine attacks. Existing privacy-preserving robust FL frameworks suffer from three main limitations: high computational costs, restricted use of robust aggregation rules (RARs), and inadequate handling of data heterogeneity. To address these limitations, we propose the FLSeg framework, which leverages Segment Exchange and Segment Aggregation to avoid excessive encryption computation while allowing unrestricted use of any RAR. Additionally, a regularization term in local training balances personalization with global model performance, effectively adapting to heterogeneous data. Our theoretical and experimental analyses demonstrate FLSeg's semi-honest security and computational efficiency. FLSeg achieves client and server time complexities of $O(\ell)$ and $O(n\ell)$, and empirical results show significantly reduced computation time, e.g., 233 ms on the client and 78 ms per client on the server, compared to 1696 ms and 181 ms for ACORN (USENIX 23). Extensive experiments confirm FLSeg's robustness across diverse heterogeneous and adversarial scenarios, e.g., 64.59\% accuracy on non-IID CIFAR-10 with 20\% Min-Max attackers, compared to ACORN's 48.21\%.
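The abstract's Segment Aggregation idea (splitting each client's model update into segments and applying a robust aggregation rule per segment) can be sketched as follows. This is a minimal illustrative sketch only: the function names, the segment layout, and the choice of coordinate-wise median as the RAR are assumptions, not the paper's actual protocol, which also involves Segment Exchange and privacy-preserving masking not shown here.

```python
import numpy as np

def split_into_segments(params, n_segments):
    """Split a flat parameter vector into roughly equal segments.

    Hypothetical helper; the paper's actual segmentation scheme
    is not specified in the abstract.
    """
    return np.array_split(params, n_segments)

def segment_aggregate(client_params, n_segments, rar=np.median):
    """Aggregate client updates segment by segment with a RAR.

    client_params: list of flat parameter vectors, one per client.
    rar: any coordinate-wise robust aggregation rule applied across
         clients; median is a stand-in, since FLSeg claims to allow
         an arbitrary RAR.
    """
    segmented = [split_into_segments(p, n_segments) for p in client_params]
    aggregated = []
    for s in range(n_segments):
        # Stack the s-th segment from every client: shape (n_clients, seg_len)
        stack = np.stack([segs[s] for segs in segmented])
        aggregated.append(rar(stack, axis=0))
    return np.concatenate(aggregated)

# Toy example: four honest clients near 1.0, one Byzantine outlier.
clients = [np.full(8, 1.0 + 0.01 * i) for i in range(4)]
clients.append(np.full(8, 100.0))  # Byzantine update
agg = segment_aggregate(clients, n_segments=2)
# The per-segment median suppresses the outlier, so agg stays near 1.0.
```

The robustness claim in the abstract corresponds to the last step: with a coordinate-wise robust rule, a minority of Byzantine updates cannot drag the aggregate far from the honest clients' values.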
