Poster
Sibai: A Few-Shot Meta-Classifier for Poisoning Detection in Federated Learning
Melanie Götz · Torsten Krauß · Alexandra Dmitrienko
Federated Learning (FL) enables collaborative machine learning across decentralized clients without sharing raw data, offering enhanced privacy and improved performance. However, FL is vulnerable to poisoning attacks, which compromise model integrity through both untargeted performance degradation and targeted backdoor attacks. Detecting backdoors in FL is challenging due to their stealthy nature and the variability of local datasets. Existing defenses struggle to counter adaptive adversaries and to distinguish poisoning from genuine dataset anomalies. This paper introduces the Siamese Backdoor Inspector (Sibai), a novel meta-classifier-based poisoning defense for FL. Leveraging Siamese networks, a staple few-shot learning technique, Sibai effectively detects malicious contributions in various scenarios, including settings with strong variations between clients' datasets and encounters with adaptive adversaries. Sibai achieves high detection rates, prevents backdoors, minimizes performance impact, and outperforms eight recent defenses in terms of F1 score, poisoning prevention, and consistency across complex scenarios.
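To illustrate the general idea of a Siamese, few-shot meta-classifier over client contributions, the sketch below embeds flattened model updates and scores an incoming update by its distance to a handful of labeled reference updates. This is a minimal illustration under assumed dimensions, thresholds, and names (SiameseEncoder, score_update, etc.), not the authors' implementation or the exact Sibai pipeline.

```python
# Hedged sketch: a Siamese-style meta-classifier that scores flattened client
# model updates by embedding them and comparing against a few labeled
# reference updates (a few-shot support set). All dimensions and names are
# illustrative assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SiameseEncoder(nn.Module):
    """Maps a flattened model update to a small embedding space."""

    def __init__(self, input_dim: int, embed_dim: int = 32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, 128),
            nn.ReLU(),
            nn.Linear(128, embed_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings


def contrastive_loss(z1, z2, same_label, margin: float = 1.0):
    """Typical Siamese objective: pull same-class pairs together, push different-class pairs apart."""
    dist = F.pairwise_distance(z1, z2)
    return torch.mean(
        same_label * dist.pow(2)
        + (1 - same_label) * F.relu(margin - dist).pow(2)
    )


def score_update(encoder, update, support_updates, support_labels):
    """Few-shot scoring: compare distance to the nearest benign vs. poisoned support example."""
    with torch.no_grad():
        z = encoder(update.unsqueeze(0))          # (1, d)
        zs = encoder(support_updates)             # (k, d)
        dists = torch.cdist(z, zs).squeeze(0)     # (k,)
        benign_d = dists[support_labels == 0].min()
        poison_d = dists[support_labels == 1].min()
    # Positive score: the update is embedded closer to poisoned references.
    return (benign_d - poison_d).item()


if __name__ == "__main__":
    dim = 1000  # flattened update size (illustrative)
    enc = SiameseEncoder(dim)
    support = torch.randn(8, dim)                 # few labeled reference updates
    labels = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])
    candidate = torch.randn(dim)                  # incoming client update
    print("poisoning score:", score_update(enc, candidate, support, labels))
```

In such a setup, the few-shot property comes from the small support set: only a handful of labeled benign and poisoned reference updates are needed at scoring time, which is why a Siamese encoder (trained with a pairwise objective such as the contrastive loss above) is a natural fit for this kind of defense.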