Understanding Distributed Poisoning Attack in Federated Learning

Bibliographic Details
Published in: 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), pp. 233-239
Main authors: Cao, Di; Chang, Shan; Lin, Zhijian; Liu, Guohua; Sun, Donghong
Format: Conference proceeding
Language: English
Published: IEEE, 01 December 2019
Description
Summary: Federated learning is inherently vulnerable to poisoning attacks, since no training samples are released to, or checked by, a trustworthy authority. Poisoning attacks have been widely investigated in the centralized learning paradigm; however, distributed poisoning attacks, in which multiple attackers collude and each injects malicious training samples into its own local model, can intuitively cause far greater damage in federated learning. In this paper, through a real implementation of a federated learning system and of distributed poisoning attacks, we derive several observations about the relations among the number of poisoned training samples, the number of attackers, and the attack success rate. Moreover, we propose a scheme, Sniper, to eliminate poisoned local models from malicious participants during training. Sniper identifies benign local models by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating. Experimental results demonstrate the efficacy of Sniper: attack success rates drop to around 2% even when a third of the participants are attackers.
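
To make the clique-based filtering described above concrete, below is a minimal sketch in Python. It is not the paper's implementation; it assumes that local models arrive as flat NumPy parameter vectors, that two models are treated as consistent when the Euclidean distance between them is below an illustrative distance_threshold, and that only a maximum clique of mutually consistent models is averaged into the global model. All function and parameter names here are hypothetical.

import itertools
import numpy as np

def max_clique(adjacency):
    """Brute-force maximum clique; exponential in general, but workable for
    the small number of participants in a single federated round."""
    n = adjacency.shape[0]
    for size in range(n, 0, -1):  # try the largest candidate sets first
        for subset in itertools.combinations(range(n), size):
            if all(adjacency[i, j] for i, j in itertools.combinations(subset, 2)):
                return list(subset)
    return []

def sniper_aggregate(local_models, distance_threshold):
    """Average only the largest set of mutually close local models."""
    n = len(local_models)
    adjacency = np.zeros((n, n), dtype=bool)
    for i, j in itertools.combinations(range(n), 2):
        # Connect two models whose parameters are close in L2 distance.
        close = np.linalg.norm(local_models[i] - local_models[j]) <= distance_threshold
        adjacency[i, j] = adjacency[j, i] = close
    benign = max_clique(adjacency)
    # Suspected (poisoned) models fall outside the clique and are ignored.
    return np.mean([local_models[i] for i in benign], axis=0)

# Toy round: seven benign updates near 1.0, three colluding poisoned ones near 5.0.
rng = np.random.default_rng(0)
benign = [np.full(10, 1.0) + 0.05 * rng.standard_normal(10) for _ in range(7)]
poisoned = [np.full(10, 5.0) + 0.05 * rng.standard_normal(10) for _ in range(3)]
global_model = sniper_aggregate(benign + poisoned, distance_threshold=1.0)
print(global_model.round(2))  # stays near 1.0; the poisoned cluster is excluded

In the toy run, the seven benign updates form the largest mutually close set, so the three colluding poisoned updates fall outside the clique and are dropped from aggregation, mirroring the defense described in the summary (under the assumption, implicit in any clique-based filter, that benign updates cluster together).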
DOI: 10.1109/ICPADS47876.2019.00042