Understanding Distributed Poisoning Attack in Federated Learning

Detailed bibliography
Published in: 2019 IEEE 25th International Conference on Parallel and Distributed Systems (ICPADS), pp. 233–239
Main authors: Cao, Di; Chang, Shan; Lin, Zhijian; Liu, Guohua; Sun, Donghong
Medium: Conference paper
Language: English
Published: IEEE, 1 December 2019
Description
Summary: Federated learning is inherently vulnerable to poisoning attacks, since no training samples are released to or checked by a trustworthy authority. Poisoning attacks have been widely investigated in the centralized learning paradigm; however, distributed poisoning attacks, in which multiple attackers collude and each injects malicious training samples into its own local model, may intuitively cause far greater damage in federated learning. In this paper, through a real implementation of a federated learning system and of distributed poisoning attacks, we obtain several observations about the relations between the number of poisoned training samples, the number of attackers, and the attack success rate. Moreover, we propose a scheme, Sniper, to eliminate poisoned local models from malicious participants during training. Sniper identifies benign local models by solving a maximum clique problem, and suspected (poisoned) local models are ignored during global model updating. Experimental results demonstrate the efficacy of Sniper: attack success rates are reduced to around 2% even when a third of the participants are attackers.
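
The record above only summarizes Sniper. As an illustration of the idea (treat local models that lie close to one another as mutually consistent, find the largest such group via a maximum clique, and aggregate only its members), here is a minimal Python sketch. The distance threshold eps, the helper names, the Euclidean distance measure, and the brute-force clique search are illustrative assumptions, not the authors' implementation.

    import itertools

    import numpy as np


    def max_clique(adjacency):
        # Brute-force search, trying the largest subsets first; adequate
        # for the handful of participants in one federated round. A real
        # system would use a dedicated solver (e.g. networkx.find_cliques).
        n = len(adjacency)
        for size in range(n, 0, -1):
            for subset in itertools.combinations(range(n), size):
                if all(adjacency[i][j]
                       for i, j in itertools.combinations(subset, 2)):
                    return list(subset)
        return []


    def sniper_aggregate(local_models, eps):
        # Edge (i, j) iff local models i and j are within eps of each other.
        # The maximum clique is taken as the benign set; the remaining,
        # suspected (poisoned) models are ignored during the global update.
        n = len(local_models)
        adjacency = [[np.linalg.norm(local_models[i] - local_models[j]) <= eps
                      for j in range(n)] for i in range(n)]
        benign = max_clique(adjacency)
        return np.mean([local_models[i] for i in benign], axis=0), benign


    # Toy round: six benign participants whose updates cluster together,
    # plus three colluding attackers pushing the model somewhere else.
    rng = np.random.default_rng(0)
    updates = [rng.normal(0.0, 0.1, size=10) for _ in range(6)] \
            + [rng.normal(5.0, 0.1, size=10) for _ in range(3)]
    global_update, kept = sniper_aggregate(updates, eps=1.0)
    print("kept:", kept)  # expected: indices 0-5, the benign participants

The main design choice in such a filter is the closeness threshold: too small, and natural heterogeneity among benign participants breaks the clique apart; too large, and the colluding attackers' models join it.
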
DOI: 10.1109/ICPADS47876.2019.00042