Deep Model Poisoning Attack on Federated Learning

Published in: Future Internet, Volume 13, Issue 3, p. 73
Main authors: Zhou, Xingchen; Xu, Ming; Wu, Yiming; Zheng, Ning
Format: Journal Article
Language: English
Publication details: Basel: MDPI AG, 01.03.2021
ISSN: 1999-5903
Description
Summary: Federated learning is a novel distributed learning framework that enables thousands of participants to collaboratively construct a deep learning model. To protect the confidentiality of the training data, the information shared between the server and the participants is limited to model parameters. However, this setting is vulnerable to model poisoning attacks, since participants have permission to modify the model parameters. In this paper, we perform a systematic investigation of such threats in federated learning and propose a novel optimization-based model poisoning attack. Unlike existing methods, we primarily focus on the effectiveness, persistence, and stealth of attacks. Numerical experiments demonstrate that the proposed method not only achieves a high attack success rate but is also stealthy enough to bypass two existing defense methods.
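
The abstract notes that participants are free to modify the parameter updates they send to the server. The sketch below is only a minimal illustration of why plain federated averaging is exposed to such manipulation; it shows the generic scaling (model replacement) idea known from the federated learning literature, not the optimization-based attack proposed in this paper. All names (fedavg, poisoned_update, the toy 5-parameter model) are illustrative assumptions.

    # Illustrative sketch (not the paper's algorithm): one malicious
    # participant poisons a FedAvg round by submitting a scaled update.
    import numpy as np

    def fedavg(updates):
        """Server-side aggregation: plain average of client parameter updates."""
        return np.mean(updates, axis=0)

    def honest_update(global_params, rng):
        """Stand-in for local training: a small benign change to the global model."""
        return global_params + 0.01 * rng.standard_normal(global_params.shape)

    def poisoned_update(global_params, target_params, n_clients):
        """Malicious client: scale the update so that, after averaging with
        (n_clients - 1) roughly benign updates, the aggregate lands near the
        attacker's target parameters."""
        return n_clients * (target_params - global_params) + global_params

    rng = np.random.default_rng(0)
    n_clients = 10
    global_params = np.zeros(5)          # toy "model" with 5 parameters
    target_params = np.full(5, 0.5)      # attacker's desired model

    updates = [honest_update(global_params, rng) for _ in range(n_clients - 1)]
    updates.append(poisoned_update(global_params, target_params, n_clients))

    new_global = fedavg(updates)
    print(new_global)  # close to target_params despite 9 honest clients

Because the honest updates stay near the current global model, a single scaled contribution dominates the average; defenses therefore typically inspect update magnitude or similarity, which is what stealth-oriented attacks such as the one described in this article aim to evade.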
DOI: 10.3390/fi13030073