Types of Attacks against Federated Neural Networks and Protection Methods

Published in: Programming and Computer Software, Vol. 51, No. 6, pp. 409–414
Authors: Kostenko, V. A.; Selezneva, A. E.
Format: Journal Article
Language: English
Publication details: Moscow: Pleiades Publishing, 01.12.2025 (Springer Nature B.V.)
ISSN: 0361-7688, 1608-3261
Abstract: Federated learning is a technology for privacy-preserving learning in distributed storage systems. It makes it possible to build a shared predictive model while each participant keeps all of its data in its own storage. Several devices take part in training the shared model, and each device has its own unique data on which the neural network is trained. The devices interact only to adjust the weights of the shared model. Training on multiple devices creates many opportunities for attacks against this type of network. After training on a local device, the model parameters are sent over some communication channel to a central server that maintains the global model. Vulnerabilities in a federated network are therefore possible not only at the training stage on an individual device but also at the data exchange stage, which together increases the number of possible vulnerabilities of federated neural networks. As is known, not only neural networks but also other models can be used to build federated classifiers, so the types of attacks against a network also depend on the type of model used. Federated neural networks have a rather complicated design that differs from that of ordinary neural networks and other classifiers: training occurs on different devices, both neural networks and simpler algorithms can be used, and data must be transferred between devices, all of which exposes them to various types of attacks. These attacks come down to several main types that exploit classifier vulnerabilities. Protection against them can be implemented by improving the architecture of the classifier itself and by encrypting the transmitted data.
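
To make the exchange described in the abstract concrete, below is a minimal, illustrative sketch of one federated averaging (FedAvg) round in Python. This is not code from the paper: the function names (local_update, federated_average), the logistic-regression model standing in for a neural network, and all data are assumptions made purely for the example.

    import numpy as np

    def local_update(weights, X, y, lr=0.1, epochs=5):
        """One client's local training pass. A plain logistic-regression
        gradient step stands in for a real neural-network optimizer."""
        w = weights.copy()
        for _ in range(epochs):
            preds = 1.0 / (1.0 + np.exp(-(X @ w)))   # sigmoid predictions
            grad = X.T @ (preds - y) / len(y)        # gradient of the log-loss
            w -= lr * grad
        return w

    def federated_average(client_weights, client_sizes):
        """Server-side FedAvg step: average the clients' weight vectors,
        weighting each client by the size of its local data set."""
        total = sum(client_sizes)
        return sum(w * (n / total) for w, n in zip(client_weights, client_sizes))

    # Hypothetical training rounds: each client's raw data never leaves
    # its device; only the updated weight vectors travel to the server,
    # which is exactly the exchange stage the abstract identifies as an
    # additional attack surface.
    rng = np.random.default_rng(0)
    global_w = np.zeros(4)
    clients = [(rng.normal(size=(50, 4)), rng.integers(0, 2, size=50))
               for _ in range(3)]

    for _ in range(10):
        updates = [local_update(global_w, X, y) for X, y in clients]
        global_w = federated_average(updates, [len(y) for _, y in clients])

In a real deployment, the updates passing between local_update and federated_average would additionally be protected, for example by encrypting the communication channel, since the abstract singles out both the local training stage and the data exchange stage as points of vulnerability.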
DOI: 10.1134/S0361768825700276