Types of Attacks against Federated Neural Networks and Protection Methods

Bibliographic Details
Published in: Programming and Computer Software, Vol. 51, No. 6, pp. 409–414
Main Authors: Kostenko, V. A., Selezneva, A. E.
Format: Journal Article
Language: English
Published: Moscow: Pleiades Publishing / Springer Nature B.V., 01.12.2025
ISSN: 0361-7688, 1608-3261
Description
Summary: Federated learning is a technology for privacy-preserving learning in distributed storage systems. It allows participants to build a shared predictive model while each participant keeps all of its data in its own storage system. Several devices take part in training the shared model, and each device has its own unique data on which the neural network is trained. The devices interact only to adjust the weights of the shared model. Training on multiple devices creates many attack opportunities against this type of network: after training on a local device, the model parameters are sent over some communication channel to a central server that holds the global model. Vulnerabilities in a federated network therefore arise not only at the training stage on an individual device but also at the data exchange stage, which together increases the number of possible vulnerabilities of federated neural networks. Not only neural networks but also other models can be used to build federated classifiers, so the types of attacks against a network also depend on the type of model used. Federated neural networks have a rather complicated design that differs from ordinary neural networks and other classifiers: training occurs on different devices, both neural networks and simpler algorithms can be used as local models, and data transfer between devices must be ensured, each of which exposes the system to different types of attacks. All attacks come down to several main types that exploit classifier vulnerabilities. Protection against them can be implemented by improving the architecture of the classifier itself and by encrypting the transmitted data.
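The round the abstract describes (local training on private data, then exchange of weights with a central server) corresponds to federated averaging. Below is a minimal sketch of such a round in plain Python/NumPy; the function names (local_update, fed_avg), the linear model, and the size-weighted averaging are illustrative assumptions, not taken from the paper.

```python
import numpy as np

# Minimal federated-averaging sketch of the round described in the
# abstract: each device trains on its own private data, and only the
# resulting weights travel to the server. All names and the choice of
# a linear model are illustrative, not from the paper.

def local_update(w, X, y, lr=0.1, epochs=5):
    """One device: gradient steps for a linear model on its private data."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (X @ w - y) / len(y)   # mean-squared-error gradient
        w -= lr * grad
    return w

def fed_avg(updates, sizes):
    """Server: average client weights, weighted by local dataset size."""
    total = sum(sizes)
    return sum(n / total * w for w, n in zip(updates, sizes))

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
clients = []
for _ in range(3):                          # three devices with disjoint data
    X = rng.normal(size=(50, 2))
    y = X @ true_w + 0.1 * rng.normal(size=50)
    clients.append((X, y))

w = np.zeros(2)                             # shared (global) model
for _ in range(20):                         # communication rounds
    updates = [local_update(w, X, y) for X, y in clients]
    w = fed_avg(updates, [len(y) for _, y in clients])

print("recovered weights:", w)              # close to true_w
```

The sketch also makes the attack surface the abstract points to concrete: a malicious client could, for example, scale or replace its update before sending it (model poisoning), and an eavesdropper on the exchange step could inspect or tamper with the weights, which is why the paper discusses both architectural defenses and encryption of the transmitted data.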
DOI: 10.1134/S0361768825700276