Fast deep autoencoder for federated learning



Published in: Pattern Recognition, Volume 143, p. 109805
Main Authors: Novoa-Paradela, David; Fontenla-Romero, Oscar; Guijarro-Berdiñas, Bertha
Format: Journal Article
Language: English
Published: Elsevier Ltd, 01.11.2023
ISSN:0031-3203
Description
Summary:
• A fast and privacy-preserving training algorithm for deep autoencoders is proposed.
• The encoder block of the autoencoder is trained using Singular Value Decomposition.
• The decoder block of the autoencoder is trained layer by layer.
• The non-iterative nature of the proposed training algorithm makes it faster than other approaches.
• The method is especially suitable for federated learning and edge computing environments.

This paper presents a novel, fast and privacy-preserving implementation of deep autoencoders. Unlike traditional neural networks, DAEF (Deep AutoEncoder for Federated learning) trains a deep autoencoder network in a non-iterative way, which drastically reduces training time. Training can be performed incrementally, in parallel and in a distributed fashion, and, thanks to the method's mathematical formulation, the information exchanged does not endanger the privacy of the training data. The method has been evaluated and compared with other state-of-the-art autoencoders, showing promising results in terms of accuracy, speed and use of available resources. This makes DAEF a valid method for edge computing and federated learning, in addition to other classic machine learning scenarios.
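To make the non-iterative idea concrete, the following is a minimal, hedged sketch of how an autoencoder layer can be fitted without gradient descent: an encoder obtained from the SVD of the data (a PCA-style projection) and a decoder solved as a linear least-squares problem. This is only an illustrative analogue of the approach the abstract describes, not the authors' actual DAEF algorithm; all variable names and the single-layer setup are assumptions for the example.

```python
import numpy as np

# Illustrative sketch only (not the DAEF algorithm itself):
# a one-layer linear autoencoder fitted in closed form.

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 10))           # 200 samples, 10 features
mu = X.mean(axis=0)
Xc = X - mu                              # center the data

# Encoder: project onto the top-k right singular vectors of the data.
k = 3
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
W_enc = Vt[:k].T                         # (10, 3) encoder weights
Z = Xc @ W_enc                           # latent codes, shape (200, 3)

# Decoder: solve a linear least-squares problem mapping codes back
# to the (centered) inputs -- no iterative training loop involved.
W_dec, *_ = np.linalg.lstsq(Z, Xc, rcond=None)
X_hat = Z @ W_dec + mu                   # reconstruction

# Relative reconstruction error; below 1.0 means the projection
# recovers part of the data's variance.
err = np.linalg.norm(X - X_hat) / np.linalg.norm(X)
```

Because every step is a deterministic linear-algebra solve rather than an optimization loop, this kind of fit is fast and reproducible, which is the property the abstract exploits for federated and edge settings; the paper itself extends the idea to deep, nonlinear, privacy-preserving training.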
DOI:10.1016/j.patcog.2023.109805