Accelerating Federated Learning via Modified Local Model Update Based on Individual Performance Metric

Detailed bibliography
Published in: 2023 3rd International Conference on Electrical, Computer, Communications and Mechatronics Engineering (ICECCME), pp. 1-6
Main authors: Barhoush, Mahdi; Ayad, Ahmad; Schmeink, Anke
Format: Conference paper
Language: English
Published: IEEE, 19.07.2023
Description
Summary: Privacy-preserving federated learning (FL) is one of the most widely used distributed training algorithms. It is most effective when the client datasets are independent, identically distributed (IID), and balanced. In real-world settings, however, client datasets are often non-IID, with varying data and feature distributions, which degrades FL performance. To address this challenge, this work applies a modified local model update mechanism based on an individual performance metric, computed for every client model under training on a reference test dataset that resides on the server. The proposed modification improves each client model individually, regardless of its data distribution: if a client model predicts certain classes poorly, it receives a larger weighting factor from other models that predict those classes better. Empirical studies of the modified algorithm with the FedAvg and FedProx aggregation methods, under IID and non-IID data distributions on the MNIST and CIFAR10 datasets, show that the approach speeds up training, increases overall system accuracy, and reduces the number of communication rounds.
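
The abstract only outlines the mechanism, so the following is a minimal Python sketch of one plausible reading of it, not the authors' published algorithm: the function names (per_class_accuracy, modified_local_update), the choice of per-class accuracy as the individual performance metric, and the exact blending rule are all illustrative assumptions.

    import numpy as np

    def per_class_accuracy(preds, y_ref, num_classes):
        # Accuracy of one client model on each class of the server-side
        # reference test set; preds are the model's predicted labels there.
        acc = np.zeros(num_classes)
        for c in range(num_classes):
            mask = (y_ref == c)
            if mask.any():
                acc[c] = np.mean(preds[mask] == c)
        return acc

    def modified_local_update(client_weights, client_accs, eps=1e-8):
        # client_weights: list of flat parameter vectors, one per client.
        # client_accs:    (num_clients, num_classes) array of per-class
        #                 accuracies measured on the server's reference set.
        updated = []
        for i in range(len(client_weights)):
            # How much help client i needs on each class (1 = total failure).
            need = 1.0 - client_accs[i]
            # Score every model by how well it covers i's weak classes;
            # a peer that is strong where i is weak scores highly.
            scores = client_accs @ need
            coeffs = scores / (scores.sum() + eps)  # convex combination
            updated.append(sum(c * w for c, w in zip(coeffs, client_weights)))
        return updated

Under this reading, a client whose model fails on some classes receives larger coefficients from peers that predict those classes well, matching the weighting behaviour the abstract describes; the blended parameters would then serve as that client's starting point for the next local training round before FedAvg- or FedProx-style aggregation.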
DOI: 10.1109/ICECCME57830.2023.10253070