PPEFL: An Edge Federated Learning Architecture with Privacy-Preserving Mechanism


Bibliographic Details
Published in: Wireless Communications and Mobile Computing, Vol. 2022, No. 1
Main Authors: Liu, Zhenpeng; Gao, Zilin; Wang, Jingyi; Liu, Qiannan; Wei, Jianhang
Format: Journal Article
Language: English
Published: Oxford: Hindawi; John Wiley & Sons, Inc., 2022
Subjects:
ISSN: 1530-8669, 1530-8677
Description
Summary: The emergence of federated learning makes up for some shortcomings of machine learning: its distributed machine-learning paradigm can effectively solve the problem of data islands, allowing users to model collaboratively without sharing data. Clients only need to train locally and upload model parameters. However, the computational power and resources of local users are frequently restricted, while machine learning consumes a large amount of computing resources and generates enormous communication overhead. Edge computing is characterized by low latency and low bandwidth consumption, which makes it possible to offload complicated computing tasks from mobile devices and execute them on the edge server. This paper is dedicated to reducing the communication cost of federated learning, improving its communication efficiency, and providing it with a degree of privacy protection. An edge federated learning architecture with a privacy-preserving mechanism, named PPEFL, is proposed. Through the cooperation of the cloud server, the edge server, and the edge devices, training proceeds in two stages: first, the edge devices and the edge server cooperate to train and update the local model, performing several lightweight local aggregations at the edge server before uploading to the cloud server; second, the cloud server aggregates the uploaded parameters and updates the global model, repeating until the model converges. The experimental results show that the architecture performs well in terms of model accuracy and communication consumption and can effectively protect the privacy of edge federated learning.
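The following Python listing is a minimal sketch of the two-stage aggregation loop described in the summary (several lightweight edge-level aggregations, followed by a cloud-level aggregation), assuming a simple FedAvg-style weighted average and a toy least-squares model. All names (local_train, weighted_average, ppefl_round, edge_rounds) and the training objective are illustrative assumptions, not the paper's actual implementation, and the privacy-preserving mechanism itself is omitted.

import numpy as np

def local_train(params, data, lr=0.1):
    # One full-batch gradient step on a toy least-squares objective
    # (stand-in for each edge device's real local training).
    X, y = data
    grad = X.T @ (X @ params - y) / len(y)
    return params - lr * grad

def weighted_average(param_list, weights):
    # FedAvg-style aggregation: weight each model by its sample count.
    weights = np.asarray(weights, dtype=float)
    weights /= weights.sum()
    return sum(w * p for w, p in zip(weights, param_list))

def ppefl_round(global_params, edge_groups, edge_rounds=3):
    # One global round: each edge server runs several lightweight local
    # aggregations over its devices (stage 1), then the cloud aggregates
    # the edge servers' models (stage 2).
    edge_models, edge_sizes = [], []
    for devices in edge_groups:                 # one entry per edge server
        edge_params = global_params.copy()
        for _ in range(edge_rounds):            # stage 1: edge-level aggregation
            updates = [local_train(edge_params, d) for d in devices]
            sizes = [len(d[1]) for d in devices]
            edge_params = weighted_average(updates, sizes)
        edge_models.append(edge_params)
        edge_sizes.append(sum(len(d[1]) for d in devices))
    return weighted_average(edge_models, edge_sizes)   # stage 2: cloud aggregation

# Toy usage: 2 edge servers, each with 2 devices holding synthetic data.
rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])
def make_device(n=50):
    X = rng.normal(size=(n, 2))
    return X, X @ true_w + 0.01 * rng.normal(size=n)

edge_groups = [[make_device(), make_device()] for _ in range(2)]
w = np.zeros(2)
for _ in range(20):                             # global rounds until convergence
    w = ppefl_round(w, edge_groups)
print("estimated parameters:", w)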
DOI: 10.1155/2022/1657558